Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Dmitriy Setrakyan
On Thu, Dec 1, 2016 at 10:28 AM, Denis Magda  wrote:

> I'm not saying we need it; I just mean that it’s still being delivered as
> a part of the distribution.
>
> Let’s sweep it out once the Web Console-based approach is documented
> https://issues.apache.org/jira/browse/IGNITE-4349 <
> https://issues.apache.org/jira/browse/IGNITE-4349>
>

Sounds good, thanks!


>
> —
> Denis
>
> > On Nov 30, 2016, at 9:51 PM, Dmitriy Setrakyan 
> wrote:
> >
> >>
> >> Yes, sure, the schema-import utility is still being used and maintained.
> >>
> >
> > Why do we need it?
>
>


Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Denis Magda
I'm not saying we need it; I just mean that it’s still being delivered as a
part of the distribution.

Let’s sweep it out once the Web Console-based approach is documented
https://issues.apache.org/jira/browse/IGNITE-4349 


—
Denis

> On Nov 30, 2016, at 9:51 PM, Dmitriy Setrakyan  wrote:
> 
>> 
>> Yes, sure, the schema-import utility is still being used and maintained.
>> 
> 
> Why do we need it?



[jira] [Created] (IGNITE-4349) Discontinue the schema-import utility

2016-11-30 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4349:
---

 Summary: Discontinue the schema-import utility
 Key: IGNITE-4349
 URL: https://issues.apache.org/jira/browse/IGNITE-4349
 Project: Ignite
  Issue Type: Task
Reporter: Denis Magda
Assignee: Alexey Kuznetsov
 Fix For: 2.0


Let's discontinue the maintenance of the schema-import utility in favor of Web
Console, which has the same capability.

The schema-import utility should be removed from sources once the following Web 
Console documentation is added 
https://issues.apache.org/jira/browse/IGNITE-4348



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ignite-spark module in Hadoop Accelerator

2016-11-30 Thread Dmitriy Setrakyan
Guys,

I just downloaded the hadoop accelerator and here are the differences from
the fabric edition that jump at me right away:

   - the "bin/" folder has "setup-hadoop" scripts
   - the "config/" folder has "hadoop" subfolder with necessary
   hadoop-related configuration
   - the "lib/" folder has far fewer libraries than in fabric, simply
   because many dependencies don't make sense in a hadoop environment

I currently don't see how we can merge the hadoop accelerator with standard
fabric edition.
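The layout differences listed above can be double-checked mechanically; a minimal sketch, assuming the two downloads are unpacked side by side (the directory names in the usage comment are placeholders, not real paths):

```shell
# Sketch: diff the file layout of two unpacked Ignite distributions.
# The two directory arguments stand in for the fabric and hadoop downloads.
compare_editions() {
    diff <(cd "$1" && find . | sort) <(cd "$2" && find . | sort)
}

# Example (hypothetical directory names):
# compare_editions apache-ignite-fabric-bin apache-ignite-hadoop-bin
```

Lines prefixed with `>` in the diff output are files present only in the second tree, e.g. the "setup-hadoop" scripts and the "config/hadoop" folder.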

D.

On Thu, Dec 1, 2016 at 9:54 AM, Denis Magda  wrote:

> Vovan,
>
> As one of hadoop maintainers, please share your point of view on this.
>
> —
> Denis
>
> > On Nov 30, 2016, at 10:49 PM, Sergey Kozlov 
> wrote:
> >
> > Denis
> >
> > I agree that at the moment there's no reason to split into fabric and
> > hadoop editions.
> >
> > On Thu, Dec 1, 2016 at 4:45 AM, Denis Magda  wrote:
> >
> >> Hadoop Accelerator doesn’t require any additional libraries compared to
> >> those we have in the fabric build. It only lacks some of them, as Val
> >> mentioned below.
> >>
> >> Wouldn’t it be better to discontinue the Hadoop Accelerator edition and
> >> simply deliver the hadoop jar and its configs as part of the fabric?
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 27, 2016, at 3:12 PM, Dmitriy Setrakyan 
> >> wrote:
> >>>
> >>> Separate edition for the Hadoop Accelerator was primarily driven by the
> >>> default libraries. Hadoop Accelerator requires many more libraries as
> >> well
> >>> as configuration settings compared to the standard fabric download.
> >>>
> >>> Now, as far as spark integration is concerned, I am not sure which
> >> edition
> >>> it belongs in, Hadoop Accelerator or standard fabric.
> >>>
> >>> D.
> >>>
> >>> On Sat, Nov 26, 2016 at 7:39 PM, Denis Magda 
> wrote:
> >>>
>  *Dmitriy*,
> 
>  I do believe that you should know why the community decided to create a
>  separate edition for the Hadoop Accelerator. What was the reason for that?
>  Presently, as I see it, it brings more confusion and difficulties rather
>  than benefit.
> 
>  —
>  Denis
> 
>  On Nov 26, 2016, at 2:14 PM, Konstantin Boudnik 
> wrote:
> 
>  In fact I very much agree with you. Right now, running the "accelerator"
>  component in the Bigtop distro gives one a pretty much complete fabric
>  anyway. But in order to make just an accelerator component we perform
>  quite a bit of voodoo magic during the packaging stage of the Bigtop
>  build, shuffling jars from here and there. And that's quite crazy,
>  honestly ;)
> 
>  Cos
> 
>  On Mon, Nov 21, 2016 at 03:33PM, Valentin Kulichenko wrote:
> 
>  I tend to agree with Denis. I see only these differences between
> Hadoop
>  Accelerator and Fabric builds (correct me if I miss something):
> 
>  - Limited set of available modules and no optional modules in Hadoop
>  Accelerator.
>  - No ignite-hadoop module in Fabric.
>  - Additional scripts, configs and instructions included in Hadoop
>  Accelerator.
> 
>  And the list of included modules frankly looks very weird. Here are only
>  some of the issues I noticed:
> 
>  - ignite-indexing and ignite-spark are mandatory. Even if we need them
>  for Hadoop Acceleration (which I doubt), are they really required, or can
>  they be optional?
>  - We force the use of the ignite-log4j module without providing other
>  logger options (e.g., SLF4J).
>  - We don't include the ignite-aws module. How can one use Hadoop
>  Accelerator with S3 discovery?
>  - Etc.
> 
>  It seems to me that if we try to fix all these issues, there will be
>  virtually no difference between the Fabric and Hadoop Accelerator builds
>  except a couple of scripts and config files. If so, there is no reason to
>  have two builds.
> 
>  -Val
> 
>  On Mon, Nov 21, 2016 at 3:13 PM, Denis Magda 
> wrote:
> 
>  On a separate note, in Bigtop we have started looking into changing the
>  way we deliver Ignite, and we'll likely start offering the whole 'data
>  fabric' experience instead of the mere "hadoop-acceleration".
> 
> 
>  And you still will be using hadoop-accelerator libs of Ignite, right?
> 
>  I’m wondering whether there is a need to keep releasing Hadoop Accelerator
>  as a separate delivery.
>  What if we start releasing the accelerator as part of the standard fabric
>  binary, putting the hadoop-accelerator libs under the ‘optional’ folder?
> 
>  —
>  Denis
> 
>  On Nov 21, 2016, at 12:19 PM, Konstantin Boudnik 
> >> wrote:
> 
>  What Denis said: spark has been added to the Hadoop accelerator as a
> way
> 

Re: ignite-spark module in Hadoop Accelerator

2016-11-30 Thread Denis Magda
Vovan,

As one of hadoop maintainers, please share your point of view on this.

—
Denis

> On Nov 30, 2016, at 10:49 PM, Sergey Kozlov  wrote:
> 
> Denis
> 
> I agree that at the moment there's no reason to split into fabric and
> hadoop editions.
> 
> On Thu, Dec 1, 2016 at 4:45 AM, Denis Magda  wrote:
> 
>> Hadoop Accelerator doesn’t require any additional libraries compared to
>> those we have in the fabric build. It only lacks some of them, as Val
>> mentioned below.
>> 
>> Wouldn’t it be better to discontinue the Hadoop Accelerator edition and
>> simply deliver the hadoop jar and its configs as part of the fabric?
>> 
>> —
>> Denis
>> 
>>> On Nov 27, 2016, at 3:12 PM, Dmitriy Setrakyan 
>> wrote:
>>> 
>>> Separate edition for the Hadoop Accelerator was primarily driven by the
>>> default libraries. Hadoop Accelerator requires many more libraries as
>> well
>>> as configuration settings compared to the standard fabric download.
>>> 
>>> Now, as far as spark integration is concerned, I am not sure which
>> edition
>>> it belongs in, Hadoop Accelerator or standard fabric.
>>> 
>>> D.
>>> 
>>> On Sat, Nov 26, 2016 at 7:39 PM, Denis Magda  wrote:
>>> 
 *Dmitriy*,
 
 I do believe that you should know why the community decided to create a
 separate edition for the Hadoop Accelerator. What was the reason for that?
 Presently, as I see it, it brings more confusion and difficulties rather
 than benefit.
 
 —
 Denis
 
 On Nov 26, 2016, at 2:14 PM, Konstantin Boudnik  wrote:
 
 In fact I very much agree with you. Right now, running the "accelerator"
 component in the Bigtop distro gives one a pretty much complete fabric
 anyway. But in order to make just an accelerator component we perform quite
 a bit of voodoo magic during the packaging stage of the Bigtop build,
 shuffling jars from here and there. And that's quite crazy, honestly ;)
 
 Cos
 
 On Mon, Nov 21, 2016 at 03:33PM, Valentin Kulichenko wrote:
 
 I tend to agree with Denis. I see only these differences between Hadoop
 Accelerator and Fabric builds (correct me if I miss something):
 
 - Limited set of available modules and no optional modules in Hadoop
 Accelerator.
 - No ignite-hadoop module in Fabric.
 - Additional scripts, configs and instructions included in Hadoop
 Accelerator.
 
 And the list of included modules frankly looks very weird. Here are only
 some of the issues I noticed:
 
 - ignite-indexing and ignite-spark are mandatory. Even if we need them
 for Hadoop Acceleration (which I doubt), are they really required, or can
 they be optional?
 - We force the use of the ignite-log4j module without providing other
 logger options (e.g., SLF4J).
 - We don't include the ignite-aws module. How can one use Hadoop
 Accelerator with S3 discovery?
 - Etc.
 
 It seems to me that if we try to fix all these issues, there will be
 virtually no difference between the Fabric and Hadoop Accelerator builds
 except a couple of scripts and config files. If so, there is no reason to
 have two builds.
 
 -Val
 
 On Mon, Nov 21, 2016 at 3:13 PM, Denis Magda  wrote:
 
 On a separate note, in Bigtop we have started looking into changing the way
 we deliver Ignite, and we'll likely start offering the whole 'data fabric'
 experience instead of the mere "hadoop-acceleration".
 
 
 And you still will be using hadoop-accelerator libs of Ignite, right?
 
 I’m wondering whether there is a need to keep releasing Hadoop Accelerator
 as a separate delivery.
 What if we start releasing the accelerator as part of the standard fabric
 binary, putting the hadoop-accelerator libs under the ‘optional’ folder?
 
 —
 Denis
 
 On Nov 21, 2016, at 12:19 PM, Konstantin Boudnik 
>> wrote:
 
 What Denis said: spark has been added to the Hadoop accelerator as a way to
 boost the performance of more than just the MR compute of the Hadoop stack,
 IIRC.
 
 For what it's worth, Spark is considered a part of Hadoop at large.
 
 On a separate note, in Bigtop we have started looking into changing the way
 we deliver Ignite, and we'll likely start offering the whole 'data fabric'
 experience instead of the mere "hadoop-acceleration".
 
 Cos
 
 On Mon, Nov 21, 2016 at 09:54AM, Denis Magda wrote:
 
 Val,
 
 The Ignite Hadoop module includes not only the map-reduce accelerator but
 also the Ignite Hadoop File System component. The latter can be used in
 deployments like HDFS + IGFS + Ignite Spark + Spark.
 
 Considering this I’m for the second 

Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Sergey Kozlov
Hi

+1 for removing the schema-import utility.

On Thu, Dec 1, 2016 at 8:51 AM, Dmitriy Setrakyan 
wrote:

> On Thu, Dec 1, 2016 at 8:10 AM, Denis Magda  wrote:
>
> > Yes, sure, the schema-import utility is still being used and maintained.
> >
>
> Why do we need it?
>
>
> >
> > I would create a separate page for Web Console for now and discontinue
> the
> > schema-import utility only when Web Console is no longer in the beta
> state
> > https://issues.apache.org/jira/browse/IGNITE-4348 <
> > https://issues.apache.org/jira/browse/IGNITE-4348>
> >
>
> Agree that we need a page demonstrating RDBMS integration from the
> web-console.
>
> I also think that our Web Console is pretty stable right now. Definitely
> stable enough to use it for the RDBMS integration. I would discontinue the
> legacy import tool because we want to drive more users to the web-console,
> which has a lot more functionality.
>
>
> >
> > —
> > Denis
> >
> > > On Nov 30, 2016, at 8:14 PM, Dmitriy Setrakyan 
> > wrote:
> > >
> > > Do we even have this tool still? I would update the documentation to
> use
> > > the Web Console.
> > >
> > > On Thu, Dec 1, 2016 at 4:36 AM, Denis Magda  wrote:
> > >
> > >> Dmitriy,
> > >>
> > >> Agree. The documentation is already there.
> > >> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence <
> > >> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence>
> > >>
> > >> However, the documentation is presently based on the legacy schema-import
> > >> tool. It’s reasonable to refine the documentation to rely on the similar
> > >> Web Console capability. The question is: do we want to keep maintaining
> > >> the schema-import documentation at least for some time, or should this
> > >> tool be fully discontinued in favor of Web Console?
> > >>
> > >> Also, Dmitriy, please take a look at the following. The integrations’
> > >> documentation style is absolutely different from the one we have for
> > >> other documentation domains:
> >  Dmitriy, looks like we have to upgrade the readme.io subscription to
> >  “Developer Hub” for the new documentation domain, because now there is
> >  no way to change the appearance. Could you handle this?
> > >>
> > >>
> > >> —
> > >> Denis
> > >>
> > >>> On Nov 29, 2016, at 10:01 PM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > >> wrote:
> > >>>
> > >>> Denis, do you think RDBMS integrations should be there as well?
> > >>>
> > >>> On Wed, Nov 30, 2016 at 4:22 AM, Denis Magda 
> > wrote:
> > >>>
> >  Igniters,
> > 
>  Ignite can already boast a number of integrations with other products and
>  technologies. Up to this point all the integrations were documented on our
>  main readme.io, making the overall documentation a bit messy.
> 
>  I’ve decoupled the integrations from the core modules by moving them to a
>  new documentation domain, “Apache Ignite Integrations” <
>  https://apacheignite-mix.readme.io/docs>. As a result, integrations such as
>  the Cassandra CacheStore, OSGi support, Zeppelin and a bunch of the
>  streamers reside there.
> > 
>  Dmitriy, looks like we have to upgrade the readme.io subscription to
>  “Developer Hub” for the new documentation domain, because now there is
>  no way to change the appearance. The appearance theme settings are poor.
>  Could you handle this?
> > 
> >  —
> >  Denis
> > >>
> > >>
> >
> >
>



-- 
Sergey Kozlov
GridGain Systems
www.gridgain.com


Re: ignite-spark module in Hadoop Accelerator

2016-11-30 Thread Sergey Kozlov
Denis

I agree that at the moment there's no reason to split into fabric and
hadoop editions.

On Thu, Dec 1, 2016 at 4:45 AM, Denis Magda  wrote:

> Hadoop Accelerator doesn’t require any additional libraries compared to
> those we have in the fabric build. It only lacks some of them, as Val
> mentioned below.
>
> Wouldn’t it be better to discontinue the Hadoop Accelerator edition and
> simply deliver the hadoop jar and its configs as part of the fabric?
>
> —
> Denis
>
> > On Nov 27, 2016, at 3:12 PM, Dmitriy Setrakyan 
> wrote:
> >
> > Separate edition for the Hadoop Accelerator was primarily driven by the
> > default libraries. Hadoop Accelerator requires many more libraries as
> well
> > as configuration settings compared to the standard fabric download.
> >
> > Now, as far as spark integration is concerned, I am not sure which
> edition
> > it belongs in, Hadoop Accelerator or standard fabric.
> >
> > D.
> >
> > On Sat, Nov 26, 2016 at 7:39 PM, Denis Magda  wrote:
> >
> >> *Dmitriy*,
> >>
> >> I do believe that you should know why the community decided to create a
> >> separate edition for the Hadoop Accelerator. What was the reason for that?
> >> Presently, as I see it, it brings more confusion and difficulties rather
> >> than benefit.
> >>
> >> —
> >> Denis
> >>
> >> On Nov 26, 2016, at 2:14 PM, Konstantin Boudnik  wrote:
> >>
> >> In fact I very much agree with you. Right now, running the "accelerator"
> >> component in the Bigtop distro gives one a pretty much complete fabric
> >> anyway. But in order to make just an accelerator component we perform
> >> quite a bit of voodoo magic during the packaging stage of the Bigtop
> >> build, shuffling jars from here and there. And that's quite crazy,
> >> honestly ;)
> >>
> >> Cos
> >>
> >> On Mon, Nov 21, 2016 at 03:33PM, Valentin Kulichenko wrote:
> >>
> >> I tend to agree with Denis. I see only these differences between Hadoop
> >> Accelerator and Fabric builds (correct me if I miss something):
> >>
> >>  - Limited set of available modules and no optional modules in Hadoop
> >>  Accelerator.
> >>  - No ignite-hadoop module in Fabric.
> >>  - Additional scripts, configs and instructions included in Hadoop
> >>  Accelerator.
> >>
> >> And the list of included modules frankly looks very weird. Here are only
> >> some of the issues I noticed:
> >>
> >>  - ignite-indexing and ignite-spark are mandatory. Even if we need them
> >>  for Hadoop Acceleration (which I doubt), are they really required, or
> >>  can they be optional?
> >>  - We force the use of the ignite-log4j module without providing other
> >>  logger options (e.g., SLF4J).
> >>  - We don't include the ignite-aws module. How can one use Hadoop
> >>  Accelerator with S3 discovery?
> >>  - Etc.
> >>
> >> It seems to me that if we try to fix all these issues, there will be
> >> virtually no difference between the Fabric and Hadoop Accelerator builds
> >> except a couple of scripts and config files. If so, there is no reason to
> >> have two builds.
> >>
> >> -Val
> >>
> >> On Mon, Nov 21, 2016 at 3:13 PM, Denis Magda  wrote:
> >>
> >> On a separate note, in Bigtop we have started looking into changing the
> >> way we deliver Ignite, and we'll likely start offering the whole 'data
> >> fabric' experience instead of the mere "hadoop-acceleration".
> >>
> >>
> >> And you still will be using hadoop-accelerator libs of Ignite, right?
> >>
> >> I’m wondering whether there is a need to keep releasing Hadoop Accelerator
> >> as a separate delivery.
> >> What if we start releasing the accelerator as part of the standard fabric
> >> binary, putting the hadoop-accelerator libs under the ‘optional’ folder?
> >>
> >> —
> >> Denis
> >>
> >> On Nov 21, 2016, at 12:19 PM, Konstantin Boudnik 
> wrote:
> >>
> >> What Denis said: spark has been added to the Hadoop accelerator as a way
> >> to boost the performance of more than just the MR compute of the Hadoop
> >> stack, IIRC.
> >>
> >> For what it's worth, Spark is considered a part of Hadoop at large.
> >>
> >> On a separate note, in Bigtop we have started looking into changing the
> >> way we deliver Ignite, and we'll likely start offering the whole 'data
> >> fabric' experience instead of the mere "hadoop-acceleration".
> >>
> >> Cos
> >>
> >> On Mon, Nov 21, 2016 at 09:54AM, Denis Magda wrote:
> >>
> >> Val,
> >>
> >> The Ignite Hadoop module includes not only the map-reduce accelerator but
> >> also the Ignite Hadoop File System component. The latter can be used in
> >> deployments like HDFS + IGFS + Ignite Spark + Spark.
> >>
> >> Considering this, I’m for the second solution you proposed: put both the
> >> 2.10 and 2.11 ignite-spark modules under the ‘optional’ folder of the
> >> Ignite Hadoop Accelerator distribution.
> >> https://issues.apache.org/jira/browse/IGNITE-4254 <
> >>
> >> 

Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Dmitriy Setrakyan
On Thu, Dec 1, 2016 at 8:10 AM, Denis Magda  wrote:

> Yes, sure, the schema-import utility is still being used and maintained.
>

Why do we need it?


>
> I would create a separate page for Web Console for now and discontinue the
> schema-import utility only when Web Console is no longer in the beta state
> https://issues.apache.org/jira/browse/IGNITE-4348 <
> https://issues.apache.org/jira/browse/IGNITE-4348>
>

Agree that we need a page demonstrating RDBMS integration from the
web-console.

I also think that our Web Console is pretty stable right now. Definitely
stable enough to use it for the RDBMS integration. I would discontinue the
legacy import tool because we want to drive more users to the web-console,
which has a lot more functionality.


>
> —
> Denis
>
> > On Nov 30, 2016, at 8:14 PM, Dmitriy Setrakyan 
> wrote:
> >
> > Do we even have this tool still? I would update the documentation to use
> > the Web Console.
> >
> > On Thu, Dec 1, 2016 at 4:36 AM, Denis Magda  wrote:
> >
> >> Dmitriy,
> >>
> >> Agree. The documentation is already there.
> >> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence <
> >> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence>
> >>
> >> However, the documentation is presently based on the legacy schema-import
> >> tool. It’s reasonable to refine the documentation to rely on the similar
> >> Web Console capability. The question is: do we want to keep maintaining
> >> the schema-import documentation at least for some time, or should this
> >> tool be fully discontinued in favor of Web Console?
> >>
> >> Also, Dmitriy, please take a look at the following. The integrations’
> >> documentation style is absolutely different from the one we have for
> >> other documentation domains:
>  Dmitriy, looks like we have to upgrade the readme.io subscription to
>  “Developer Hub” for the new documentation domain, because now there is
>  no way to change the appearance. Could you handle this?
> >>
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 29, 2016, at 10:01 PM, Dmitriy Setrakyan  >
> >> wrote:
> >>>
> >>> Denis, do you think RDBMS integrations should be there as well?
> >>>
> >>> On Wed, Nov 30, 2016 at 4:22 AM, Denis Magda 
> wrote:
> >>>
>  Igniters,
> 
>  Ignite can already boast a number of integrations with other products and
>  technologies. Up to this point all the integrations were documented on our
>  main readme.io, making the overall documentation a bit messy.
> 
>  I’ve decoupled the integrations from the core modules by moving them to a
>  new documentation domain, “Apache Ignite Integrations” <
>  https://apacheignite-mix.readme.io/docs>. As a result, integrations such as
>  the Cassandra CacheStore, OSGi support, Zeppelin and a bunch of the
>  streamers reside there.
> 
>  Dmitriy, looks like we have to upgrade the readme.io subscription to
>  “Developer Hub” for the new documentation domain, because now there is
>  no way to change the appearance. The appearance theme settings are poor.
>  Could you handle this?
> 
>  —
>  Denis
> >>
> >>
>
>


Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Denis Magda
Yes, sure, the schema-import utility is still being used and maintained. 

I would create a separate page for Web Console for now and discontinue the 
schema-import utility only when Web Console is no longer in the beta state
https://issues.apache.org/jira/browse/IGNITE-4348 


—
Denis

> On Nov 30, 2016, at 8:14 PM, Dmitriy Setrakyan  wrote:
> 
> Do we even have this tool still? I would update the documentation to use
> the Web Console.
> 
> On Thu, Dec 1, 2016 at 4:36 AM, Denis Magda  wrote:
> 
>> Dmitriy,
>> 
>> Agree. The documentation is already there.
>> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence <
>> https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence>
>> 
>> However, the documentation is presently based on the legacy schema-import
>> tool. It’s reasonable to refine the documentation to rely on the similar Web
>> Console capability. The question is: do we want to keep maintaining the
>> schema-import documentation at least for some time, or should this tool be
>> fully discontinued in favor of Web Console?
>> 
>> Also, Dmitriy, please take a look at the following. The integrations’
>> documentation style is absolutely different from the one we have for other
>> documentation domains:
 Dmitriy, looks like we have to upgrade the readme.io subscription to
 “Developer Hub” for the new documentation domain, because now there is
 no way to change the appearance. Could you handle this?
>> 
>> 
>> —
>> Denis
>> 
>>> On Nov 29, 2016, at 10:01 PM, Dmitriy Setrakyan 
>> wrote:
>>> 
>>> Denis, do you think RDBMS integrations should be there as well?
>>> 
>>> On Wed, Nov 30, 2016 at 4:22 AM, Denis Magda  wrote:
>>> 
 Igniters,
 
 Ignite can already boast a number of integrations with other products and
 technologies. Up to this point all the integrations were documented on our
 main readme.io, making the overall documentation a bit messy.
 
 I’ve decoupled the integrations from the core modules by moving them to a
 new documentation domain, “Apache Ignite Integrations” <
 https://apacheignite-mix.readme.io/docs>. As a result, integrations such as
 the Cassandra CacheStore, OSGi support, Zeppelin and a bunch of the
 streamers reside there.
 
 Dmitriy, looks like we have to upgrade the readme.io subscription to
 “Developer Hub” for the new documentation domain, because now there is
 no way to change the appearance. The appearance theme settings are poor.
 Could you handle this?
 
 —
 Denis
>> 
>> 



[jira] [Created] (IGNITE-4348) Documentation: RDBMS Integration using Web Console

2016-11-30 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4348:
---

 Summary: Documentation: RDBMS Integration using Web Console
 Key: IGNITE-4348
 URL: https://issues.apache.org/jira/browse/IGNITE-4348
 Project: Ignite
  Issue Type: Task
Reporter: Denis Magda
Assignee: Prachi Garg
 Fix For: 1.9


Ignite has documentation that describes how to set up "automatic persistence"
relying on the schema-import tool.
https://apacheignite-mix.readme.io/docs/automatic-persistence

Let's create a similar one based on the Web Console capabilities briefly
listed on this page:
https://ignite.apache.org/features/rdbmsintegration.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ignite-spark module in Hadoop Accelerator

2016-11-30 Thread Denis Magda
Hadoop Accelerator doesn’t require any additional libraries compared to those
we have in the fabric build. It only lacks some of them, as Val mentioned below.

Wouldn’t it be better to discontinue the Hadoop Accelerator edition and simply
deliver the hadoop jar and its configs as part of the fabric?

—
Denis

> On Nov 27, 2016, at 3:12 PM, Dmitriy Setrakyan  wrote:
> 
> Separate edition for the Hadoop Accelerator was primarily driven by the
> default libraries. Hadoop Accelerator requires many more libraries as well
> as configuration settings compared to the standard fabric download.
> 
> Now, as far as spark integration is concerned, I am not sure which edition
> it belongs in, Hadoop Accelerator or standard fabric.
> 
> D.
> 
> On Sat, Nov 26, 2016 at 7:39 PM, Denis Magda  wrote:
> 
>> *Dmitriy*,
>> 
>> I do believe that you should know why the community decided to create a
>> separate edition for the Hadoop Accelerator. What was the reason for that?
>> Presently, as I see it, it brings more confusion and difficulties rather
>> than benefit.
>> 
>> —
>> Denis
>> 
>> On Nov 26, 2016, at 2:14 PM, Konstantin Boudnik  wrote:
>> 
>> In fact I very much agree with you. Right now, running the "accelerator"
>> component in the Bigtop distro gives one a pretty much complete fabric anyway.
>> But in order to make just an accelerator component we perform quite a bit of
>> voodoo magic during the packaging stage of the Bigtop build, shuffling jars
>> from here and there. And that's quite crazy, honestly ;)
>> 
>> Cos
>> 
>> On Mon, Nov 21, 2016 at 03:33PM, Valentin Kulichenko wrote:
>> 
>> I tend to agree with Denis. I see only these differences between Hadoop
>> Accelerator and Fabric builds (correct me if I miss something):
>> 
>>  - Limited set of available modules and no optional modules in Hadoop
>>  Accelerator.
>>  - No ignite-hadoop module in Fabric.
>>  - Additional scripts, configs and instructions included in Hadoop
>>  Accelerator.
>> 
>> And the list of included modules frankly looks very weird. Here are only
>> some of the issues I noticed:
>> 
>>  - ignite-indexing and ignite-spark are mandatory. Even if we need them
>>  for Hadoop Acceleration (which I doubt), are they really required, or can
>>  they be optional?
>>  - We force the use of the ignite-log4j module without providing other
>>  logger options (e.g., SLF4J).
>>  - We don't include the ignite-aws module. How can one use Hadoop
>>  Accelerator with S3 discovery?
>>  - Etc.
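For context on the S3 discovery point above: Ignite configures it through TcpDiscoveryS3IpFinder, which ships in the ignite-aws module. A minimal Spring XML sketch, where the bucket name and the credentials bean reference are placeholders:

```xml
<!-- Sketch: S3-based node discovery via the ignite-aws module.
     "your-bucket-name" and the "aws.creds" bean are placeholders. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
          <property name="awsCredentials" ref="aws.creds"/>
          <property name="bucketName" value="your-bucket-name"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

Without the ignite-aws jars on the classpath this configuration cannot be instantiated, which is the gap Val is pointing out.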
>> 
>> It seems to me that if we try to fix all these issues, there will be
>> virtually no difference between the Fabric and Hadoop Accelerator builds
>> except a couple of scripts and config files. If so, there is no reason to
>> have two builds.
>> 
>> -Val
>> 
>> On Mon, Nov 21, 2016 at 3:13 PM, Denis Magda  wrote:
>> 
>> On a separate note, in Bigtop we have started looking into changing the way
>> we deliver Ignite, and we'll likely start offering the whole 'data fabric'
>> experience instead of the mere "hadoop-acceleration".
>> 
>> 
>> And you still will be using hadoop-accelerator libs of Ignite, right?
>> 
>> I’m wondering whether there is a need to keep releasing Hadoop Accelerator
>> as a separate delivery.
>> What if we start releasing the accelerator as part of the standard fabric
>> binary, putting the hadoop-accelerator libs under the ‘optional’ folder?
>> 
>> —
>> Denis
>> 
>> On Nov 21, 2016, at 12:19 PM, Konstantin Boudnik  wrote:
>> 
>> What Denis said: spark has been added to the Hadoop accelerator as a way to
>> boost the performance of more than just the MR compute of the Hadoop stack,
>> IIRC.
>> 
>> For what it's worth, Spark is considered a part of Hadoop at large.
>> 
>> On a separate note, in Bigtop we have started looking into changing the way
>> we deliver Ignite, and we'll likely start offering the whole 'data fabric'
>> experience instead of the mere "hadoop-acceleration".
>> 
>> Cos
>> 
>> On Mon, Nov 21, 2016 at 09:54AM, Denis Magda wrote:
>> 
>> Val,
>> 
>> The Ignite Hadoop module includes not only the map-reduce accelerator but
>> also the Ignite Hadoop File System component. The latter can be used in
>> deployments like HDFS + IGFS + Ignite Spark + Spark.
>> 
>> Considering this, I’m for the second solution you proposed: put both the
>> 2.10 and 2.11 ignite-spark modules under the ‘optional’ folder of the
>> Ignite Hadoop Accelerator distribution.
>> https://issues.apache.org/jira/browse/IGNITE-4254 <
>> 
>> https://issues.apache.org/jira/browse/IGNITE-4254>
>> 
>> 
>> BTW, this task may be affected or related to the following ones:
>> https://issues.apache.org/jira/browse/IGNITE-3596 <
>> 
>> https://issues.apache.org/jira/browse/IGNITE-3596>
>> 
>> https://issues.apache.org/jira/browse/IGNITE-3822
>> 
>> —
>> Denis
>> 
>> On Nov 19, 2016, at 1:26 PM, Valentin Kulichenko <
>> 
>> valentin.kuliche...@gmail.com> wrote:
>> 
>> 
>> Hadoop 

Re: Dedicated readme.io documentation for Ignite integrations

2016-11-30 Thread Denis Magda
Dmitriy,

Agree. The documentation is already there.
https://apacheignite-mix.readme.io/v1.7/docs/automatic-persistence 


However, the documentation is presently based on the legacy schema-import tool.
It’s reasonable to refine the documentation to rely on the similar Web Console
capability. The question is: do we want to keep maintaining the schema-import
documentation at least for some time, or should this tool be fully discontinued
in favor of Web Console?

Also, Dmitriy, please take a look at the following. The integrations’
documentation style is absolutely different from the one we have for other
documentation domains:
>> Dmitriy, looks like we have to upgrade the readme.io subscription to
>> “Developer Hub” for the new documentation domain, because now there is
>> no way to change the appearance. Could you handle this?


—
Denis

> On Nov 29, 2016, at 10:01 PM, Dmitriy Setrakyan  wrote:
> 
> Denis, do you think RDBMS integrations should be there as well?
> 
> On Wed, Nov 30, 2016 at 4:22 AM, Denis Magda  wrote:
> 
>> Igniters,
>> 
>> Ignite can already boast a number of integrations with
>> other products and technologies. Up to this point all the integrations were
>> documented on our main readme.io, making the overall
>> documentation a bit messy.
>> 
>> I’ve decoupled the integrations from the core modules by moving them to a
>> new documentation domain “Apache Ignite Integrations” <
>> https://apacheignite-mix.readme.io/docs>. As a result, integrations such as
>> Cassandra CacheStore, OSGi support, Zeppelin and a bunch of the streamers
>> reside there.
>> 
>> Dmitriy, looks like we have to upgrade the readme.io subscription to
>> "Developer Hub" for the new documentation domain because now there is no
>> way to change the appearance. The appearance theme settings are poor. Could
>> you handle this?
>> 
>> —
>> Denis



Re: Talking to Ignite From PHP

2016-11-30 Thread Denis Magda
Igor,

Finally, I managed to set up the PHP PDO environment and execute the queries 
provided in the guide [1]. 

However, I've run into a couple of issues that should be addressed as a 
part of 1.8:
https://issues.apache.org/jira/browse/IGNITE-4347 

https://issues.apache.org/jira/browse/IGNITE-4346 


Regardless of that, I’ve improved the documentation and adjusted the code 
snippets in such a way that everything works fine even with those two bugs. 

I think we’re done with this documentation. Thanks you a lot for its 
preparation!

[1] https://apacheignite-mix.readme.io/v1.7/docs/php-pdo 


—
Denis

> On Nov 28, 2016, at 10:29 AM, Igor Sapego  wrote:
> 
> Denis,
> 
> yes.
> 
> Best Regards,
> Igor
> 
> On Mon, Nov 28, 2016 at 8:26 PM, Denis Magda  > wrote:
> 
>> Igor,
>> 
>> Has everything been merged into 1.8? Can I start checking PHP/PDO guide?
>> 
>> —
>> Denis
>> 
>>> On Nov 11, 2016, at 3:52 AM, Igor Sapego  wrote:
>>> 
>>> Denis,
>>> 
>>> I'm ready to merge it as soon as the 1.8 branch is ready. Once it ready
>> I'm
>>> and DML is merged to it, I'm going to merge 1.8 to my branch. After that
>> I
>>> believe there is going to be some kind of testing from my side and review
>>> from the community side and then I'll merge it to 1.8 once my
>> contribution
>>> is accepted. I believe everything from my side is not going to take more
>> than
>>> few hours, so it all depends on when 1.8 branch is ready and how long
>> review
>>> is going to take.
>>> 
>>> 
>>> Best Regards,
>>> Igor
>>> 
>>> On Thu, Nov 10, 2016 at 11:06 PM, Denis Magda > >> wrote:
>>> Igor,
>>> 
>>> I didn’t manage to merge the branches. There were conflicts that I
>> couldn’t resolve properly on my side.
>>> 
>>> Please let me know once everything gets merged into 1.8 branch so that I
>> can go through the guide from the beginning till the end. Do you think it
>> can be done by next Monday?
>>> 
>>> In the meanwhile, I do like the documentation in its present state and
>> left a couple of minor comments in the related ticket
>>> https://issues.apache.org/jira/browse/IGNITE-3921 
>>>  <
>> https://issues.apache.org/jira/browse/IGNITE-3921 
>> >
>>> 
>>> —
>>> Denis
>>> 
 On Nov 3, 2016, at 5:46 AM, Igor Sapego  > isap...@gridgain.com >> wrote:
 
 Denis,
 
 It seems like someone have deleted branch ignite-3910 or maybe I just
>> forgot to push it
 to remote repository. However, I've pushed it now. Check it please.
 
 Thanks for your contribution to the documentation and pointing out what
>> can be further
 improved. I'm already working on it.
 
 Best Regards,
 Igor
 
 On Thu, Nov 3, 2016 at 1:51 AM, Denis Magda 
>> >> wrote:
 Igor,
 
 I can’t find branch ignite-3910. Most likely you keep the changes on
>> the other one. Please double check.
 
 In any case, I succeeded with some of the steps from the documentation
>> and updated it making clearer.
 http://apacheignite.gridgain.org/v1.7/docs/pdo-interoperability 
  <
>> http://apacheignite.gridgain.org/v1.7/docs/pdo-interoperability 
>> >
 
 Please apply my latest notes related to the documentation.
 https://issues.apache.org/jira/browse/IGNITE-3921 
  <
>> https://issues.apache.org/jira/browse/IGNITE-3921 
>> >
 
 Let me know as soon as the documentation is refined. I’ll keep at the
>> installation and testing of PHP + Ignite scenario.
 
 —
 Denis
 
> On Nov 2, 2016, at 4:38 AM, Igor Sapego   > isap...@gridgain.com >> wrote:
> 
> Denis,
> 
> I believe that you should switch to DML branch (ignite-2294),
> and then merge ignite-3910 into it.
> 
> That should be enough. Please let me know if there are any issues.
> 
> Best Regards,
> Igor
> 
> On Wed, Nov 2, 2016 at 3:12 AM, Denis Magda  
>> >> wrote:
> Igor,
> 
> I’m planning to prepare a blog post about our PHP + PDO 

Re: Apache Ignite 1.8 Release

2016-11-30 Thread Denis Magda
I’ve faced with two more issues while was working with Ignite from PHP PDO side.

The first one is related to DML.
DML and PHP PDO: double field is converted to bigdecimal 


Alexander P. and Igor S. please pull together and fix this issue before 1.8 is 
sent for vote.

The second issue is related to ODBC and affects usability only. Igor S., 
please try to address it as a part of 1.8 as well.
ODBC: NPE when cache name is different from the one configured in DSN 


—
Denis

> On Nov 30, 2016, at 12:01 PM, Sergey Kozlov  wrote:
> 
> Hi
> 
> Update for DML testing:
> 
> IGNITE-4342 DML: clear() on atomic cache causes the exception from previous
> failed DML statement 
> 
> 
> 
> On Wed, Nov 30, 2016 at 10:18 PM, Denis Magda  wrote:
> 
>> Vladimir,
>> 
>> Please add to the release notes information regarding this feature
>> https://issues.apache.org/jira/browse/IGNITE-2310 <
>> https://issues.apache.org/jira/browse/IGNITE-2310>
>> 
>> Essentially, it allows locking a partition while a remote computation is
>> being executed on a node. I’ve already documented the feature here.
>> http://apacheignite.gridgain.org/v1.7/docs/collocate-
>> compute-and-data#affinity-call-and-run-methods <
>> http://apacheignite.gridgain.org/v1.7/docs/collocate-
>> compute-and-data#affinity-call-and-run-methods>
>> 
>> —
>> Denis
>> 
>>> On Nov 25, 2016, at 4:08 AM, Vladimir Ozerov 
>> wrote:
>>> 
>>> Folks,
>>> 
>>> I need to create RELEASE NOTES. Please advise which notable tickets you
>>> completed as a part of 1.8 release.
>>> 
>>> Vladimir.
>>> 
>>> On Fri, Nov 25, 2016 at 2:58 PM, Sergey Kozlov 
>> wrote:
>>> 
 Hi
 
 Could someone explain why the Cassandra module is split into three parts in
 the optional directory of the binary fabric build? At the moment I see the
>> following
 unclear points:
 1. The ignite-cassandra directory contains README.txt only.
 2. Does ignite-cassandra-serializers depend on ignite-cassandra-store?
>> In
 that case, why not make them one module?
 
 
 
 On Fri, Nov 25, 2016 at 2:37 PM, Alexander Paschenko <
 alexander.a.pasche...@gmail.com> wrote:
 
> IGNITE-4303 is most likely fixed by IGNITE-4280 fix (already merged in
> 1.8 branch).
> 
> Meanwhile everything SQL/DML related seems to be in pull
> requests/reviewed/fixed/merged (no issues unapproached/not fixed).
> 
> - Alex
> 
> 
> 
> 2016-11-24 22:59 GMT+03:00 Sergey Kozlov :
>> Hi
>> 
>> I found two issues for 1.8:
>> IGNITE-4304 ClusterTopologyCheckedException: Failed to send message
> because
>> node left grid 
>> IGNITE-4303 CacheClientBinaryQueryExample returns wrong result
>> 
>> 
>> Could someone experienced take a look?
>> 
>> On Thu, Nov 24, 2016 at 12:12 PM, Vladimir Ozerov <
 voze...@gridgain.com>
>> wrote:
>> 
>>> Folks,
>>> 
>>> DML is merged to ignite-1.8, but according to JIRA reports several
> problems
>>> with it were revealed. I propose to focus on DML finalization in
> ignite-1.8
>>> branch, and minimize other merges to it, targeting them for the next
>>> release (1.9, 2.0?).
>>> 
>>> Any objections?
>>> 
>>> Vladimir.
>>> 
>>> On Wed, Nov 23, 2016 at 7:25 PM, Igor Sapego 
> wrote:
>>> 
 Denis,
 
 I've raised PRs and Vladimir has merged them into ignite-1.8. But
 now
> we
 have some
 failing ODBC tests in the branch. I'm currently working on them.
 There
>>> is a
 ticket for
 that which you can track [1]. I'll add all my findings there.
 
 [1] https://issues.apache.org/jira/browse/IGNITE-4288
 
 Best Regards,
 Igor
 
 On Wed, Nov 23, 2016 at 7:08 PM, Denis Magda 
> wrote:
 
> Alexander,
> 
> Awesome news, thanks for making this happen!
> 
> Igor S., have you merged all ODBC-DML-PHP/PDO related changes?
 Can I
 start
> testing that PHP-PDO guidance [1] is correct?
> 
> [1] https://apacheignite.readme.io/docs/pdo-interoperability <
> https://apacheignite.readme.io/docs/pdo-interoperability>
> 
> —
> Denis
> 
>> On Nov 23, 2016, at 12:01 AM, Alexander Paschenko <
> alexander.a.pasche...@gmail.com> wrote:
>> 
>> Folks,
>> 
>> Yesterday it'd been agreed with Sergi that DML branch is now
 good
> to
>> be included into 1.8 

[jira] [Created] (IGNITE-4347) ODBC: NPE when cache name is different from the one configured in DSN

2016-11-30 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4347:
---

 Summary: ODBC: NPE when cache name is different from the one 
configured in DSN
 Key: IGNITE-4347
 URL: https://issues.apache.org/jira/browse/IGNITE-4347
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Reporter: Denis Magda
Assignee: Igor Sapego
Priority: Critical
 Attachments: exception.png

The following query was executed from PHP PDO side

{code}
$dbs = $dbh->prepare('INSERT INTO Person (_key, firstName, lastName, resume, 
salary) 
VALUES (?, ?, ?, ?, ?)');
{code}

The cache name in the Spring XML configuration was "Person" while the DSN was 
configured to use "PersonCache" as the default cache name.

As a result, I was getting the NPE shown in the attached screenshot. Only after 
tracking down the root cause of the NPE in the source code could I finally fix 
the issue.

Let's provide a more meaningful explanation instead of throwing an NPE, saying 
something like "The cache named {cache_name} has not been found. Make sure that 
the ODBC connection string or DSN is configured properly."





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4346) DML and PHP PDO: double field is converted to bigdecimal

2016-11-30 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-4346:
---

 Summary: DML and PHP PDO: double field is converted to bigdecimal
 Key: IGNITE-4346
 URL: https://issues.apache.org/jira/browse/IGNITE-4346
 Project: Ignite
  Issue Type: Bug
Reporter: Denis Magda
Assignee: Alexander Paschenko
Priority: Blocker


I've set up PHP PDO and ODBC environment according to the following 
documentation
https://apacheignite-mix.readme.io/docs/php-pdo

In particular:
- The ODBC driver was built from the latest 1.8 sources located in ignite-1.8 
branch. 
- The DSN configuration is shown in the attached screenshot named 
"dsn_configuration"
- Ignite's cluster configuration is attached as well.
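For context, the mismatch named in the summary is between the JDK types below: the SQL layer hands the literal 42000.0 over as a DECIMAL (BigDecimal), while Person.salary is declared as double, so the update path must apply an explicit narrowing before writing the field (illustrative only, not the actual fix):

```java
import java.math.BigDecimal;

public class DoubleVsDecimal {
    public static void main(String[] args) {
        // The literal 42000.0 from "SET salary = 42000.0" arrives as a BigDecimal.
        BigDecimal literal = new BigDecimal("42000.0");

        // Writing it into a field declared as double requires this narrowing;
        // skipping it is what produces the reported type mismatch.
        double salary = literal.doubleValue();

        System.out.println(salary); // 42000.0
    }
}
```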

To reproduce the issue do the following:
- start a node using attached default-config.xml
- execute insert.php. There won't be any error.
- execute update.php and you'll get the error with the stack trace below

{code}
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to execute DML statement [qry=UPDATE Person SET salary = 42000.0 WHERE salary > 5.0, params=[]]
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1800)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:810)
... 13 more
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [qry=UPDATE Person SET salary = 42000.0 WHERE salary > 5.0, params=[]]
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1270)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:812)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:810)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1777)
... 14 more
Caused by: class org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: Failed to locally write to cache (all transaction entries will be invalidated, however there was a window when entries for this transaction were visible to others): GridNearTxLocal [mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping [entries=[IgniteTxEntry [key=KeyCacheObjectImpl [val=777, hasValBytes=false], cacheId=1215863053, partId=-1, txKey=IgniteTxKey [key=KeyCacheObjectImpl [val=777, hasValBytes=false], cacheId=1215863053], val=[op=TRANSFORM, val=null], prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple [val1=org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor$ModifyingEntryProcessor@1f2f10da, val2=[Ljava.lang.Object;@2bfa5294]], ttl=-1, conflictExpireTime=-1, conflictVer=null, explicitVer=null, dhtVer=GridCacheVersion [topVer=92027384, time=1480547401695, order=1480547381393, nodeOrder=1], filters=[], filtersPassed=false, filtersSet=true, entry=GridDhtColocatedCacheEntry [super=GridDhtCacheEntry [rdrs=[], locPart=GridDhtLocalPartition [id=777, map=org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl@24c94530, rmvQueue=GridCircularBuffer [sizeMask=255, idxGen=0], cntr=1, shouldBeRenting=false, state=OWNING, reservations=0, empty=false, createTime=11/30/2016 15:09:45], super=GridDistributedCacheEntry [super=GridCacheMapEntry [key=KeyCacheObjectImpl [val=777, hasValBytes=false], val=Person [idHash=659295625, hash=-322179972, resume=Secret Service agent, firstName=James, lastName=Bond, salary=65000.0], startVer=1480547381389, ver=GridCacheVersion [topVer=92027384, time=1480547394518, order=1480547381390, nodeOrder=1], hash=777, extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc [locs=[GridCacheMvccCandidate [nodeId=08b20b4b-3ca6-4c05-aa9d-4219cbc3f3f5, ver=GridCacheVersion [topVer=92027384, time=1480547401693, order=1480547381392, nodeOrder=1], timeout=0, ts=1480547401693, threadId=57, id=3, topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], reentry=null, otherNodeId=08b20b4b-3ca6-4c05-aa9d-4219cbc3f3f5, otherVer=GridCacheVersion [topVer=92027384, time=1480547401693, order=1480547381392, nodeOrder=1], mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, serOrder=null, key=KeyCacheObjectImpl [val=777, hasValBytes=false], masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=1|dht_local=1|near_local=0|removed=0, prevVer=null, nextVer=null]], rmts=null]], flags=0, prepared=1, locked=false, nodeId=08b20b4b-3ca6-4c05-aa9d-4219cbc3f3f5, locMapped=false, expiryPlc=null, transferExpiryPlc=false, flags=6, partUpdateCntr=0, serReadVer=null, xidVer=GridCacheVersion [topVer=92027384, time=1480547401693,

Re: Apache Ignite 1.8 Release

2016-11-30 Thread Sergey Kozlov
Hi

Update for DML testing:

IGNITE-4342 DML: clear() on atomic cache causes the exception from previous
failed DML statement 



On Wed, Nov 30, 2016 at 10:18 PM, Denis Magda  wrote:

> Vladimir,
>
> Please add to the release notes information regarding this feature
> https://issues.apache.org/jira/browse/IGNITE-2310 <
> https://issues.apache.org/jira/browse/IGNITE-2310>
>
> Essentially, it allows locking a partition while a remote computation is
> being executed on a node. I’ve already documented the feature here.
> http://apacheignite.gridgain.org/v1.7/docs/collocate-
> compute-and-data#affinity-call-and-run-methods <
> http://apacheignite.gridgain.org/v1.7/docs/collocate-
> compute-and-data#affinity-call-and-run-methods>
>
> —
> Denis
>
> > On Nov 25, 2016, at 4:08 AM, Vladimir Ozerov 
> wrote:
> >
> > Folks,
> >
> > I need to create RELEASE NOTES. Please advise which notable tickets you
> > completed as a part of 1.8 release.
> >
> > Vladimir.
> >
> > On Fri, Nov 25, 2016 at 2:58 PM, Sergey Kozlov 
> wrote:
> >
> >> Hi
> >>
> >> Could someone explain why the Cassandra module is split into three parts in
> >> the optional directory of the binary fabric build? At the moment I see the
> following
> >> unclear points:
> >> 1. The ignite-cassandra directory contains README.txt only.
> >> 2. Does ignite-cassandra-serializers depend on ignite-cassandra-store?
> In
> >> that case, why not make them one module?
> >>
> >>
> >>
> >> On Fri, Nov 25, 2016 at 2:37 PM, Alexander Paschenko <
> >> alexander.a.pasche...@gmail.com> wrote:
> >>
> >>> IGNITE-4303 is most likely fixed by IGNITE-4280 fix (already merged in
> >>> 1.8 branch).
> >>>
> >>> Meanwhile everything SQL/DML related seems to be in pull
> >>> requests/reviewed/fixed/merged (no issues unapproached/not fixed).
> >>>
> >>> - Alex
> >>>
> >>>
> >>>
> >>> 2016-11-24 22:59 GMT+03:00 Sergey Kozlov :
>  Hi
> 
>  I found two issues for 1.8:
>  IGNITE-4304 ClusterTopologyCheckedException: Failed to send message
> >>> because
>  node left grid 
>  IGNITE-4303 CacheClientBinaryQueryExample returns wrong result
>  
> 
>  Could someone experienced take a look?
> 
>  On Thu, Nov 24, 2016 at 12:12 PM, Vladimir Ozerov <
> >> voze...@gridgain.com>
>  wrote:
> 
> > Folks,
> >
> > DML is merged to ignite-1.8, but according to JIRA reports several
> >>> problems
> > with it were revealed. I propose to focus on DML finalization in
> >>> ignite-1.8
> > branch, and minimize other merges to it, targeting them for the next
> > release (1.9, 2.0?).
> >
> > Any objections?
> >
> > Vladimir.
> >
> > On Wed, Nov 23, 2016 at 7:25 PM, Igor Sapego 
> >>> wrote:
> >
> >> Denis,
> >>
> >> I've raised PRs and Vladimir has merged them into ignite-1.8. But
> >> now
> >>> we
> >> have some
> >> failing ODBC tests in the branch. I'm currently working on them.
> >> There
> > is a
> >> ticket for
> >> that which you can track [1]. I'll add all my findings there.
> >>
> >> [1] https://issues.apache.org/jira/browse/IGNITE-4288
> >>
> >> Best Regards,
> >> Igor
> >>
> >> On Wed, Nov 23, 2016 at 7:08 PM, Denis Magda 
> >>> wrote:
> >>
> >>> Alexander,
> >>>
> >>> Awesome news, thanks for making this happen!
> >>>
> >>> Igor S., have you merged all ODBC-DML-PHP/PDO related changes?
> >> Can I
> >> start
> >>> testing that PHP-PDO guidance [1] is correct?
> >>>
> >>> [1] https://apacheignite.readme.io/docs/pdo-interoperability <
> >>> https://apacheignite.readme.io/docs/pdo-interoperability>
> >>>
> >>> —
> >>> Denis
> >>>
>  On Nov 23, 2016, at 12:01 AM, Alexander Paschenko <
> >>> alexander.a.pasche...@gmail.com> wrote:
> 
>  Folks,
> 
>  Yesterday it'd been agreed with Sergi that DML branch is now
> >> good
> >>> to
>  be included into 1.8 branch that is to be created.
> 
>  Minor review fixes, should they be made, and test fixes will be
>  incorporated into 1.8 as separate patches later.
> 
>  Also, it'd been agreed that, in order to deliver these new
> >>> features
> > on
>  time, two subtasks will be fixed in later releases (shortly):
> 
>  https://issues.apache.org/jira/browse/IGNITE-4268
>  https://issues.apache.org/jira/browse/IGNITE-4269
> 
>  On failing tests:
>  https://issues.apache.org/jira/browse/IGNITE-2294?
> >>> focusedCommentId=15683377
> 
>  - Alex
> 
>  2016-11-22 11:29 GMT+03:00 Alexander Paschenko
>  

Re: Cassandra basic setup

2016-11-30 Thread Igor Rudyak
Regarding *cassandra-bootstrap.sh* and *ignite-bootstrap.sh*, the current
documentation is up to date. These scripts are not automatically updated
during the release packaging process; according to the documentation (item
5 of the *Configuration Details* section,
https://apacheignite-mix.readme.io/docs/aws-infrastructure-deployment#section-configuration-details)
they should be updated manually.

Igor

On Wed, Nov 30, 2016 at 9:34 AM, Denis Magda  wrote:

> Cross-posting to the dev list.
>
> Igor R., is this something that has to be fixed for 1.8 as well? The
> community is planning to send the new release for vote by the end of
> Friday, could you look into this issue by that time?
>
> —
> Denis
>
> On Nov 30, 2016, at 3:00 AM, Riccardo Iacomini <
> riccardo.iacom...@rdslab.com> wrote:
>
> Hi Igor,
> I would like to highlight to you another discrepancy I noticed while
> trying to build the cassandra/ignite/ganglia cluster test in AWS using the
> provided framework. I've build Ignite 1.7 from source, but the version
> hard-coded for the modules in the *cassandra-bootstrap.sh *and*
> ignite-bootstrap.sh* script is the *1.6.*
>
> Best Regards
>
> Riccardo Iacomini
>
>
> *RDSLab*
>
> On Tue, Nov 29, 2016 at 11:18 PM, Igor Rudyak  wrote:
>
>> Ok, thanks for the info.
>>
>> Igor
>>
>> On Tue, Nov 29, 2016 at 12:56 PM, Denis Magda  wrote:
>>
>>> Igor,
>>>
>>> The documentation has been moved to the new integrations related site
>>> https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
>>>
>>> You’re free to update it there.
>>>
>>> —
>>> Denis
>>>
>>> On Nov 29, 2016, at 11:49 AM, Denis Magda  wrote:
>>>
>>> Igor,
>>>
>>> Please hold on for a while. I’ve just started moving all the
>>> integrations related documentation to a new domain. I’ll let you know when
>>> it’s safe to update the doc.
>>>
>>> —
>>> Denis
>>>
>>> On Nov 29, 2016, at 10:46 AM, Igor Rudyak  wrote:
>>>
>>> Hi Riccardo,
>>>
>>> Thanks for noticing this. There were a number of refactorings done and it
>>> looks like we need to update the documentation.
>>>
>>> I'll update the documentation for the module.
>>>
>>> Igor
>>>
>>> On Tue, Nov 29, 2016 at 9:26 AM, Denis Magda  wrote:
>>>
 Igor,

 Would you mind looking through the documentation and updating it
 whenever is needed?

 —
 Denis

 On Nov 29, 2016, at 1:58 AM, Riccardo Iacomini <
 riccardo.iacom...@rdslab.com> wrote:

 Hi Igor,
 I finally discovered what was causing the issue. The example provided in the
 Ignite documentation page has a subtle difference in the package structure of
 several classes, for example:

 org.apache.ignite.cache.store.cassandra.datasource.DataSource (the
 class in the ignite-cassandra.jar) vs
 org.apache.ignite.cache.store.cassandra.*utils*.datasource.DataSource (in
 the example). The same applies also to other classes in the example. I do not
 know if this is due to a refactoring performed on the code which did not
 propagate through the examples, or simply I used a different version of the
 documentation than the one I was running.

 Anyway, thank you for your time.

 Best regards

 Riccardo Iacomini


 *RDSLab*

 On Mon, Nov 28, 2016 at 5:34 PM, Igor Rudyak  wrote:

> If you are using ignite.sh it should be fine.
>
> Igor
>
> On Nov 28, 2016 8:29 AM, "Riccardo Iacomini" <
> riccardo.iacom...@rdslab.com> wrote:
>
>> I will try. Yes I am using the ignite.sh command. Any drawbacks?
>>
>> Il 28 nov 2016 5:26 PM, "Igor Rudyak"  ha scritto:
>>
>> Try to check your classpath. Find the Ignite process using something like
>> ps -es | grep Ignite and check the java command used to launch Ignite.
>>
>> By the way, how you launching Ignite? Do you use ignite.sh script for
>> this?
>>
>> Igor
>>
>> On Nov 28, 2016 8:05 AM, "Riccardo Iacomini" <
>> riccardo.iacom...@rdslab.com> wrote:
>>
>>> Hi Igor,
>>> I tried your suggestion, but it does not find the class. I've also read
>>> from the README files that optional modules must be copied into the libs
>>> folder: same outcome. I've tried also the docker image, adding my
>>> configuration xml files to it, and running the image as specified
>>> here 
>>> in the documentation. Do you have any other suggestion?
>>>
>>> Thanks for your patience.
>>>
>>> Best regards
>>>
>>> Riccardo Iacomini
>>>
>>>
>>> *RDSLab*
>>>
>>> On Sun, Nov 27, 2016 at 8:46 PM, Igor Rudyak 
>>> wrote:
>>>
 Try to include required jars into IGNITE_LIBS environment 

Locking of partition with affinityRun/affinityCall

2016-11-30 Thread Denis Magda
Taras,

There is a question in regard to the feature you contributed recently
https://issues.apache.org/jira/browse/IGNITE-2310 


The Java API doc says that the partition will not be migrated while a job is 
being executed on a target node.

Does it mean that
- the rebalancing will be postponed in general or 
- the rebalancing of the partition will be started, moving its content to a new 
primary, but the partition will not be evicted from the target node while the 
job is running

In my understanding the second point is correct and I’ve documented the feature 
saying that the partition is not evicted
http://apacheignite.gridgain.org/v1.7/docs/collocate-compute-and-data#affinity-call-and-run-methods
 


Please clarify what’s true and update Java API doc if my current understanding 
is correct.
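Under the second interpretation, the running job effectively holds a reservation on the partition: rebalancing may copy its data to the new primary, but eviction from the target node is deferred until the job completes. A toy model of that reservation logic (purely illustrative; this is not Ignite's actual implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Toy model: a partition that affinity jobs reserve while they run. */
public class PartitionReservation {
    private final AtomicInteger reservations = new AtomicInteger();
    private volatile boolean evicted;

    /** Called when an affinityRun/affinityCall job starts on the owning node. */
    boolean reserve() {
        if (evicted)
            return false; // partition already moved away; the job must be remapped
        reservations.incrementAndGet();
        return true;
    }

    /** Called when the job finishes. */
    void release() {
        reservations.decrementAndGet();
    }

    /** Rebalancing may copy data at any time, but eviction waits for jobs. */
    boolean tryEvict() {
        if (reservations.get() > 0)
            return false; // a job is still running; keep the local copy
        evicted = true;
        return true;
    }

    public static void main(String[] args) {
        PartitionReservation part = new PartitionReservation();
        part.reserve();                                  // job starts
        System.out.println(part.tryEvict());             // false: job still running
        part.release();                                  // job finishes
        System.out.println(part.tryEvict());             // true: now evictable
    }
}
```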

—
Denis

Re: Apache Ignite 1.8 Release

2016-11-30 Thread Denis Magda
Vladimir,

Please add to the release notes information regarding this feature
https://issues.apache.org/jira/browse/IGNITE-2310 


Essentially, it allows locking a partition while a remote computation is being 
executed on a node. I’ve already documented the feature here.
http://apacheignite.gridgain.org/v1.7/docs/collocate-compute-and-data#affinity-call-and-run-methods
 


—
Denis

> On Nov 25, 2016, at 4:08 AM, Vladimir Ozerov  wrote:
> 
> Folks,
> 
> I need to create RELEASE NOTES. Please advise which notable tickets you
> completed as a part of 1.8 release.
> 
> Vladimir.
> 
> On Fri, Nov 25, 2016 at 2:58 PM, Sergey Kozlov  wrote:
> 
>> Hi
>> 
>> Could someone explain why the Cassandra module is split into three parts in
>> the optional directory of the binary fabric build? At the moment I see the
>> following unclear points:
>> 1. The ignite-cassandra directory contains README.txt only.
>> 2. Does ignite-cassandra-serializers depend on ignite-cassandra-store? In
>> that case, why not make them one module?
>> 
>> 
>> 
>> On Fri, Nov 25, 2016 at 2:37 PM, Alexander Paschenko <
>> alexander.a.pasche...@gmail.com> wrote:
>> 
>>> IGNITE-4303 is most likely fixed by IGNITE-4280 fix (already merged in
>>> 1.8 branch).
>>> 
>>> Meanwhile everything SQL/DML related seems to be in pull
>>> requests/reviewed/fixed/merged (no issues unapproached/not fixed).
>>> 
>>> - Alex
>>> 
>>> 
>>> 
>>> 2016-11-24 22:59 GMT+03:00 Sergey Kozlov :
 Hi
 
 I found two issues for 1.8:
 IGNITE-4304 ClusterTopologyCheckedException: Failed to send message
>>> because
 node left grid 
 IGNITE-4303 CacheClientBinaryQueryExample returns wrong result
 
 
 Could someone experienced take a look?
 
 On Thu, Nov 24, 2016 at 12:12 PM, Vladimir Ozerov <
>> voze...@gridgain.com>
 wrote:
 
> Folks,
> 
> DML is merged to ignite-1.8, but according to JIRA reports several
>>> problems
> with it were revealed. I propose to focus on DML finalization in
>>> ignite-1.8
> branch, and minimize other merges to it, targeting them for the next
> release (1.9, 2.0?).
> 
> Any objections?
> 
> Vladimir.
> 
> On Wed, Nov 23, 2016 at 7:25 PM, Igor Sapego 
>>> wrote:
> 
>> Denis,
>> 
>> I've raised PRs and Vladimir has merged them into ignite-1.8. But
>> now
>>> we
>> have some
>> failing ODBC tests in the branch. I'm currently working on them.
>> There
> is a
>> ticket for
>> that which you can track [1]. I'll add all my findings there.
>> 
>> [1] https://issues.apache.org/jira/browse/IGNITE-4288
>> 
>> Best Regards,
>> Igor
>> 
>> On Wed, Nov 23, 2016 at 7:08 PM, Denis Magda 
>>> wrote:
>> 
>>> Alexander,
>>> 
>>> Awesome news, thanks for making this happen!
>>> 
>>> Igor S., have you merged all ODBC-DML-PHP/PDO related changes?
>> Can I
>> start
>>> testing that PHP-PDO guidance [1] is correct?
>>> 
>>> [1] https://apacheignite.readme.io/docs/pdo-interoperability <
>>> https://apacheignite.readme.io/docs/pdo-interoperability>
>>> 
>>> —
>>> Denis
>>> 
 On Nov 23, 2016, at 12:01 AM, Alexander Paschenko <
>>> alexander.a.pasche...@gmail.com> wrote:
 
 Folks,
 
 Yesterday it'd been agreed with Sergi that DML branch is now
>> good
>>> to
 be included into 1.8 branch that is to be created.
 
 Minor review fixes, should they be made, and test fixes will be
 incorporated into 1.8 as separate patches later.
 
 Also, it'd been agreed that, in order to deliver these new
>>> features
> on
 time, two subtasks will be fixed in later releases (shortly):
 
 https://issues.apache.org/jira/browse/IGNITE-4268
 https://issues.apache.org/jira/browse/IGNITE-4269
 
 On failing tests:
 https://issues.apache.org/jira/browse/IGNITE-2294?
>>> focusedCommentId=15683377
 
 - Alex
 
 2016-11-22 11:29 GMT+03:00 Alexander Paschenko
 :
> Vlad,
> 
> Most likely today.
> 
> - Alex
> 
> 2016-11-22 11:25 GMT+03:00 Vladimir Ozerov <
>> voze...@gridgain.com
 :
>> Igniters,
>> 
>> I went through remaining tickets assigned to 1.8 and it seems
>>> that
> we
>>> are
>> pretty close to release. As far as I understand the biggest
> remaining
>> feature is DML [1]. I think we can create separate branch for
>>> 1.8

Fwd: [jira] [Created] (IGNITE-4345) incorrect/outdated info on site

2016-11-30 Thread Denis Magda
Nick, please handle the rest of the Docker-related issues, referring to the 
discussion in the ticket.

—
Denis

> Begin forwarded message:
> 
> From: "Sergey Korzhevsky (JIRA)" 
> Subject: [jira] [Created] (IGNITE-4345) incorrect/outdated info on site
> Date: November 30, 2016 at 9:56:59 AM PST
> To: dev@ignite.apache.org
> Reply-To: dev@ignite.apache.org
> 
> Sergey Korzhevsky created IGNITE-4345:
> -
> 
> Summary: incorrect/outdated info on site
> Key: IGNITE-4345
> URL: https://issues.apache.org/jira/browse/IGNITE-4345
> Project: Ignite
>  Issue Type: Bug
>  Components: documentation
>Reporter: Sergey Korzhevsky
>Priority: Trivial
> 
> 
> Old/incorrect info on the site.
> 
> 1) http://ignite.apache.org/download.cgi#docker
>   a) the "guide" link points to the 1.5 version
>   b) "docker repository" is a broken link; it should be
> https://hub.docker.com/u/apacheignite/,
> I guess.
> 
> 
> 2) https://apacheignite.readme.io/v1.7/docs/docker-deployment
> The IGNITE_CONFIG "example url" file (https://raw.githubusercontent.com/
> bob/master/ignite-cfg.xml) does not exist.
> I think it could point to some file in the current GitHub repo.
> 
> 
> 
> 
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)



[jira] [Created] (IGNITE-4345) incorrect/outdated info on site

2016-11-30 Thread Sergey Korzhevsky (JIRA)
Sergey Korzhevsky created IGNITE-4345:
-

 Summary: incorrect/outdated info on site
 Key: IGNITE-4345
 URL: https://issues.apache.org/jira/browse/IGNITE-4345
 Project: Ignite
  Issue Type: Bug
  Components: documentation
Reporter: Sergey Korzhevsky
Priority: Trivial


Old/incorrect info on the site.

1) http://ignite.apache.org/download.cgi#docker
   a) the "guide" link points to the 1.5 version
   b) "docker repository" is a broken link; it should be
https://hub.docker.com/u/apacheignite/,
I guess.


2) https://apacheignite.readme.io/v1.7/docs/docker-deployment
The IGNITE_CONFIG "example url" file (https://raw.githubusercontent.com/
bob/master/ignite-cfg.xml) does not exist.
I think it could point to some file in the current GitHub repo.






Re: Cassandra basic setup

2016-11-30 Thread Denis Magda
Cross-posting to the dev list.

Igor R., is this something that has to be fixed for 1.8 as well? The community 
is planning to send the new release for vote by the end of Friday, could you 
look into this issue by that time?

—
Denis

> On Nov 30, 2016, at 3:00 AM, Riccardo Iacomini  
> wrote:
> 
> Hi Igor,
> I would like to highlight to you another discrepancy I noticed while trying 
> to build the cassandra/ignite/ganglia cluster test in AWS using the provided 
> framework. I've built Ignite 1.7 from source, but the version hard-coded for 
> the modules in the cassandra-bootstrap.sh and ignite-bootstrap.sh scripts is 
> 1.6. 
> 
> Best Regards
> 
> Riccardo Iacomini
> RDSLab
> 
> 
> On Tue, Nov 29, 2016 at 11:18 PM, Igor Rudyak  > wrote:
> Ok, thanks for the info.
> 
> Igor
> 
> On Tue, Nov 29, 2016 at 12:56 PM, Denis Magda  > wrote:
> Igor,
> 
> The documentation has been moved to the new integrations related site
> https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra 
> 
> 
> You’re free to update it there.
> 
> —
> Denis
> 
>> On Nov 29, 2016, at 11:49 AM, Denis Magda > > wrote:
>> 
>> Igor,
>> 
>> Please hold on for a while. I’ve just started moving all the integrations 
>> related documentation to a new domain. I’ll let you know when it’s safe to 
>> update the doc.
>> 
>> —
>> Denis
>> 
>>> On Nov 29, 2016, at 10:46 AM, Igor Rudyak >> > wrote:
>>> 
>>> Hi Riccardo,
>>> 
>>> Thanks for noticing this. There were number of refactorings done and it 
>>> looks like we need to update the documentation.
>>> 
>>> I'll update the documentation for the module.
>>> 
>>> Igor
>>> 
>>> On Tue, Nov 29, 2016 at 9:26 AM, Denis Magda >> > wrote:
>>> Igor,
>>> 
>>> Would you mind looking through the documentation and updating it whenever 
>>> is needed?
>>> 
>>> —
>>> Denis
>>> 
 On Nov 29, 2016, at 1:58 AM, Riccardo Iacomini 
 > wrote:
 
 Hi Igor,
 I finally discovered what was causing the issue. The example provided 
  in the Ignite documentation 
 page has a subtle difference in the package structure of several classes, 
 for example:
 
 org.apache.ignite.cache.store.cassandra.datasource.DataSource (the class 
 in the ignite-cassandra.jar) vs 
 org.apache.ignite.cache.store.cassandra.utils.datasource.DataSource (in 
 the example). The same applies also for other classes in the example. I do 
 not know if this is due to a refactoring performed on the code which did 
 not propagates through the examples, or simply I used a different version 
 of the documentation in contrast with the one I was running.
 
 Anyway, thank you for your time.
 
 Best regards
 
 Riccardo Iacomini
 RDSLab
 
 
 On Mon, Nov 28, 2016 at 5:34 PM, Igor Rudyak > wrote:
 If you are using ignite.sh it should be fine.
 
 Igor
 
 
 On Nov 28, 2016 8:29 AM, "Riccardo Iacomini" > wrote:
 I will try. Yes I am using the ignite.sh command. Any drawbacks?
 
 Il 28 nov 2016 5:26 PM, "Igor Rudyak" > ha scritto:
 Try to check your classpath. Find ignite process usig something like  ps 
 -es | grep  Ignite and check java command used to launch ignite.
 
 By the way, how you launching Ignite? Do you use ignite.sh script for this?
 
 Igor
 
 
 On Nov 28, 2016 8:05 AM, "Riccardo Iacomini" > wrote:
 Hi Igor,
 I tried your suggestion, but it does find the class. I've also read from 
 the README files that optional modules must be copied into the libs 
 folder: same outcome. I've tried also the docker image, adding my 
 configuration xml files to it, and running the image as specified here 
  in the 
 documentation. Do you have any other suggestion?
 
 Thanks for your patience.
 
 Best regards
 
 Riccardo Iacomini
 RDSLab
 
 
 On Sun, Nov 27, 2016 at 8:46 PM, Igor Rudyak > wrote:
 Try to include required jars into IGNITE_LIBS environment variable.
 
 Igor 
 
 On Thu, Nov 24, 2016 at 5:21 AM, Riccardo Iacomini 
 > wrote:
 

[jira] [Created] (IGNITE-4344) Client node allocates off heap memories for caches

2016-11-30 Thread Nikolay Tikhonov (JIRA)
Nikolay Tikhonov created IGNITE-4344:


 Summary: Client node allocates off heap memories for caches
 Key: IGNITE-4344
 URL: https://issues.apache.org/jira/browse/IGNITE-4344
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.7, 1.6
Reporter: Nikolay Tikhonov
 Fix For: 1.9


A client node should not allocate off-heap memory. Test attached.





Re: SQL query CPU utilization too low.

2016-11-30 Thread Sergi Vladykin
Cool! I'll take a look today.

Sergi

2016-11-30 18:23 GMT+03:00 Andrey Mashenkov :

> Serj,  you can see a PR attached to jira issue [1], that can be opened with
> upsource [2].
>
> Tanks, I remember about distributed queries and wiil rework them right
> after we come to agreemant that the solution for simple queries is ok.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4106
> [2] http://reviews.ignite.apache.org/ignite/review/IGNT-CR-15
>
>
>
> On Wed, Nov 30, 2016 at 5:34 PM, Sergi Vladykin 
> wrote:
>
> > Per cache SQL parallelism level looks reasonable to me here.
> >
> > I'm not sure what do you mean about "prepared statement cache is useless
> > with splitted indices", most probably you parallelize queries in some
> wrong
> > way if this is true.
> >
> > Also do not forget about distributed joins: with parallel queries on the
> > same node we will need to make index range requests not only to remote
> > nodes, but to query contexts in parallel threads on the same local node
> as
> > well.
> >
> > Sergi
> >
> > 2016-11-30 17:23 GMT+03:00 Andrey Mashenkov  >:
> >
> > > It looks like we can't just split sql query to several threads due to
> H2
> > > limitations.
> > > We can bound query thread with certain set of partitions, but,
> actually,
> > H2
> > > will read whole index and then filter entries regarding its partition.
> > So,
> > > we can get significant speed-up that way.
> > >
> > > Unfortunatelly, H2 does not support sharding, and we need to have a
> > > workaround. We can try to split indices, so each query thread would be
> > > bounded with its own index part.
> > > I've implemented such prototype and get significant speed up with
> single
> > > node grid as if it was several node grid.
> > > Due to H2 knows nothing about splitted indices, we must bother about
> > every
> > > query should be run as TwoStepQuery and utilize all table index parts.
> > >
> > > As index creation on demand is very heavy operation, index should be
> > > splitted when it is created. So we can set parallelizm level on
> per-cache
> > > base but not per-query.
> > >
> > > Another issue I've faced is that our implementation of prepared
> statement
> > > cache is useless with splitted indices. Prepared statement cached  in
> > > thread local variable and it seems that the statement is bounded with
> > > certain index part. So if we reuse same statement for different index
> > parts
> > > we will get unexpected results.
> > >
> > > On Sun, Oct 30, 2016 at 8:46 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > Completely agree, great point!
> > > >
> > > > On Sun, Oct 30, 2016 at 9:17 AM, Sergi Vladykin <
> > > sergi.vlady...@gmail.com>
> > > > wrote:
> > > >
> > > > > I think it must be a maximum local parallelism level but not just
> > `on`
> > > > and
> > > > > `off` setting (the default is obviously 1). This along with
> > separately
> > > > > configurable query thread pool will give a finer grained control
> over
> > > > > resources.
> > > > >
> > > > > Sergi
> > > > >
> > > > > 2016-10-30 18:22 GMT+03:00 Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >:
> > > > >
> > > > > > I already mentioned this in another email, but we should be able
> to
> > > > turn
> > > > > > this property on and off on per-query and per-cache levels.
> > > > > >
> > > > > > On Sat, Oct 29, 2016 at 11:45 AM, Sergi Vladykin <
> > > > > sergi.vlady...@gmail.com
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Agree, lets implement such a parallelization.
> > > > > > >
> > > > > > > I think we will need an explicit setting for SqlQuery and
> > > > > SqlFieldsQuery,
> > > > > > > the default behavior should not change.
> > > > > > >
> > > > > > > Sergi
> > > > > > >
> > > > > > > 2016-10-28 22:39 GMT+03:00 Andrey Mashenkov <
> > > amashen...@gridgain.com
> > > > >:
> > > > > > >
> > > > > > > > So, now we have every SQL query run on each node in single
> > > thread.
> > > > > This
> > > > > > > can
> > > > > > > > be an issue for heavy queries or queries running on big data
> > > sets,
> > > > > e.g.
> > > > > > > > analytical queries.
> > > > > > > >
> > > > > > > > For now, the only way to speed up such queries is to add more
> > > nodes
> > > > > to
> > > > > > > grid
> > > > > > > > running on same server. In this case, data will be
> partitioned
> > > over
> > > > > all
> > > > > > > > these nodes and query will be split and run on all nodes.
> > > > > > > >
> > > > > > > > It seems, we can have a benefit if split SQL queries locally
> as
> > > we
> > > > do
> > > > > > it
> > > > > > > > across nodes with TwoStepQuery.
> > > > > > > >
> > > > > > > >
> > > > > > > > Thoughts?
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > С уважением,
> > > Машенков Андрей Владимирович
> > > Тел. +7-921-932-61-82
> > >
> > > Best regards,
> > > Andrey V. Mashenkov
> > > Cerr: 

Re: SQL query CPU utilization too low.

2016-11-30 Thread Andrey Mashenkov
Sergi, you can see a PR attached to the JIRA issue [1]; it can be opened with
Upsource [2].

Thanks, I remember about the distributed queries and will rework them right
after we agree that the solution for simple queries is OK.

[1] https://issues.apache.org/jira/browse/IGNITE-4106
[2] http://reviews.ignite.apache.org/ignite/review/IGNT-CR-15



On Wed, Nov 30, 2016 at 5:34 PM, Sergi Vladykin 
wrote:

> Per cache SQL parallelism level looks reasonable to me here.
>
> I'm not sure what do you mean about "prepared statement cache is useless
> with splitted indices", most probably you parallelize queries in some wrong
> way if this is true.
>
> Also do not forget about distributed joins: with parallel queries on the
> same node we will need to make index range requests not only to remote
> nodes, but to query contexts in parallel threads on the same local node as
> well.
>
> Sergi
>
> 2016-11-30 17:23 GMT+03:00 Andrey Mashenkov :
>
> > It looks like we can't just split sql query to several threads due to H2
> > limitations.
> > We can bound query thread with certain set of partitions, but, actually,
> H2
> > will read whole index and then filter entries regarding its partition.
> So,
> > we can get significant speed-up that way.
> >
> > Unfortunatelly, H2 does not support sharding, and we need to have a
> > workaround. We can try to split indices, so each query thread would be
> > bounded with its own index part.
> > I've implemented such prototype and get significant speed up with single
> > node grid as if it was several node grid.
> > Due to H2 knows nothing about splitted indices, we must bother about
> every
> > query should be run as TwoStepQuery and utilize all table index parts.
> >
> > As index creation on demand is very heavy operation, index should be
> > splitted when it is created. So we can set parallelizm level on per-cache
> > base but not per-query.
> >
> > Another issue I've faced is that our implementation of prepared statement
> > cache is useless with splitted indices. Prepared statement cached  in
> > thread local variable and it seems that the statement is bounded with
> > certain index part. So if we reuse same statement for different index
> parts
> > we will get unexpected results.
> >
> > On Sun, Oct 30, 2016 at 8:46 PM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> > wrote:
> >
> > > Completely agree, great point!
> > >
> > > On Sun, Oct 30, 2016 at 9:17 AM, Sergi Vladykin <
> > sergi.vlady...@gmail.com>
> > > wrote:
> > >
> > > > I think it must be a maximum local parallelism level but not just
> `on`
> > > and
> > > > `off` setting (the default is obviously 1). This along with
> separately
> > > > configurable query thread pool will give a finer grained control over
> > > > resources.
> > > >
> > > > Sergi
> > > >
> > > > 2016-10-30 18:22 GMT+03:00 Dmitriy Setrakyan  >:
> > > >
> > > > > I already mentioned this in another email, but we should be able to
> > > turn
> > > > > this property on and off on per-query and per-cache levels.
> > > > >
> > > > > On Sat, Oct 29, 2016 at 11:45 AM, Sergi Vladykin <
> > > > sergi.vlady...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Agree, lets implement such a parallelization.
> > > > > >
> > > > > > I think we will need an explicit setting for SqlQuery and
> > > > SqlFieldsQuery,
> > > > > > the default behavior should not change.
> > > > > >
> > > > > > Sergi
> > > > > >
> > > > > > 2016-10-28 22:39 GMT+03:00 Andrey Mashenkov <
> > amashen...@gridgain.com
> > > >:
> > > > > >
> > > > > > > So, now we have every SQL query run on each node in single
> > thread.
> > > > This
> > > > > > can
> > > > > > > be an issue for heavy queries or queries running on big data
> > sets,
> > > > e.g.
> > > > > > > analytical queries.
> > > > > > >
> > > > > > > For now, the only way to speed up such queries is to add more
> > nodes
> > > > to
> > > > > > grid
> > > > > > > running on same server. In this case, data will be partitioned
> > over
> > > > all
> > > > > > > these nodes and query will be split and run on all nodes.
> > > > > > >
> > > > > > > It seems, we can have a benefit if split SQL queries locally as
> > we
> > > do
> > > > > it
> > > > > > > across nodes with TwoStepQuery.
> > > > > > >
> > > > > > >
> > > > > > > Thoughts?
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > С уважением,
> > Машенков Андрей Владимирович
> > Тел. +7-921-932-61-82
> >
> > Best regards,
> > Andrey V. Mashenkov
> > Cerr: +7-921-932-61-82
> >
>



-- 
С уважением,
Машенков Андрей Владимирович
Тел. +7-921-932-61-82

Best regards,
Andrey V. Mashenkov
Cerr: +7-921-932-61-82
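The prepared-statement pitfall raised in this thread (a statement cached per thread but implicitly bound to one index part) can be avoided by keying the cache on both the SQL text and the index segment. Below is a minimal plain-Java sketch; the class and method names are invented for illustration, and strings stand in for the H2 `PreparedStatement` objects a real implementation would cache:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SegmentedStmtCacheSketch {
    // Cache keyed by (sql, index segment), so a statement prepared against one
    // index part is never reused against another. Strings stand in for real
    // H2 PreparedStatement objects in this sketch.
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    public String prepare(String sql, int segment) {
        return cache.computeIfAbsent(sql + '#' + segment,
            k -> "prepared[" + k + "]");
    }

    public static void main(String[] args) {
        SegmentedStmtCacheSketch c = new SegmentedStmtCacheSketch();
        // Same SQL, different segments -> distinct cached statements.
        System.out.println(c.prepare("SELECT 1", 0));
        System.out.println(c.prepare("SELECT 1", 1));
    }
}
```

With this keying, reusing "the same" statement across index parts becomes impossible by construction, at the cost of one cached statement per (query, segment) pair.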


[jira] [Created] (IGNITE-4343) Check mutable entries for existence properly inside DML entry processors

2016-11-30 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-4343:
---

 Summary: Check mutable entries for existence properly inside DML 
entry processors
 Key: IGNITE-4343
 URL: https://issues.apache.org/jira/browse/IGNITE-4343
 Project: Ignite
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.8
Reporter: Alexander Paschenko
Assignee: Alexander Paschenko
 Fix For: 1.8








[GitHub] ignite pull request #1303: IGNITE-4340 Convert results of sub SELECTs to exp...

2016-11-30 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/1303

IGNITE-4340 Convert results of sub SELECTs to expected column types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4340

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1303.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1303


commit 2991fb45e8d6c06de216c0fa9dcd98f843b43bf8
Author: Alexander Paschenko 
Date:   2016-11-30T15:05:06Z

IGNITE-4340 Convert results of sub SELECTs to expected column types inside 
INSERT, DELETE, and MERGE




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: SQL: Table aliases not supported for SqlQuery

2016-11-30 Thread Sergi Vladykin
I don't mind having an alias in SqlQuery, but it is better to add a setter
method setAlias instead of an additional constructor with the signature
(String, String, String).

Sergi

2016-11-30 17:55 GMT+03:00 Andrey Mashenkov :

> Hi Igniters,
>
> H2Indexing.generateQuery() generates wrong sql query. It is used in
> SqlQuery class for queries like this:
>  new SqlQuery(Person.class. "from Person p where p.salary > ? and
> p.salary <= ?")
> This query produce next sql query string:
> "SELECT "".Person._key, ""Person"._val FROM Person p where p.salary > ?
> and p.salary <= ?"
> We should use table alias instead on table name in "SELECT" query part. It
> looks like we can't automatically determine correct alias, as we can have
> multiple aliases for one table or even subquery in "FROM" part.
>
> The solution is to provide table alias SqlQuery object to generate correct
> query. SqlQuery is ignite public class.
>
> Is it ok, if I add new constructor in SqlQuery class?
>


SQL: Table aliases not supported for SqlQuery

2016-11-30 Thread Andrey Mashenkov
Hi Igniters,

H2Indexing.generateQuery() generates a wrong SQL query. It is used by the
SqlQuery class for queries like this:
 new SqlQuery(Person.class, "from Person p where p.salary > ? and
p.salary <= ?")
This query produces the following SQL string:
"SELECT "Person"._key, "Person"._val FROM Person p where p.salary > ?
and p.salary <= ?"
We should use the table alias instead of the table name in the "SELECT" part
of the query. It looks like we can't automatically determine the correct
alias, as we can have multiple aliases for one table, or even a subquery in
the "FROM" part.

The solution is to provide the table alias to the SqlQuery object so that it
can generate a correct query. SqlQuery is a public Ignite class.

Is it OK if I add a new constructor to the SqlQuery class?
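To make the idea concrete, here is a minimal plain-Java sketch of building the projection from a user-supplied alias. The class and method names are invented for illustration and are not the actual H2Indexing code:

```java
public class AliasedQuerySketch {
    /** Builds the SELECT prefix from the alias when one is given, else the table name. */
    public static String generateQuery(String tbl, String alias, String sqlTail) {
        String from = (alias == null || alias.isEmpty()) ? tbl : alias;
        return "SELECT \"" + from + "\"._key, \"" + from + "\"._val " + sqlTail;
    }

    public static void main(String[] args) {
        // With the alias "p" the projection matches the FROM clause.
        System.out.println(generateQuery("Person", "p",
            "from Person p where p.salary > ? and p.salary <= ?"));
    }
}
```

When no alias is supplied the behavior falls back to the current table-name form, so existing queries would be unaffected.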


[GitHub] ignite pull request #1302: Ignite 4341

2016-11-30 Thread iveselovskiy
GitHub user iveselovskiy opened a pull request:

https://github.com/apache/ignite/pull/1302

Ignite 4341



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4341

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1302.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1302


commit b038730ee56a662f73e02bbec83eb1712180fa82
Author: isapego 
Date:   2016-11-23T09:05:54Z

IGNITE-4249: ODBC: Fixed performance issue caused by inefficient IO 
handling on CPP side. This closes #1254.

commit 7a47a0185d308cd3a58c7bfcb4d1cd548bff5b87
Author: devozerov 
Date:   2016-11-24T08:14:08Z

IGNITE-4270: Allow GridUnsafe.UNALIGNED flag override.

commit bf330251734018467fa3291fccf0414c9da7dd1b
Author: Andrey Novikov 
Date:   2016-11-24T10:08:08Z

Web console beta-6.

commit 7d88c5bfe7d6f130974fab1ed4266fff859afd3d
Author: Andrey Novikov 
Date:   2016-11-24T10:59:33Z

Web console beta-6. Minor fix.

commit 9c6824b4f33fbdead64299d9e0c34365d5d4a570
Author: nikolay_tikhonov 
Date:   2016-11-24T13:27:05Z

IGNITE-3958 Fixed "Client node should not start rest processor".

commit 56998e704e9a67760c70481c10c56e72c0a866bb
Author: Konstantin Dudkov 
Date:   2016-10-28T13:27:34Z

ignite-4088 Added methods to create/destroy multiple caches. This closes 
#1174.

(cherry picked from commit f445e7b)

commit 3e2ccfd30427ba0552eea8667c0129ae5ace9c0b
Author: Igor Sapego 
Date:   2016-11-25T11:26:54Z

IGNITE-4299: Fixes for examples.

commit 6fbaef45af8f40062a95058df7ec0984c99035b9
Author: Konstantin Dudkov 
Date:   2016-11-25T10:58:58Z

IGNITE-4305 marshalling fix in GridNearAtomicSingleUpdateInvokeRequest

commit 1a2de51f5807a91ce0d5dff28f24ed5bf7abebbc
Author: Konstantin Dudkov 
Date:   2016-11-28T09:59:02Z

IGNITE-4305 marshalling fix

commit c06e4017771603df7118974758d3d6b9cadc41b5
Author: Eduard Shangareev 
Date:   2016-11-30T11:34:47Z

ignite-4332 Usage of cache.getEntry inside GridCacheQueryManager.runQuery 
causes to remote operations

commit 14ddf5333103b5cf1aa14a6b39c82a5c474975c9
Author: iveselovskiy 
Date:   2016-11-30T14:51:52Z

IGNITE-4341: added TeraSort as unit test. DistributedCache is copying files 
now even if there is no staging dir at all -- in test mode this is the case.

commit 801450e49a67cd705996b6f0fac43389ed680e14
Author: iveselovskiy 
Date:   2016-11-30T14:54:27Z

IGNITE-4341: removed TODO.






Re: SQL query CPU utilization too low.

2016-11-30 Thread Sergi Vladykin
A per-cache SQL parallelism level looks reasonable to me here.

I'm not sure what you mean by "prepared statement cache is useless
with split indices"; most probably you are parallelizing queries in a wrong
way if this is true.

Also do not forget about distributed joins: with parallel queries on the
same node we will need to make index range requests not only to remote
nodes, but to query contexts in parallel threads on the same local node as
well.

Sergi

2016-11-30 17:23 GMT+03:00 Andrey Mashenkov :

> It looks like we can't just split sql query to several threads due to H2
> limitations.
> We can bound query thread with certain set of partitions, but, actually, H2
> will read whole index and then filter entries regarding its partition. So,
> we can get significant speed-up that way.
>
> Unfortunatelly, H2 does not support sharding, and we need to have a
> workaround. We can try to split indices, so each query thread would be
> bounded with its own index part.
> I've implemented such prototype and get significant speed up with single
> node grid as if it was several node grid.
> Due to H2 knows nothing about splitted indices, we must bother about every
> query should be run as TwoStepQuery and utilize all table index parts.
>
> As index creation on demand is very heavy operation, index should be
> splitted when it is created. So we can set parallelizm level on per-cache
> base but not per-query.
>
> Another issue I've faced is that our implementation of prepared statement
> cache is useless with splitted indices. Prepared statement cached  in
> thread local variable and it seems that the statement is bounded with
> certain index part. So if we reuse same statement for different index parts
> we will get unexpected results.
>
> On Sun, Oct 30, 2016 at 8:46 PM, Dmitriy Setrakyan 
> wrote:
>
> > Completely agree, great point!
> >
> > On Sun, Oct 30, 2016 at 9:17 AM, Sergi Vladykin <
> sergi.vlady...@gmail.com>
> > wrote:
> >
> > > I think it must be a maximum local parallelism level but not just `on`
> > and
> > > `off` setting (the default is obviously 1). This along with separately
> > > configurable query thread pool will give a finer grained control over
> > > resources.
> > >
> > > Sergi
> > >
> > > 2016-10-30 18:22 GMT+03:00 Dmitriy Setrakyan :
> > >
> > > > I already mentioned this in another email, but we should be able to
> > turn
> > > > this property on and off on per-query and per-cache levels.
> > > >
> > > > On Sat, Oct 29, 2016 at 11:45 AM, Sergi Vladykin <
> > > sergi.vlady...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > Agree, lets implement such a parallelization.
> > > > >
> > > > > I think we will need an explicit setting for SqlQuery and
> > > SqlFieldsQuery,
> > > > > the default behavior should not change.
> > > > >
> > > > > Sergi
> > > > >
> > > > > 2016-10-28 22:39 GMT+03:00 Andrey Mashenkov <
> amashen...@gridgain.com
> > >:
> > > > >
> > > > > > So, now we have every SQL query run on each node in single
> thread.
> > > This
> > > > > can
> > > > > > be an issue for heavy queries or queries running on big data
> sets,
> > > e.g.
> > > > > > analytical queries.
> > > > > >
> > > > > > For now, the only way to speed up such queries is to add more
> nodes
> > > to
> > > > > grid
> > > > > > running on same server. In this case, data will be partitioned
> over
> > > all
> > > > > > these nodes and query will be split and run on all nodes.
> > > > > >
> > > > > > It seems, we can have a benefit if split SQL queries locally as
> we
> > do
> > > > it
> > > > > > across nodes with TwoStepQuery.
> > > > > >
> > > > > >
> > > > > > Thoughts?
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> С уважением,
> Машенков Андрей Владимирович
> Тел. +7-921-932-61-82
>
> Best regards,
> Andrey V. Mashenkov
> Cerr: +7-921-932-61-82
>


[jira] [Created] (IGNITE-4342) DML: clear() on atomic cache causes the exception from previous failed DML statement

2016-11-30 Thread Sergey Kozlov (JIRA)
Sergey Kozlov created IGNITE-4342:
-

 Summary: DML: clear() on atomic cache causes the exception from 
previous failed DML statement
 Key: IGNITE-4342
 URL: https://issues.apache.org/jira/browse/IGNITE-4342
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 1.8
Reporter: Sergey Kozlov
 Fix For: 1.8


0. Extract the attachment into the {{examples}} directory
1. Start {{ExtSqlExampleNodeStartup}}
2. Start {{ExtSqlAtomicExample}}. It fails:
{noformat}
[17:22:23,763][INFO ][main][GridDiscoveryManager] Topology snapshot [ver=6, 
servers=1, clients=1, CPUs=8, heap=2.0GB]
The cache size atomic-part-full-sync: 0
Preloading ... 
The cache size atomic-part-full-sync: 1000
Update failed
Clear cache atomic-part-full-sync
[17:22:25,207][ERROR][mgmt-#60%null%][GridTaskWorker] Failed to obtain remote 
job result policy for result from ComputeTask.result(..) method (will fail the 
whole task): GridJobResultImpl 
[job=o.a.i.i.processors.cache.GridCacheAdapter$GlobalClearAllJob@5dca5ddf, 
sib=GridJobSiblingImpl [sesId=3fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, 
jobId=4fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, 
nodeId=87311db5-3911-41f6-ac47-e739031a91dd, isJobDone=false], 
jobCtx=GridJobContextImpl 
[jobId=4fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, timeoutObj=null, 
attrs={}], node=TcpDiscoveryNode [id=87311db5-3911-41f6-ac47-e739031a91dd, 
addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 172.22.222.137, 172.25.4.107], 
sockAddrs=[/172.22.222.137:47500, /172.25.4.107:47500, /127.0.0.1:47500, 
/0:0:0:0:0:0:0:1:47500], discPort=47500, order=1, intOrder=1, 
lastExchangeTime=1480515743167, loc=false, ver=1.8.0#20161128-sha1:a53fd38c, 
isClient=false], ex=class o.a.i.compute.ComputeUserUndeclaredException: Failed 
to execute job due to unexpected runtime exception 
[jobId=4fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, ses=GridJobSessionImpl 
[ses=GridTaskSessionImpl 
[taskName=o.a.i.i.processors.cache.GridCacheAdapter$ClearTask, 
dep=LocalDeployment [super=GridDeployment [ts=1480515673415, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@2e5f8245, 
clsLdrId=ef34c95b851-87311db5-3911-41f6-ac47-e739031a91dd, userVer=0, loc=true, 
sampleClsName=java.lang.String, pendingUndeploy=false, undeployed=false, 
usage=0]], taskClsName=o.a.i.i.processors.cache.GridCacheAdapter$ClearTask, 
sesId=3fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, 
startTime=1480515745181, endTime=9223372036854775807, 
taskNodeId=3bc6c31d-7339-434a-b0cd-2edda611fc19, 
clsLdr=sun.misc.Launcher$AppClassLoader@2e5f8245, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=true, 
subjId=3bc6c31d-7339-434a-b0cd-2edda611fc19, mapFut=IgniteFuture 
[orig=GridFutureAdapter [resFlag=0, res=null, startTime=1480515745181, 
endTime=0, ignoreInterrupts=false, state=INIT]]], 
jobId=4fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19]], hasRes=true, 
isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw exception.
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$ClearTask.result(GridCacheAdapter.java:6800)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1030)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1023)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6596)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1023)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:841)
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:996)
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1221)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:710)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:102)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:673)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.compute.ComputeUserUndeclaredException: 
Failed to execute job due to unexpected runtime exception 
[jobId=4fe2d95b851-3bc6c31d-7339-434a-b0cd-2edda611fc19, ses=GridJobSessionImpl 
[ses=GridTaskSessionImpl 

Re: SQL query CPU utilization too low.

2016-11-30 Thread Andrey Mashenkov
It looks like we can't just split an SQL query across several threads due to H2
limitations.
We can bound a query thread to a certain set of partitions, but H2 will
actually read the whole index and then filter entries by partition, so we
cannot get a significant speed-up that way.

Unfortunately, H2 does not support sharding, and we need a
workaround. We can try to split the indices, so that each query thread is
bounded to its own index part.
I've implemented such a prototype and got a significant speed-up on a
single-node grid, as if it were a multi-node grid.
Since H2 knows nothing about the split indices, we must make sure that every
query is run as a TwoStepQuery and utilizes all table index parts.

As on-demand index creation is a very heavy operation, an index should be
split when it is created. So we can set the parallelism level on a per-cache
basis, but not per-query.

Another issue I've faced is that our implementation of the prepared statement
cache is useless with split indices. A prepared statement is cached in a
thread-local variable, and it seems that the statement is bound to a certain
index part. So if we reuse the same statement for different index parts,
we will get unexpected results.

On Sun, Oct 30, 2016 at 8:46 PM, Dmitriy Setrakyan 
wrote:

> Completely agree, great point!
>
> On Sun, Oct 30, 2016 at 9:17 AM, Sergi Vladykin 
> wrote:
>
> > I think it must be a maximum local parallelism level but not just `on`
> and
> > `off` setting (the default is obviously 1). This along with separately
> > configurable query thread pool will give a finer grained control over
> > resources.
> >
> > Sergi
> >
> > 2016-10-30 18:22 GMT+03:00 Dmitriy Setrakyan :
> >
> > > I already mentioned this in another email, but we should be able to
> turn
> > > this property on and off on per-query and per-cache levels.
> > >
> > > On Sat, Oct 29, 2016 at 11:45 AM, Sergi Vladykin <
> > sergi.vlady...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Agree, lets implement such a parallelization.
> > > >
> > > > I think we will need an explicit setting for SqlQuery and
> > SqlFieldsQuery,
> > > > the default behavior should not change.
> > > >
> > > > Sergi
> > > >
> > > > 2016-10-28 22:39 GMT+03:00 Andrey Mashenkov  >:
> > > >
> > > > > So, now we have every SQL query run on each node in single thread.
> > This
> > > > can
> > > > > be an issue for heavy queries or queries running on big data sets,
> > e.g.
> > > > > analytical queries.
> > > > >
> > > > > For now, the only way to speed up such queries is to add more nodes
> > to
> > > > grid
> > > > > running on same server. In this case, data will be partitioned over
> > all
> > > > > these nodes and query will be split and run on all nodes.
> > > > >
> > > > > It seems, we can have a benefit if split SQL queries locally as we
> do
> > > it
> > > > > across nodes with TwoStepQuery.
> > > > >
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > >
> >
>



-- 
С уважением,
Машенков Андрей Владимирович
Тел. +7-921-932-61-82

Best regards,
Andrey V. Mashenkov
Cerr: +7-921-932-61-82
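The split-and-reduce idea discussed in this thread can be sketched in plain Java: each worker is bounded to its own slice (standing in for an index part), and partial results are reduced, much like a two-step query. Class and method names are made up for illustration, and an in-memory array replaces Ignite's actual index structures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScanSketch {
    /** Counts matching rows in parallel; each worker scans only its own slice. */
    public static long countMatching(int[] vals, int threshold, int parallelism) {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            List<Future<Long>> futs = new ArrayList<>();
            int chunk = (vals.length + parallelism - 1) / parallelism;
            for (int p = 0; p < parallelism; p++) {
                final int from = p * chunk;
                final int to = Math.min(from + chunk, vals.length);
                futs.add(pool.submit(() -> {
                    long cnt = 0;
                    for (int i = from; i < to; i++)
                        if (vals[i] > threshold)
                            cnt++;
                    return cnt;
                }));
            }
            long total = 0;
            for (Future<Long> f : futs)
                total += f.get();   // reduce step, as in a two-step query
            return total;
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
        finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // Salaries 5, 10, 20 exceed the threshold of 4.
        System.out.println(countMatching(new int[] {1, 5, 10, 20}, 4, 2)); // prints 3
    }
}
```

The `parallelism` argument here plays the role of the proposed per-cache maximum parallelism level; with `parallelism = 1` the scan degrades to today's single-thread behavior.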


[jira] [Created] (IGNITE-4341) Add TeraSort example as a unit test to Ignite

2016-11-30 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-4341:
---

 Summary: Add TeraSort example as a unit test to Ignite
 Key: IGNITE-4341
 URL: https://issues.apache.org/jira/browse/IGNITE-4341
 Project: Ignite
  Issue Type: Test
  Components: hadoop
Affects Versions: 1.7
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky
 Fix For: 1.8


Add canonical TeraSort example as a unit test. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-4340) Implicitly cast new column values to expected types on SQL UPDATE

2016-11-30 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-4340:
---

 Summary: Implicitly cast new column values to expected types on 
SQL UPDATE
 Key: IGNITE-4340
 URL: https://issues.apache.org/jira/browse/IGNITE-4340
 Project: Ignite
  Issue Type: Improvement
  Components: SQL
Affects Versions: 1.8
Reporter: Alexander Paschenko
Assignee: Alexander Paschenko
 Fix For: 1.8


When the following query is run,

{code:sql}
update AllTypes set longCol = 1 where _key = ?
{code}

it fails with exception

{noformat}
Suppressed: java.lang.ClassCastException: java.lang.Integer cannot be cast to 
java.lang.Long
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$RowDescriptor.wrap(IgniteH2Indexing.java:2960)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2AbstractKeyValueRow.getValue(GridH2AbstractKeyValueRow.java:316)
at 
org.h2.index.BaseIndex.compareRows(BaseIndex.java:294)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2TreeIndex$2.compare(GridH2TreeIndex.java:103)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2TreeIndex$2.compare(GridH2TreeIndex.java:95)
at 
java.util.concurrent.ConcurrentSkipListMap$ComparableUsingComparator.compareTo(ConcurrentSkipListMap.java:647)
at 
java.util.concurrent.ConcurrentSkipListMap.findPredecessor(ConcurrentSkipListMap.java:727)
at 
java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:850)
at 
java.util.concurrent.ConcurrentSkipListMap.put(ConcurrentSkipListMap.java:1645)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2TreeIndex.put(GridH2TreeIndex.java:362)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:566)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:495)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:603)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:737)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:431)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:4019)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2458)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2385)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1787)
{noformat}

This happens because the SELECT part of the UPDATE yields 1 as an int, and that 
value is assigned to the field as-is. The problem can be solved by implicitly 
casting the SELECTed values to the types that the target columns expect.
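A hedged sketch of the proposed fix (this is not the actual IgniteH2Indexing code; the class and method names here are illustrative): before storing, coerce the value produced by the SELECT part to the type the target column expects instead of assigning it directly.

```java
import java.math.BigDecimal;

public class ColumnCoercion {
    /** Casts a numeric value produced by the SELECT part to the target column's type. */
    public static Object coerce(Object val, Class<?> expected) {
        if (val == null || expected.isInstance(val))
            return val; // nothing to do
        if (val instanceof Number) {
            Number n = (Number) val;
            if (expected == Long.class)
                return n.longValue();
            if (expected == Integer.class)
                return n.intValue();
            if (expected == Double.class)
                return n.doubleValue();
            if (expected == BigDecimal.class)
                return new BigDecimal(n.toString());
        }
        // Fall back to the failure we see today for genuinely incompatible types.
        throw new ClassCastException(val.getClass() + " cannot be cast to " + expected);
    }

    public static void main(String[] args) {
        // UPDATE ... SET longCol = 1: the SELECT yields an Integer, the column expects Long.
        Object stored = coerce(1, Long.class);
        System.out.println(stored.getClass().getSimpleName() + " " + stored); // prints: Long 1
    }
}
```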





[jira] [Created] (IGNITE-4339) .NET: Execute .NET code on Java-only nodes

2016-11-30 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-4339:
--

 Summary: .NET: Execute .NET code on Java-only nodes
 Key: IGNITE-4339
 URL: https://issues.apache.org/jira/browse/IGNITE-4339
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Reporter: Pavel Tupitsyn


The idea is to encode simple predicates (query filters, event filters, etc.) in 
a format that can be decoded and executed on Java-only nodes, without requiring 
any user classes to be deployed there.

An example of such a predicate is "compare a certain field to one value and 
another field to another value".

The user-facing API can operate on Expression Trees or provide some kind of 
predicate builder with a fluent syntax.
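To illustrate the fluent-builder direction, here is a minimal, class-free sketch (all names hypothetical, not a proposed Ignite API): the predicate is described as field/value pairs that the Java side can evaluate against a row's field map without any user classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class FieldPredicate {
    private final List<String> fields = new ArrayList<>();
    private final List<Object> values = new ArrayList<>();

    /** Fluent builder: "compare a certain field to one value and another field to another". */
    public FieldPredicate eq(String field, Object val) {
        fields.add(field);
        values.add(val);
        return this;
    }

    /** Java-side evaluation against a field map; no user classes required on the node. */
    public boolean test(Map<String, Object> row) {
        for (int i = 0; i < fields.size(); i++)
            if (!Objects.equals(row.get(fields.get(i)), values.get(i)))
                return false;
        return true;
    }

    public static void main(String[] args) {
        FieldPredicate p = new FieldPredicate().eq("name", "Ignite").eq("nodes", 3);
        Map<String, Object> row = new HashMap<>();
        row.put("name", "Ignite");
        row.put("nodes", 3);
        System.out.println(p.test(row)); // prints: true
    }
}
```

In the real feature, the .NET side would serialize such a description (e.g. derived from an Expression Tree) and the Java side would decode and apply it field-by-field, as this sketch does.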





[GitHub] ignite pull request #1301: IGNITE-3964: SQL: implement support for table ali...

2016-11-30 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/1301

IGNITE-3964: SQL: implement support for table alias

Custom table name can be set for QueryEntity.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3964

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1301.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1301


commit 1d03c30f6254076a6fafb280678119cdd3940ced
Author: Andrey V. Mashenkov 
Date:   2016-11-29T16:44:26Z

Added table alias




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #1300: Ignite 4223

2016-11-30 Thread kdudkov
GitHub user kdudkov opened a pull request:

https://github.com/apache/ignite/pull/1300

Ignite 4223



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4223

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/1300.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1300


commit bfb00b6e61f9709718c30971997aeb0ac79e86b4
Author: Alexandr Kuramshin 
Date:   2016-11-18T20:12:28Z

IgniteTcpCommunicationBigClusterTest added

commit 02dd92e605b9b53f5a16c7ec5f8e7b5698b15ba4
Author: Alexandr Kuramshin 
Date:   2016-11-18T21:55:37Z

IgniteTcpCommunicationBigClusterTest update

commit 6acf193a3d356d1bad4c02a53ac76833ed1008d0
Author: Alexandr Kuramshin 
Date:   2016-11-19T09:55:45Z

Have got TcpCommunicationSpi error

commit 4fd39653d24f62f19f70b4dffba8497185cc46fb
Author: Alexandr Kuramshin 
Date:   2016-11-19T16:39:10Z

Some discovery have been done

commit c2c181922c7c24ea457577e32d2af897c8bec87f
Author: Alexandr Kuramshin 
Date:   2016-11-19T20:11:28Z

Prove that problem is not in the onFirstMessage hang

commit f8076edba097f6077229b2090ee3ff1a3369878c
Author: Alexandr Kuramshin 
Date:   2016-11-19T20:26:37Z

Revert: Prove that problem is not in the onFirstMessage hang

commit 6e1f2dfc2acb3dbb8f24aa51ed67b2ee447b4585
Author: Alexandr Kuramshin 
Date:   2016-11-21T08:55:09Z

Revert: pushing unnecessary changes to the master

commit ed794ca815f6bb1471af15779279d287576b39cc
Author: Alexandr Kuramshin 
Date:   2016-11-21T09:08:00Z

Revert: pushing unnecessary changes to the master

commit 3fb723d9e693141830fc3bcff96408f0558fa695
Author: Konstantin Dudkov 
Date:   2016-11-24T09:49:34Z

IGNITE-4223

commit a3310154203d92b0ad953636406ca6fae38b7573
Author: Konstantin Dudkov 
Date:   2016-11-25T09:05:47Z

IGNITE-4223 wip

commit b8ae8937c8ad0ea0e5037c4a5c339c22a63aaf08
Author: Konstantin Dudkov 
Date:   2016-11-28T16:28:19Z

IGNITE-4223

commit ef7f115050189952e6086492d21cba29679ed676
Author: Konstantin Dudkov 
Date:   2016-11-29T10:21:45Z

IGNITE-4223

commit c3af4d93592d5b7bf2f32ade65cb30cad68bbdf2
Author: Konstantin Dudkov 
Date:   2016-11-29T13:39:36Z

Merge remote-tracking branch 'apache/master' into ignite-4223

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/managers/communication/GridIoMessageFactory.java
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java

commit 5c01b040b2a9e6fc7d65577f947d565984401ffb
Author: Konstantin Dudkov 
Date:   2016-11-29T13:56:41Z

IGNITE-4223 merge fix

commit 6dc69b07a99b45eeadc9e234fe3258724f5a6534
Author: Konstantin Dudkov 
Date:   2016-11-29T14:21:28Z

IGNITE-4223






[jira] [Created] (IGNITE-4338) Cross-schema SQL SELECT on partitioned cache fails for no good reason

2016-11-30 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-4338:
---

 Summary: Cross-schema SQL SELECT on partitioned cache fails for no 
good reason
 Key: IGNITE-4338
 URL: https://issues.apache.org/jira/browse/IGNITE-4338
 Project: Ignite
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.8
Reporter: Alexander Paschenko
 Fix For: 2.0








[jira] [Created] (IGNITE-4337) Introduce persistence interface to allow build reliable persistence plugins

2016-11-30 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-4337:


 Summary: Introduce persistence interface to allow build reliable 
persistence plugins
 Key: IGNITE-4337
 URL: https://issues.apache.org/jira/browse/IGNITE-4337
 Project: Ignite
  Issue Type: Sub-task
  Components: general
Reporter: Alexey Goncharuk
 Fix For: 2.0


If the page memory interface is introduced, it may be possible to build a 
persistence layer around this architecture. I think we should add some form of 
persistence logging to allow us to build a crash-resistant system in the future.

Something like
{code}
public interface IgniteWriteAheadLogManager extends GridCacheSharedManager {
/**
 * @return {@code true} If we have to always write full pages.
 */
public boolean isAlwaysWriteFullPages();

/**
 * @return {@code true} if WAL will perform fair syncs on fsync call.
 */
public boolean isFullSync();

/**
 * Resumes logging after start. When WAL manager is started, it will skip 
logging any updates until this
 * method is called to avoid logging changes induced by the state restore 
procedure.
 */
public void resumeLogging(WALPointer lastWrittenPtr) throws 
IgniteCheckedException;

/**
 * Appends the given log entry to the write-ahead log.
 *
 * @param entry entry to log.
 * @return WALPointer that may be passed to {@link #fsync(WALPointer)} 
method to make sure the record is
 *  written to the log.
 * @throws IgniteCheckedException If failed to construct log entry.
 * @throws StorageException If IO error occurred while writing log entry.
 */
public WALPointer log(WALRecord entry) throws IgniteCheckedException, 
StorageException;

/**
 * Makes sure that all log entries written to the log up until the 
specified pointer are actually persisted to
 * the underlying storage.
 *
 * @param ptr Optional pointer to sync. If {@code null}, will sync up to 
the latest record.
 * @throws IgniteCheckedException If the sync operation failed.
 * @throws StorageException If IO error occurred while syncing the log.
 */
public void fsync(WALPointer ptr) throws IgniteCheckedException, 
StorageException;

/**
 * Invoke this method to iterate over the written log entries.
 *
 * @param start Optional WAL pointer from which to start iteration.
 * @return Records iterator.
 * @throws IgniteException If failed to start iteration.
 * @throws StorageException If IO error occurred while reading WAL entries.
 */
public WALIterator replay(WALPointer start) throws IgniteCheckedException, 
StorageException;

/**
 * Gives a hint to the WAL manager to clear entries logged before the given 
 * pointer. Some entries before the given pointer will be kept because there is 
 * a configurable WAL history size. Those entries may be used for partial 
 * partition rebalancing.
 *
 * @param ptr Pointer for which it is safe to clear the log.
 * @return Number of deleted WAL segments.
 */
public int truncate(WALPointer ptr);
}
{code}
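To illustrate the log/replay/truncate contract of the interface above, here is a deliberately toy in-memory WAL (simplified: pointers are plain indices, records are strings, there is no fsync or history size — none of this reflects the real implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class ToyWal {
    private final List<String> entries = new ArrayList<>();
    private int truncatedTo; // entries with index below this were truncated

    /** Appends an entry to the log and returns its pointer (a plain index here). */
    public synchronized int log(String entry) {
        entries.add(entry);
        return truncatedTo + entries.size() - 1;
    }

    /** Replays entries starting from the given pointer (inclusive). */
    public synchronized List<String> replay(int from) {
        if (from < truncatedTo)
            throw new IllegalArgumentException("Pointer " + from + " was truncated");
        return new ArrayList<>(entries.subList(from - truncatedTo, entries.size()));
    }

    /** Drops entries logged before the pointer; returns how many were deleted. */
    public synchronized int truncate(int ptr) {
        int n = Math.min(ptr - truncatedTo, entries.size());
        if (n <= 0)
            return 0;
        entries.subList(0, n).clear();
        truncatedTo += n;
        return n;
    }

    public static void main(String[] args) {
        ToyWal wal = new ToyWal();
        wal.log("put k1");
        int p1 = wal.log("put k2");
        wal.log("remove k1");
        System.out.println(wal.replay(p1));   // prints: [put k2, remove k1]
        System.out.println(wal.truncate(p1)); // prints: 1
    }
}
```

A crash-recovery procedure would call `replay` from the last checkpoint pointer and re-apply each record, then `truncate` once the recovered state is checkpointed, which is the intent behind `replay(WALPointer)` and `truncate(WALPointer)` in the proposed interface.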





[jira] [Created] (IGNITE-4336) Manual rebalance can't be requested twice

2016-11-30 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-4336:
-

 Summary: Manual rebalance can't be requested twice
 Key: IGNITE-4336
 URL: https://issues.apache.org/jira/browse/IGNITE-4336
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Dmitriy Setrakyan
 Fix For: 2.0


A manual rebalance requested with ignite.rebalance().get() returns immediately 
on the second attempt without actually requesting rebalancing.
How to reproduce:
Use branch ignite-gg-rebalance-test, revision 
c4f192e276ff4d0dad39611a60a39c21594c7320.
Run 3 Ignite nodes with the params:
1) 200 0 1073741824 c:/data/db
2) 200 0 1073741824 c:/data/db
3) 200 0 1073741824 c:/data/db 10 1024
Wait until data is loaded in 3-rd node.
Rebalance is requested on first try and immediately returns on second.





[jira] [Created] (IGNITE-4335) Implement cluster ACTIVE/INACTIVE state

2016-11-30 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-4335:


 Summary: Implement cluster ACTIVE/INACTIVE state
 Key: IGNITE-4335
 URL: https://issues.apache.org/jira/browse/IGNITE-4335
 Project: Ignite
  Issue Type: New Feature
  Components: general
Reporter: Alexey Goncharuk
 Fix For: 2.0


I think it might be beneficial in some cases to start the cluster in some form 
of inactive state. In this case we can skip starting many managers and 
processors (basically, starting only discovery + communication), which should 
speed up the cluster start process.

Once all nodes are started, an API call or a shell command can activate the 
cluster and start all managers in one pass. This should significantly reduce 
traffic, affinity calculations, etc.
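The two-phase start described here can be sketched as follows (an illustrative, Ignite-free sketch; component names are placeholders, and the real activation API is not defined by this ticket):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class LazyClusterStart {
    private final Set<String> started = new LinkedHashSet<>();
    private volatile boolean active;

    /** Inactive start: only the components needed to form the topology. */
    public LazyClusterStart() {
        started.addAll(Arrays.asList("discovery", "communication"));
    }

    /** Activation: start the remaining managers in one pass; safe to call twice. */
    public synchronized void activate() {
        if (active)
            return; // idempotent
        started.addAll(Arrays.asList("cache", "query", "affinity"));
        active = true;
    }

    public boolean isActive() {
        return active;
    }

    public Set<String> startedComponents() {
        return started;
    }

    public static void main(String[] args) {
        LazyClusterStart node = new LazyClusterStart();
        System.out.println(node.startedComponents()); // prints: [discovery, communication]
        node.activate();
        System.out.println(node.startedComponents()); // prints: [discovery, communication, cache, query, affinity]
    }
}
```

Starting all nodes in the inactive state and activating once avoids the repeated affinity recalculation and exchange traffic that a node-by-node start of fully initialized nodes would cause.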


