Re: Task failover in ignite

2018-10-01 Thread vkulichenko
Prasad,


prasadbhalerao1983 wrote
> Are you saying that when a primary node dies the former backup node
> becomes
> new primary for ALL backup partitions present on it and only primary
> partitions are moved in rebalancing process?

Not for all partitions, only for those whose primary copy existed on the
recently failed node. For example, you have nodes A, B and C. Partition #1
has its primary copy on A and a backup on B. Partition #2 has its primary
copy on C and a backup on B. If A dies, B becomes the primary node for
partition #1, and rebalancing then creates a new backup for it on C.
However, nothing changes for partition #2, because A didn't own any of its
copies.

prasadbhalerao1983 wrote
> Partition to node mapping is defined by affinity function which uses
> Rendezvous Hashing. How does it make sure that backup partitions on a node
> become primary partitions on the same node?

That's how the function is implemented. If you're interested in more
details, I would suggest going through the code of
RendezvousAffinityFunction.
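The idea is easy to see in isolation. Below is a simplified, hypothetical sketch of rendezvous (highest-random-weight) hashing in plain Java (NOT Ignite's actual implementation), showing why the second-ranked node, i.e. the backup, automatically becomes first-ranked, i.e. primary, when the top node leaves:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Simplified, hypothetical sketch of rendezvous (highest-random-weight)
// hashing. Every node is ranked per partition by a hash of the
// (node, partition) pair; the top-ranked node holds the primary copy,
// the next one the backup. When the top node leaves, the former second
// node is ranked first automatically, with no data movement.
class Rendezvous {
    // Rank nodes for a partition, highest hash first.
    static List<String> rank(List<String> nodes, int partition) {
        List<String> sorted = new ArrayList<>(nodes);
        sorted.sort(Comparator.comparingLong((String n) -> -mix(n.hashCode(), partition)));
        return sorted; // index 0 = primary, index 1 = backup, ...
    }

    // Cheap deterministic mixer of node hash and partition id.
    static long mix(int nodeHash, int partition) {
        long h = nodeHash * 31L + partition;
        h ^= (h << 13);
        h ^= (h >>> 7);
        h ^= (h << 17);
        return h;
    }
}
```

Removing the top-ranked node promotes the former second-ranked node to first place, so the backup copy becomes primary without moving any data; only a new backup has to be created by rebalancing.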

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Task failover in ignite

2018-10-01 Thread Prasad Bhalerao
Hi Val,

Thank you for the explanation.

Are you saying that when a primary node dies, the former backup node becomes
the new primary for ALL backup partitions present on it, and only primary
partitions are moved in the rebalancing process?

Partition-to-node mapping is defined by the affinity function, which uses
Rendezvous Hashing. How does it make sure that backup partitions on a node
become primary partitions on the same node?



Regards,
Prasad


On Tue, Oct 2, 2018, 3:16 AM vkulichenko wrote:

> Prasad,
>
> When a primary node for a partition dies, former backup node for this
> partition becomes new primary. Therefore there is no need to wait for
> rebalancing in this case, data is already there. By default job will be
> automatically remapped to that node, but with 'withNoFailover()' you'll
> have
> to retry manually.
>
> In addition, affinityRun/Call acquires a partition lock for a duration of
> computation to make sure data is not moved until it's completed. I.e. if
> there is a new node that becomes a new owner for this partition, data for
> this partition will not be evicted from previous primary until all
> collocated computations are done.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Task failover in ignite

2018-10-01 Thread vkulichenko
Prasad,

When the primary node for a partition dies, the former backup node for this
partition becomes the new primary. Therefore, there is no need to wait for
rebalancing in this case; the data is already there. By default the job will
be automatically remapped to that node, but with 'withNoFailover()' you'll
have to retry manually.

In addition, affinityRun/Call acquires a partition lock for the duration of
the computation to make sure data is not moved until it's completed. I.e. if
a new node becomes the new owner of this partition, the data for this
partition will not be evicted from the previous primary until all collocated
computations are done.
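For illustration, a manual retry with 'withNoFailover()' might look like the following sketch (an untested fragment assuming the standard IgniteCompute and affinityRun APIs; the cache name and the process() helper are hypothetical):

```java
// Hedged sketch: manual retry of an affinity-collocated computation when
// automatic failover is disabled via withNoFailover().
IgniteCompute compute = ignite.compute().withNoFailover();
int key = 42;
boolean done = false;
while (!done) {
    try {
        // Runs on the node holding the primary copy of 'key'; the partition
        // is locked for the duration of the closure.
        compute.affinityRun("myCache", key, () -> process(key));
        done = true;
    } catch (IgniteException e) {
        // With withNoFailover() the job is NOT remapped automatically on
        // topology change; retrying maps it to the new primary node.
    }
}
```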

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Clarifying aspects of write ahead logs

2018-10-01 Thread Dmitriy Pavlov
Hi Raymond,

The WAL archive can be used for recovery during node restart. That can happen
if the WAL data written during one checkpoint is longer than
segmentSize*workSegments (640 MB by default).

The downside of not having an archiver is the following: space allocation
for a file may require significant time on some file systems. If we archive
a WAL segment (asynchronously), we can later reuse that segment file in the
work directory for new segment records. The file is then preallocated, so
there is no performance penalty for growing it step by step.
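For reference, disabling the archive as discussed amounts to pointing both paths at one directory. A hedged Java configuration sketch (the directory names are examples only):

```java
// Hedged sketch: same walPath and walArchivePath effectively disables
// archiving, as the documentation suggests for high-ingest setups.
IgniteConfiguration cfg = new IgniteConfiguration();
DataStorageConfiguration storage = new DataStorageConfiguration();
storage.setWalPath("/data/ignite/wal");
storage.setWalArchivePath("/data/ignite/wal"); // same dir => no archive copy
cfg.setDataStorageConfiguration(storage);
```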

Sincerely,
Dmitriy Pavlov

You can find scheme of WAL rotation here:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-WALstructure


Mon, Oct 1, 2018 at 22:21, Raymond Wilson wrote:

> Hi,
>
>
>
> I’ve been reading through this page (
> https://apacheignite.readme.io/docs/write-ahead-log) on WAL files in
> Ignite.
>
>
>
> I’d like to understand what the purpose of the WAL archive is. I
> understand from the documentation that the WAL archive contains data that
> has been checkpointed, and so safely committed to the cache partition files.
>
>
>
> It seems to be optional in that it is possible to not have one by setting
> the WAL active and archive paths to be the same location. The documentation
> recommends doing this if ingest volumes mean time spent copying WAL files
> to the archive can cause node freezes.
>
>
>
> Is the WAL archive used in recovery operations when an Ignite node
> restarts? Is there a downside to not having WAL segments archived?
>
>
>
> Thanks,
>
> Raymond.
>
>
>


Clarifying aspects of write ahead logs

2018-10-01 Thread Raymond Wilson
Hi,



I’ve been reading through this page (
https://apacheignite.readme.io/docs/write-ahead-log) on WAL files in Ignite.



I’d like to understand what the purpose of the WAL archive is. I understand
from the documentation that the WAL archive contains data that has been
checkpointed, and so safely committed to the cache partition files.



It seems to be optional in that it is possible to not have one by setting
the WAL active and archive paths to be the same location. The documentation
recommends doing this if ingest volumes mean time spent copying WAL files
to the archive can cause node freezes.



Is the WAL archive used in recovery operations when an Ignite node
restarts? Is there a downside to not having WAL segments archived?



Thanks,

Raymond.


Re: Setting performance expectations

2018-10-01 Thread Denis Magda
ehCache doesn't even serialize data; it reads objects directly from the Java
heap once you allocate them. It's designed for local-mode use cases, like a
HashMap.

Compare ehCache to Ignite in the distributed mode. Ignite was designed and
optimized to work at scale.

--
Denis

On Mon, Oct 1, 2018 at 7:00 AM Daryl Stultz wrote:

> It means ehCache is better than Ignite performance, at least in LOCAL mode
>  :)
>
>
> I fully expect ehCache, Guava Cache, etc to be faster than Ignite since
> Ignite has more to do to communicate with the grid. But the scale of the
> difference is the concern. Thus far my experiments do not show it's
> appreciably faster than reading from the database.
>
> --
> Daryl Stultz
> Principal Software Developer
> _
> OpenTempo, Inc
> http://www.opentempo.com
> mailto:daryl.stu...@opentempo.com
>


Re: ComplexObjects via REST API?

2018-10-01 Thread Denis Magda
Ilya, folks,

This needs to be documented. Do we have a ticket?

--
Denis

On Sun, Sep 30, 2018 at 11:07 PM Ilya Kasnacheev wrote:

> Hello!
>
> Yes, into JSON.
>
> Unfortunately I don't have any ready to use examples. This is the
> interface:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ConnectorMessageInterceptor.html
> You need to put it into ConnectorConfiguration, which you should put into
> IgniteConfiguration.
>
> I also suggest that this should be taken to userlist instead of devlist.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Oct 1, 2018 at 8:46, Pavel Petroshenko wrote:
>
>> Thanks, Ilya.
>>
>> > REST API will convert complex BinaryObjects into REST by default
>>
>> You mean "into JSON"?
>>
>> Is this documented anywhere? Any examples that you can share a link to?
>>
>> P.
>>
>> On Sun, Sep 30, 2018 at 10:06 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Yes, this is supported. REST API will convert complex BinaryObjects into
>>> REST by default. But to put such objects via REST you will need to define
>>> your own ConnectorMessageInterceptor and plug it in. You will need to
>>> define string to entity mapping in onReceive. You can leave onSend
>>> returning arg.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, Oct 1, 2018 at 3:51, Pavel Petroshenko wrote:
>>>
>>> > Igniters,
>>> >
>>> > Is there a way to put user defined objects (in BinaryObject or whatever
>>> > another format) via the REST APIs [1]? According to the docs it's not
>>> > supported, but I wonder if by any chance it's implemented somewhere
>>> and is
>>> > just not officially released yet...
>>> >
>>> > Thanks,
>>> > Pavel
>>> >
>>> > [1] https://apacheignite.readme.io/docs/rest-api
>>> >
>>>
>>


Re: ignite .net index group definition

2018-10-01 Thread Pavel Tupitsyn
You should not have to revert to Spring XML.
Just use the QueryIndex class in C# and pass field names there in the desired
order to create a group index:

new CacheConfiguration(...)
{
    Indexes = new[]
    {
        new QueryIndex("Age", "Size")
    }
}




On Mon, Oct 1, 2018 at 6:10 PM Ilya Kasnacheev wrote:

> Hello!
>
> You can also use CREATE TABLE/INDEX and then any API that you like to
> query data (i.e. SQL, Cache API, REST).
>
> How the cache was created doesn't affect its usage much.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Oct 1, 2018 at 11:07, wt wrote:
>
>> So are you saying that i have to revert to the spring xml to get this
>> done?
>>
>> I am using query entities when trying to do this but the order field
>> doesn't
>> seem to apply in the .net API.
>>
>>  foreach (var field in item.Columns)
>> {
>> properties.Add(new Property()
>> {
>> Name = field.ColumnName,
>> Type = ChangeType(field.DataType), --data type
>> Attributes = new List()
>> { MakeIndexConfig(field)} -- here is where i set
>> the
>> fields and indexes
>> });
>>
>> }
>>
>> I guess another option might be to use the rest service and issue create
>> index statements.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Catch ComputeJob failures on the offending Node.

2018-10-01 Thread Chris Berry
Thank you!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite .net index group definition

2018-10-01 Thread Ilya Kasnacheev
Hello!

You can also use CREATE TABLE/INDEX and then any API that you like to query
data (i.e. SQL, Cache API, REST).

How the cache was created doesn't affect its usage much.

Regards,
-- 
Ilya Kasnacheev


Mon, Oct 1, 2018 at 11:07, wt wrote:

> So are you saying that i have to revert to the spring xml to get this done?
>
> I am using query entities when trying to do this but the order field
> doesn't
> seem to apply in the .net API.
>
>  foreach (var field in item.Columns)
> {
> properties.Add(new Property()
> {
> Name = field.ColumnName,
> Type = ChangeType(field.DataType), --data type
> Attributes = new List()
> { MakeIndexConfig(field)} -- here is where i set
> the
> fields and indexes
> });
>
> }
>
> I guess another option might be to use the rest service and issue create
> index statements.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Query Slow

2018-10-01 Thread Skollur
I tried setting up the field with @QuerySqlField(index = true) and it is
still slow.

Example:
/** Value for customerId. */
@QuerySqlField(index = true)
private Long customerId;



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Setting performance expectations

2018-10-01 Thread Daryl Stultz
It means ehCache is better than Ignite performance, at least in LOCAL mode :)


I fully expect ehCache, Guava Cache, etc. to be faster than Ignite, since
Ignite has more to do to communicate with the grid. But the scale of the
difference is the concern. Thus far my experiments do not show it's
appreciably faster than reading from the database.


--

Daryl Stultz
Principal Software Developer
_
OpenTempo, Inc
http://www.opentempo.com
mailto:daryl.stu...@opentempo.com


Re: Catch ComputeJob failures on the offending Node.

2018-10-01 Thread Maxim.Pudov
You can listen to Ignite events. I believe the event type you are looking
for is EVT_JOB_TIMEOUT.
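A minimal subscription sketch (an untested fragment, assuming the standard IgniteEvents API and that EVT_JOB_TIMEOUT has been enabled via setIncludeEventTypes in the node configuration):

```java
// Hedged sketch: listen for job timeout events on the local node.
ignite.events().localListen(evt -> {
    System.out.println("Job timed out: " + evt);
    return true; // keep listening
}, EventType.EVT_JOB_TIMEOUT);
```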



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Join returns wrong records

2018-10-01 Thread Skollur
Is there setDistributedJoins at cache level?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: .NET ContinuousQuery lose cache entries

2018-10-01 Thread Alew

Hi!

Tried, but the issue still exists.

https://monosnap.com/file/UUbYX4RUyXPZyPxKx97hTwSUSvNtye


On 01/10/2018 13:06, Ilya Kasnacheev wrote:

Hello!

Setting BufferSize to 1 seems to fix your reproducer's problem:

new ContinuousQuery(new CacheListener(skippedItems, doubleDelete), true) { BufferSize = 1 }



but the recommendation is to avoid doing anything synchronous from 
Continuous Query's body. Better offload any non-trivial processing to 
other threads operating asynchronously.


Regards,
--
Ilya Kasnacheev


Sat, Sep 29, 2018 at 5:18, Alew wrote:


Hi, attached a reproducer.
Turn off logs to fix the issue. Slow log is not the only reason. More
nodes in a cluster lead to the same behaviour.
Who is responsible for the behavior? Is it .net, java, bad docs or me?

On 24/09/2018 20:03, Alew wrote:
> Hi!
>
> I need a way to consistently get all entries in a replicated
cache and
> then all updates for them while application is working.
>
> I use ContinuousQuery for it.
>
> var cursor = cache.QueryContinuous(new ContinuousQuery<byte[], byte[]>(new CacheListener(), true),
>     new ScanQuery<byte[], byte[]>()).GetInitialQueryCursor();
>
> But I have some issues with it.
>
> Sometimes cursor returns only part of entries in a cache and cache
> listener does not return them either.
>
> Sometimes cursor and cache listener return the same entry both.
>
> Issue somehow related to amount of work the nodes have to do and
> amount of time between start of the publisher node and
subscriber node.
>
> There are more problems if nodes start at the same time.
>
> Is there a reliable way to do it without controling order of node
> start and pauses between them?
>
>





Re: Issues running Ignite with Cassandra and spark.

2018-10-01 Thread Shrey Garg
I fixed the issue with the dataframe API and am getting all columns now.
However, I am not able to perform grouping + UDAF operations, as it tries to
perform these in Ignite. Setting OPTION_DISABLE_SPARK_SQL_OPTIMIZATION = true
is not helping.

How do we tell Ignite to just fetch the data and perform all other operations
in Spark?


Re: .NET ContinuousQuery lose cache entries

2018-10-01 Thread Ilya Kasnacheev
Hello!

Setting BufferSize to 1 seems to fix your reproducer's problem:

new ContinuousQuery(new CacheListener(skippedItems,
doubleDelete), true) { BufferSize = 1}

but the recommendation is to avoid doing anything synchronous from
Continuous Query's body. Better offload any non-trivial processing to other
threads operating asynchronously.
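The offloading pattern recommended above can be sketched in plain Java (no Ignite types; the listener callback only enqueues work and returns immediately):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Plain-Java sketch: the notification callback hands entries off to a
// worker pool, so the continuous-query thread is never blocked by slow
// per-entry processing.
class OffloadingListener {
    final ExecutorService pool = Executors.newFixedThreadPool(2);
    final BlockingQueue<String> processed = new LinkedBlockingQueue<>();

    // Stand-in for the continuous-query local-listener callback.
    void onUpdated(Iterable<String> entries) {
        for (String e : entries)
            pool.submit(() -> processed.add(handle(e))); // cheap hand-off
    }

    // Stand-in for non-trivial per-entry processing.
    String handle(String e) {
        return e.toUpperCase();
    }
}
```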

Regards,
-- 
Ilya Kasnacheev


Sat, Sep 29, 2018 at 5:18, Alew wrote:

> Hi, attached a reproducer.
> Turn off logs to fix the issue. Slow log is not the only reason. More
> nodes in a cluster lead to the same behaviour.
> Who is responsible for the behavior? Is it .net, java, bad docs or me?
>
> On 24/09/2018 20:03, Alew wrote:
> > Hi!
> >
> > I need a way to consistently get all entries in a replicated cache and
> > then all updates for them while application is working.
> >
> > I use ContinuousQuery for it.
> >
> > var cursor = cache.QueryContinuous(new ContinuousQuery<byte[], byte[]>(new CacheListener(), true),
> > new ScanQuery<byte[], byte[]>()).GetInitialQueryCursor();
> >
> > But I have some issues with it.
> >
> > Sometimes cursor returns only part of entries in a cache and cache
> > listener does not return them either.
> >
> > Sometimes cursor and cache listener return the same entry both.
> >
> > Issue somehow related to amount of work the nodes have to do and
> > amount of time between start of the publisher node and subscriber node.
> >
> > There are more problems if nodes start at the same time.
> >
> > Is there a reliable way to do it without controling order of node
> > start and pauses between them?
> >
> >
>
>


Re: Issues running Ignite with Cassandra and spark.

2018-10-01 Thread Shrey Garg
Hi,
Thanks for the answer.
Unfortunately, we cannot remove Cassandra as it is being used elsewhere as
well. We will have to write directly to Ignite and sync with Cassandra.

We had a few other issues while getting data from spark:

1) cacherdd.sql("select * from table") is giving me heap memory (GC)
issues. However, getting data using spark.read.format() works fine. Why is
this so?

2) In my persistence, I have IndexedTypes with key and value POJO classes.
The key class corresponds to the key in Cassandra with partition and
clustering keys defined. While querying with SQL (select * from
value_class), I get all the columns of the table. However, while querying
using spark.read.format(...).option(OPTION_TABLE, value_class).load(), I
only get the columns stored in the value class. How do I fetch all the
columns using the dataframe API?

Thanks,
Shrey



On Fri, 28 Sep 2018, 08:43 Alexey Kuznetsov wrote:

> Hi,  Shrey!
>
> Just as idea - Ignite now has persistence (see
> https://apacheignite.readme.io/docs/distributed-persistent-store),
>  may be you can completely replace  Cassandra with Ignite?
>
> In this case all data always be actual, no need to sync with external db.
>
> --
> Alexey Kuznetsov
>


Re: Fetch column names in sql query results

2018-10-01 Thread Ilya Kasnacheev
Hello!

I don't think it is possible with SqlFieldsQuery even in Java.

I'm afraid the best course of action for you is to just use the JDBC API.
I'm pretty sure that .Net JDBC is available somewhere.

Regards,
-- 
Ilya Kasnacheev


Mon, Oct 1, 2018 at 10:57, wt wrote:

> hi
>
> I am on 2.6 using the .NET API and there is no metadata object on the
> result set, even though this was said to have been added in 2.1. I could
> use the REST service but would prefer to use the API. Can anyone advise if
> this is possible with SqlFieldsQuery?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite with POJO persistency in SQLServer

2018-10-01 Thread slava.koptilin
Hello,

I am sorry for the late response.
Yes, I would try to use BINARY.

Thanks,
S.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is ID generator split brain compliant?

2018-10-01 Thread abatra
I believe it won't work either. I will try it soon and share the result. 

Could you please share some pointers on what I should look at for debugging
persistence? I have org.apache.ignite=DEBUG in logs and IGNITE_QUIET set to
false. There are a lot of logs, most of which I cannot make sense of w.r.t.
persistence.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite .net index group definition

2018-10-01 Thread wt
So are you saying that I have to revert to the Spring XML to get this done?

I am using query entities when trying to do this, but the order field
doesn't seem to apply in the .NET API.

foreach (var field in item.Columns)
{
    properties.Add(new Property()
    {
        Name = field.ColumnName,
        Type = ChangeType(field.DataType), // data type
        Attributes = new List()
            { MakeIndexConfig(field) } // here is where I set the fields and indexes
    });
}

I guess another option might be to use the rest service and issue create
index statements. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite .net programmatically access cluster config (not working)

2018-10-01 Thread wt
Thanks, I'll try the compute tasks route.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Fetch column names in sql query results

2018-10-01 Thread wt
Hi,

I am on 2.6 using the .NET API and there is no metadata object on the result
set, even though this was said to have been added in 2.1. I could use the
REST service but would prefer to use the API. Can anyone advise if this is
possible with SqlFieldsQuery?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Reading from the SQLServer persistence store with write-behind setting

2018-10-01 Thread michal23849
Hi Ilya,

I do have all the mappings configured for all my caches, and they are
written successfully to the SQLServer database using the writeBehind
setting.

My question is: will these mappings automatically be used when I want to use
the readThrough functionality, or do I have to specify additional mappings
or implement any other classes?

Can you give some examples of using loadCache, loadAll, and getAll?

Regards
Michal



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problems are enabling Ignite Persistence

2018-10-01 Thread Mikael

Yes it works fine with just one node, I do it all the time for testing.

Mikael


On 2018-10-01 at 09:06, Ilya Kasnacheev wrote:

Hello!

Due to https://apacheignite.readme.io/docs/baseline-topology your 
cluster should indeed auto-activate on a second run provided that all 
nodes which were in the original cluster are still available.
Note that it will only work if you keep persistence directory intact 
between runs.

I'm also not completely sure whether it will work with one node only.

Regards,
--
Ilya Kasnacheev


Fri, Sep 28, 2018 at 17:33, Lokesh Sharma wrote:


> Could you also set correct path for IGNITE_HOME or set work directory in
> Ignite configuration to exclude chances for your /tmp directory to be wiped.

Yes, tried but had no luck.

On Fri, Sep 28, 2018 at 7:59 PM Lokesh Sharma <lokeshh.sha...@gmail.com> wrote:

Hi,

Thanks for looking into this.

> First, I had to change h2 version dependency from 1.4.197 to 1.4.195 -
> maybe this is because I have later snapshot of Apache Ignite compiled.

The version is already 1.4.195.

> Second, I had to activate the cluster! You have to activate persistent
> cluster after you bring all nodes up. Just add ignite.active(true); on
> IgniteConfig.java:92

After I added "ignite.active(true)", it is working for me. Although, when I
remove this line on the second run of the app, the original problem
persists. Shouldn't the cluster be automatically activated on the second
run, as said in the documentation?



On Fri, Sep 28, 2018 at 6:55 PM ilya.kasnacheev <ilya.kasnach...@gmail.com> wrote:

Hello!

Seems to be working for me after two changes.

First, I had to change h2 version dependency from 1.4.197
to 1.4.195 - maybe
this is because I have later snapshot of Apache Ignite
compiled.

Second, I had to activate the cluster! You have to
activate persistent
cluster after you bring all nodes up. Just add
ignite.active(true); on
IgniteConfig.java:92

The error message about the activation requirement was clear, and I could
still exit the app with Ctrl-C. Maybe you have extra problems if you are on
some unstable commit of the master branch.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





Re: java.lang.IllegalArgumentException: Can not set final

2018-10-01 Thread Ilya Kasnacheev
Hello!

I think you should avoid having final fields in classes sent over to nodes
for execution. I also think that you should mark the _ignite field with
@IgniteInstanceResource (it might work anyway, but this is more of the
Ignite way).
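A hedged sketch of the suggested shape (ComputeJobAdapter and @IgniteInstanceResource are standard Ignite APIs; the job body is illustrative only):

```java
// Hedged sketch: have Ignite inject its instance into the job on the
// executing node, rather than capturing it in a (final) field.
class MyJob extends ComputeJobAdapter {
    @IgniteInstanceResource
    private transient Ignite ignite; // injected on the executing node

    @Override public Object execute() {
        return ignite.cluster().localNode().id();
    }
}
```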

Regards,
-- 
Ilya Kasnacheev


Fri, Sep 28, 2018 at 21:09, smurphy wrote:

> Hi Ilya,
>
> I don't think the errors are caused by the binary objects, but by having
> custom objects in the EntryProcessor. Check out the attached
> FragmentAssignToScannerEntryProcessor.java. Instead of injecting the
> BinaryFragmentConverter into FragmentAssignToScannerEntryProcessor, I moved
> all the binary object code from BinaryFragmentConverter directly into
> FragmentAssignToScannerEntryProcessor.
> This works fine.
>
> Then I created the static FragmentAssignToScannerEntryProcessor.Tester
> class, which has no binary objects, and added it as a member variable of
> FragmentAssignToScannerEntryProcessor and that resulted in the same
> IllegalArgumentException as before: "failed to read field [name=tester]"
>
> FragmentAssignToScannerEntryProcessor.java
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1317/FragmentAssignToScannerEntryProcessor.java>
>
>
> stackTrace.txt
> 
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous queries with aggregations

2018-10-01 Thread Ilya Kasnacheev
Hello!

I am afraid you will have to implement dynamic aggregation yourself, since
Continuous Queries do not support any fashion of expressions; they just
return changed rows.

E.g. you can keep track of current min and max values for desired fields
and update these values if you have a new record (or no longer have an
existing one).
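A minimal plain-Java sketch of such incremental tracking (hypothetical; it assumes updates arrive as (name, price) row changes from the continuous query, and that every remove matches a previous add):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: incrementally maintain max(price) per name from
// row-level continuous-query notifications, instead of re-running the
// aggregate query on every change.
class MaxTracker {
    // name -> multiset of current prices (price -> count)
    private final Map<String, TreeMap<Double, Integer>> prices = new HashMap<>();

    void add(String name, double price) {
        prices.computeIfAbsent(name, k -> new TreeMap<>())
              .merge(price, 1, Integer::sum);
    }

    // Assumes 'price' was previously added for 'name'.
    void remove(String name, double price) {
        TreeMap<Double, Integer> m = prices.get(name);
        if (m != null)
            m.merge(price, -1, (a, b) -> a + b == 0 ? null : a + b);
    }

    // Current max(price) for a name, or null if no rows.
    Double max(String name) {
        TreeMap<Double, Integer> m = prices.get(name);
        return (m == null || m.isEmpty()) ? null : m.lastKey();
    }
}
```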

Regards,
-- 
Ilya Kasnacheev


Thu, Sep 27, 2018 at 21:47, Francesco di Siena wrote:

> We have an Ignite system with a table of about 100 columns and 300,000
> rows. Of this data, around 10 columns are changing frequently, i.e. up to
> several times per second on most of the 300K rows. We are able to query
> this data (e.g. aggregation and grouping) at a reasonable speed (<1s).
>
> We can also use a "continuous query" to listen to all changes to the raw
> data, however this only notifies the subscriber of changes to individual
> table rows, not the aggregated results.
> Is there any mechanism to allow a continuous aggregated query?
>
> e.g. Say my query was:
>  select max(price), min(size), name from Products where price>23 group
> by name
> e.g. I'd like the client to receive updates at the aggregated level
> whenever
> max(price) changes due to an underlying row causing this to change.
>
> One work-around is to notice when a changed price was the current max, and
> if so, re-query the aggregated data, but this is not efficient.
> Is there any such a mechanism, or any suggested work-around?
> thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite Join returns wrong records

2018-10-01 Thread Ilya Kasnacheev
Hello!

Of course, if Account_Type is reasonably well distributed you could just
make it the affinity key.

Regards,
-- 
Ilya Kasnacheev


Sat, Sep 29, 2018 at 3:48, Andrey Mashenkov wrote:

> Hi,
>
> Try to use qry.setDistributedJoins(true). This should always return
> correct result.
>
> However, it has poor performance due to intensive data exchange between
> nodes.
>
> By default, Ignite joins only the data available locally on each node. Try
> to collocate your data to get better performance with non-distributed joins.
>
> Sat, Sep 29, 2018, 3:39 Skollur wrote:
>
>> I am using Apache Ignite 2.6 version. I have two tables as below i.e
>> SUMMARY
>> and SEQUENCE
>>
>> SUMMARY-> DW_Id bigint (Primary key) , Sumamry_Number varchar,
>> Account_Type
>> varchar
>> SEQUENCE-> DW_Id bigint (Primary key) , Account_Type varchar
>>
>> Database and cache have the same number of records in both tables. The
>> database JOIN query returns 1500 records; however, the Ignite JOIN returns
>> only 4 records. The Ignite cache was built from the auto-generated Web
>> Console code. The query used is below. There is no key involved while
>> joining the two cache tables; this is a simple join based on a value
>> (i.e. account type, a string). How do I get the correct result for the
>> JOIN in Ignite?
>>
>> SELECT COUNT(*) FROM SUMMARY LIQ
>> INNER JOIN SEQUENCE CPS ON
>> LIQ.Account_Type = CPS.Account_Type
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: ComplexObjects via REST API?

2018-10-01 Thread Ilya Kasnacheev
Hello!

Yes, into JSON.

Unfortunately I don't have any ready to use examples. This is the interface:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ConnectorMessageInterceptor.html
You need to put it into ConnectorConfiguration, which you should put into
IgniteConfiguration.

I also suggest that this should be taken to userlist instead of devlist.
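Based on the linked javadoc, an interceptor sketch might look like this (hypothetical; MyEntity and its parsing are stand-ins, and the interceptor is plugged in via ConnectorConfiguration on IgniteConfiguration):

```java
// Hedged sketch of a ConnectorMessageInterceptor: onReceive maps incoming
// strings to cache entities, onSend can return its argument unchanged.
class JsonInterceptor implements ConnectorMessageInterceptor {
    @Override public Object onReceive(Object obj) {
        // Convert the string arriving over REST into a cache entity.
        return obj instanceof String ? MyEntity.fromJson((String) obj) : obj;
    }

    @Override public Object onSend(Object obj) {
        return obj; // leave outgoing objects as-is
    }
}
```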

Regards,
-- 
Ilya Kasnacheev


Mon, Oct 1, 2018 at 8:46, Pavel Petroshenko wrote:

> Thanks, Ilya.
>
> > REST API will convert complex BinaryObjects into REST by default
>
> You mean "into JSON"?
>
> Is this documented anywhere? Any examples that you can share a link to?
>
> P.
>
> On Sun, Sep 30, 2018 at 10:06 PM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Yes, this is supported. REST API will convert complex BinaryObjects into
>> REST by default. But to put such objects via REST you will need to define
>> your own ConnectorMessageInterceptor and plug it in. You will need to
>> define string to entity mapping in onReceive. You can leave onSend
>> returning arg.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Oct 1, 2018 at 3:51, Pavel Petroshenko wrote:
>>
>> > Igniters,
>> >
>> > Is there a way to put user defined objects (in BinaryObject or whatever
>> > another format) via the REST APIs [1]? According to the docs it's not
>> > supported, but I wonder if by any chance it's implemented somewhere and
>> is
>> > just not officially released yet...
>> >
>> > Thanks,
>> > Pavel
>> >
>> > [1] https://apacheignite.readme.io/docs/rest-api
>> >
>>
>