Spark to Ignite Data load, Ignite node crashing

2018-08-08 Thread ApacheUser
Hello Ignite team,

I am writing data from a Spark DataFrame to Ignite, and frequently one node goes
down. I don't see any error in the log file; the trace is below. If I restart the
node, it doesn't rejoin the cluster unless I stop the Spark job that is writing
data to the Ignite cluster.

I have 4 nodes with 4 CPU/16GB RAM and 200GB disk space each, and persistence is
enabled. What could be the reason?
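
For reference, the write path is roughly as follows (a minimal sketch; the
source, table name, config path, and key field below are placeholders, not
the actual job):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("ignite-load").getOrCreate();
Dataset<Row> df = spark.read().json("people.json"); // placeholder source

df.write()
    .format("ignite")                               // ignite-spark data source
    .option("config", "/path/to/ignite-config.xml") // Ignite config used by the writer
    .option("table", "person")                      // target SQL table in Ignite
    .option("primaryKeyFields", "id")               // key columns used if the table is created
    .mode(SaveMode.Append)
    .save();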

[00:44:33]__  
[00:44:33]   /  _/ ___/ |/ /  _/_  __/ __/
[00:44:33]  _/ // (7 7// /  / / / _/
[00:44:33] /___/\___/_/|_/___/ /_/ /___/
[00:44:33]
[00:44:33] ver. 2.6.0#20180710-sha1:669feacc
[00:44:33] 2018 Copyright(C) Apache Software Foundation
[00:44:33]
[00:44:33] Ignite documentation: http://ignite.apache.org
[00:44:33]
[00:44:33] Quiet mode.
[00:44:33]   ^-- Logging to file
'/data/ignitedata/apache-ignite-fabric-2.6.0-bin/work/log/ignite-d90d68c6.0.log'
[00:44:33]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[00:44:33]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[00:44:33]
[00:44:33] OS: Linux 3.10.0-862.3.2.el7.x86_64 amd64
[00:44:33] VM information: Java(TM) SE Runtime Environment 1.8.0_171-b11
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.171-b11
[00:44:33] Configured plugins:
[00:44:33]   ^-- None
[00:44:33]
[00:44:33] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[00:44:33] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[00:44:33] Security status [authentication=off, tls/ssl=off]
[00:44:35] Nodes started on local machine require more than 20% of physical
RAM what can lead to significant slowdown due to swapping (please decrease
JVM heap size, data region size or checkpoint buffer size)
[required=13412MB, available=15885MB]
[00:44:35] Performance suggestions for grid  (fix if possible)
[00:44:35] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:44:35]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[00:44:35]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[00:44:35]   ^-- Speed up flushing of dirty pages by OS (alter
vm.dirty_expire_centisecs parameter by setting to 500)
[00:44:35]   ^-- Reduce pages swapping ratio (set vm.swappiness=10)
[00:44:35] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:44:35]
[00:44:35] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[00:44:35]
[00:44:35] Ignite node started OK (id=d90d68c6)
[00:44:35] >>> Ignite cluster is not active (limited functionality
available). Use control.(sh|bat) script or IgniteCluster interface to
activate.
[00:44:35] Topology snapshot [ver=4, servers=4, clients=0, CPUs=16,
offheap=40.0GB, heap=4.0GB]
[00:44:35]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=INACTIVE]
[00:44:35]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:44:35]   ^-- 1 nodes left for auto-activation
[a99529d8-e483-44b3-96eb-a5a773e380e3]
[00:44:35] Data Regions Configured:
[00:44:35]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:20] Topology snapshot [ver=5, servers=4, clients=1, CPUs=16,
offheap=50.0GB, heap=8.4GB]
[00:48:20]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:20]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:20] Data Regions Configured:
[00:48:20]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:37] Topology snapshot [ver=6, servers=4, clients=2, CPUs=16,
offheap=60.0GB, heap=12.0GB]
[00:48:37]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:37]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:37] Data Regions Configured:
[00:48:37]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:37] Topology snapshot [ver=7, servers=4, clients=3, CPUs=16,
offheap=70.0GB, heap=16.0GB]
[00:48:37]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:37]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:37] Data Regions Configured:
[00:48:37]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:38] Topology snapshot [ver=8, servers=4, clients=4, CPUs=16,
offheap=80.0GB, heap=19.0GB]
[00:48:38]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:38]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:38] Data Regions Configured:
[00:48:38]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:40] Topology snapshot [ver=9, servers=4, clients=5, CPUs=16,
offheap=90.0GB, heap=23.0GB]
[00:48:40]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,

Re: Distributed closure with buffer of binary data

2018-08-08 Thread F.D.
OK, but I think it's the same as WriteArray.

For the moment I have solved it in a different way, using encode/decode functions.

Thanks,

On Wed, Aug 8, 2018 at 11:06 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> How about WriteInt8Array()?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-08 11:19 GMT+03:00 F.D. :
>
>> Hello Igniters,
>>
>> My distributed closures work perfectly when the inputs are strings, but
>> when I try to pass a buffer of bytes I get an error.
>>
>> The buffer of bytes arrives to me in a std::string, but when I try to
>> use BinaryWriter::WriteString the string is truncated (OK, it was
>> predictable). The question: is there a method of BinaryWriter/BinaryReader
>> that handles a buffer of chars? (I found WriteArray, but I would have to
>> pass it char by char.)
>>
>> Thanks,
>> F.D.
>>
>
>


Re: two data region with two nodes

2018-08-08 Thread wangsan
Thanks. I will try the oldest-node selection method to ensure that only one
node processes the event message (see the sketch after the topology below).

As mentioned earlier, in my project I have several modules, e.g.:
module A: daemon node with caches nodeCache and machineCache; all caches are
persistent.
module B: search node with cache featureCache; all caches are persistent.
module C: processor node with cache hotCache; non-persistent.
module D: algo node (via C++) with cache algoCache; non-persistent.
more...

The caches of B, C, and D are private, using a cache node filter to guarantee
isolation. A's caches are public: all the modules (A, B, C, D, ...) can access
nodeCache and machineCache, but a node filter still ensures that persistent
data (such as WAL and archive) lives only on module A's nodes. nodeCache is
just like cluster.nodes; I just want to keep it persistent.

My topology is:
B1 B2  C1 D1
 \  |  | /    all access nodeCache, which is stored on nodes A1, A2
  A1 A2
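
A minimal sketch of the oldest-node check (processEventMessage and msg are
hypothetical names):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

Ignite ignite = Ignition.ignite();

// Resolve the current oldest server node; until the topology changes,
// exactly one node passes this check.
ClusterNode oldest = ignite.cluster().forServers().forOldest().node();

if (ignite.cluster().localNode().equals(oldest)) {
    processEventMessage(msg); // hypothetical handler, runs on one node only
}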

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-08 Thread michal23849
Hi All,

I tried mapping the fields in a number of different combinations based on the
above, but every time I fail with the SQLServerException: The conversion from
UNKNOWN to UNKNOWN is unsupported.

The mappings I used in the following structure included: 


  
  
  
  
  
  
 

I also checked other combinations of:
javaFieldTypes:
my.package.ListingCode
byte[]
java.lang.Byte[]
java.sql.Blob
Object

to JdbcTypes (java.sql.Types.):
LONGVARBINARY
VARBINARY

Based on the SQL Server JDBC driver documentation and Ignite's, all of this
should be supported. Could you please shed some more light on how the object is
passed to the driver and how best it should be mapped in XML?
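
For clarity, this is roughly the mapping I am attempting, sketched as Java
configuration instead of XML (the cache, table, and field names below are
placeholders):

import java.sql.Types;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;

JdbcType type = new JdbcType();
type.setCacheName("listingCache");       // placeholder cache name
type.setDatabaseTable("LISTING");        // placeholder table name
type.setKeyType(Integer.class);
type.setValueType("my.package.Listing"); // placeholder value class
type.setKeyFields(
    new JdbcTypeField(Types.INTEGER, "ID", Integer.class, "id"));
type.setValueFields(
    // the embedded object, serialized into a varbinary column
    new JdbcTypeField(Types.LONGVARBINARY, "LISTING_CODE",
        byte[].class, "listingCode"));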

Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SYSTEM_WORKER_TERMINATION (Item Not found)

2018-08-08 Thread dkarachentsev
Hi,

I'm not sure that nightly builds are updated regularly, but you should give it
a try. The biggest risk is that a nightly build could have some bugs that will
only be fixed in the release.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with POJO persistency in SQLServer

2018-08-08 Thread michal23849
Thank you for the help!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


continuous query remote filter issue

2018-08-08 Thread Som Som
hello.

It looks like the peerAssemblyLoadingMode flag doesn't work correctly in the
case of a CacheEntryEventFilter:



As an example:



1) This code works fine and I see "Hello, World!" on the server console. It
means that the HelloAction class was successfully transferred to the server.



class Program
{
    static void Main(string[] args)
    {
        using (var ignite = Ignition.StartFromApplicationConfiguration())
        {
            var remotes = ignite.GetCluster().ForRemotes();

            remotes.GetCompute().Broadcast(new HelloAction());
        }
    }

    class HelloAction : IComputeAction
    {
        public void Invoke()
        {
            Console.WriteLine("Hello, World!");
        }
    }
}

2) But this code, which sends the filter class to the remote server node,
generates an error, and I receive 4 entries of Employee instead of the 2
expected:

class Program
{
    public class Employee
    {
        public Employee(string name, long salary)
        {
            Name = name;
            Salary = salary;
        }

        [QuerySqlField]
        public string Name { get; set; }

        [QuerySqlField]
        public long Salary { get; set; }

        public override string ToString()
        {
            return string.Format("{0} [name={1}, salary={2}]",
                typeof(Employee).Name, Name, Salary);
        }
    }

    class EmployeeEventListener : ICacheEntryEventListener<int, Employee>
    {
        public void OnEvent(IEnumerable<ICacheEntryEvent<int, Employee>> evts)
        {
            foreach (var evt in evts)
                Console.WriteLine(evt.Value);
        }
    }

    class EmployeeEventFilter : ICacheEntryEventFilter<int, Employee>
    {
        public bool Evaluate(ICacheEntryEvent<int, Employee> evt)
        {
            return evt.Value.Salary > 5000;
        }
    }

    static void Main(string[] args)
    {
        using (var ignite = Ignition.StartFromApplicationConfiguration())
        {
            var employeeCache = ignite.GetOrCreateCache<int, Employee>(
                new CacheConfiguration("employee",
                    new QueryEntity(typeof(int), typeof(Employee))) { SqlSchema = "PUBLIC" });

            var query = new ContinuousQuery<int, Employee>(new EmployeeEventListener())
            {
                Filter = new EmployeeEventFilter()
            };

            var queryHandle = employeeCache.QueryContinuous(query);

            employeeCache.Put(1, new Employee("James Wilson", 1000));
            employeeCache.Put(2, new Employee("Daniel Adams", 2000));
            employeeCache.Put(3, new Employee("Cristian Moss", 7000));
            employeeCache.Put(4, new Employee("Allison Mathis", 8000));

            Console.WriteLine("Press any key...");
            Console.ReadKey();
        }
    }
}

Server node console output:

[16:26:33]__  
[16:26:33]   /  _/ ___/ |/ /  _/_  __/ __/
[16:26:33]  _/ // (7 7// /  / / / _/
[16:26:33] /___/\___/_/|_/___/ /_/ /___/
[16:26:33]
[16:26:33] ver. 2.7.0.20180721#19700101-sha1:DEV
[16:26:33] 2018 Copyright(C) Apache Software Foundation
[16:26:33]
[16:26:33] Ignite documentation: http://ignite.apache.org
[16:26:33]
[16:26:33] Quiet mode.
[16:26:33]   ^-- Logging to file
'C:\Ignite\apache-ignite-fabric-2.7.0.20180721-bin\work\log\ignite-b1061a07.0.log'
[16:26:33]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[16:26:33]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[16:26:33]
[16:26:33] OS: Windows Server 2016 10.0 amd64
[16:26:33] VM information: Java(TM) SE Runtime Environment 1.8.0_161-b12
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.161-b12
[16:26:33] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.
[16:26:33] Configured plugins:
[16:26:33]   ^-- None
[16:26:33]
[16:26:33] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[16:26:33] Message queue limit is set to 0 which may lead to potential
OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
to message queues growth on sender and receiver sides.
[16:26:33] Security status [authentication=off, tls/ssl=off]
[16:26:35] Performance suggestions for grid  (fix if possible)
[16:26:35] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[16:26:35]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[16:26:35]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]'
to JVM options)
[16:26:35]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[16:26:35]   ^-- Disable processing of calls to 

Re: Ignite with POJO persistency in SQLServer

2018-08-08 Thread aealexsandrov
Yes. If you don't want to store them as objects, then you can move the
embedded fields into the enclosing object:

class A {
    int a;
    B b;
}

class B {
    int b;
    int c;
}

You can change it as follows:

class A {
    int a;
    int b;
    int c;
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partitions distribution across nodes

2018-08-08 Thread akash shinde
Hi,

I introduced a delay of 5 seconds and it worked.

1) What is the exchange process, and how can I identify whether it has
finished?

2) I am doing partition-key-aware data loading, and I want to start the load
process from a server node only, not from a client node. I want to initiate
the load process only after all the configured nodes are up and running.
For that I am using a distributed count-down latch (see the sketch after this
list). Each node, when started, reduces the count on the
LifecycleEventType.AFTER_NODE_START event. When the latch count becomes zero,
the cache.loadCache() method is invoked, and this method is always executed
from the node which joins the cluster last.
Is there any better way to achieve this?

3) I also want to make sure that if any other node joins the cluster after
the data loading process is complete, the cache.loadCache() method is not
invoked and the data is made available to this node by the rebalancing
process.
I am thinking of using some variable which will indicate that cache loading is
complete. Does Ignite have any built-in feature to achieve this?
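
A minimal sketch of the latch-and-flag approach from 2) and 3) (the latch and
flag names are hypothetical, and the ignite, cache, and expectedNodes
variables are assumed to exist):

import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.IgniteCountDownLatch;

// Each node counts down once on AFTER_NODE_START; when the latch reaches
// zero, all nodes proceed together.
IgniteCountDownLatch latch =
    ignite.countDownLatch("loadCacheLatch", expectedNodes, false, true);
latch.countDown();
latch.await();

// A distributed flag so that late joiners skip loadCache() and rely on
// rebalancing instead (0 = not loaded yet, 1 = loaded).
IgniteAtomicLong loaded = ignite.atomicLong("cacheLoaded", 0, true);

if (loaded.compareAndSet(0, 1))
    cache.loadCache(null);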


The code shown below is used to get the Ignite partitions.

private List<Integer> getPrimaryPartitionIdsLocalToNode() {
  Affinity<Object> affinity = igniteSpringBean.affinity(cacheName);
  ClusterNode locNode = igniteSpringBean.cluster().localNode();
  List<Integer> primaryPartitionIds =
      Arrays.stream(affinity.primaryPartitions(locNode)).boxed()
          .collect(Collectors.toList());
  LOGGER.info("Primary Partition Ids for Node {} are {}",
      locNode.id(), primaryPartitionIds);
  LOGGER.info("Number of Primary Partition Ids for Node {} are {}",
      locNode.id(), primaryPartitionIds.size());
  return primaryPartitionIds;
}

private List<Integer> getBackupPartitionIdsLocalToNode() {
  Affinity<Object> affinity = igniteSpringBean.affinity(cacheName);
  ClusterNode locNode = igniteSpringBean.cluster().localNode();
  List<Integer> backupPartitionIds =
      Arrays.stream(affinity.backupPartitions(locNode)).boxed()
          .collect(Collectors.toList());
  LOGGER.info("Backup Partition Ids for Node {} are {}", locNode.id(),
      backupPartitionIds);
  LOGGER.info("Number of Backup Partition Ids for Node {} are {}",
      locNode.id(), backupPartitionIds.size());
  return backupPartitionIds;
}


Thanks,
Akash


On Wed, Aug 8, 2018 at 1:16 PM dkarachentsev 
wrote:

> Hi Akash,
>
> How do you measure partition distribution? Can you provide code for that
> test? I can assume that you get partitions before the exchange process is
> finished. Try a delay of 5 seconds after all nodes are started and check
> again.
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: what are the techniques to automatically detect changes in Multiple DBs and automatically push them into Ignite Cache

2018-08-08 Thread Deepa Kolwalkar
Thanks Juan, I will check on the same.




From:   "Juan Rodríguez Hortalá" 
To: user@ignite.apache.org
Date:   07-08-2018 08:44
Subject: Re: what are the techniques to automatically detect 
changes in Multiple DBs and automatically push them into Ignite Cache



That looks like something you could do with Kafka Connect
(https://www.confluent.io/product/connectors/), using the JDBC source and the
Ignite sink. Just my 2 cents.

On Mon, Aug 6, 2018, 07:19 Deepa Kolwalkar  
wrote:
Thanks Prasad for your suggestions. 

The legacy systems are on different platforms and some of them are products,
so there is no way of implementing any custom logic in such products to send
update messages. The legacy systems remain a black box to us, with only their
DBs accessible for viewing.
Regards 





From:"Prasad Bhalerao"  
To:user@ignite.apache.org 
Date:06-08-2018 16:51 
Subject: Re: what are the techniques to automatically detect 
changes in Multiple DBs and automatically push them into Ignite Cache 



Can this back-office legacy system send you a DB update message, or can you
make this back-office system send you a DB update message?

If yes, then you can have the ID/primary key, the DB operation, and the table
name in this DB update message.

In your application, you use this information to refresh your cache using the
read-through mechanism.

Thanks,
Prasad

On Mon, Aug 6, 2018, 3:02 PM Deepa Kolwalkar  
wrote: 
Thanks Denis. 

But as I mentioned in an earlier mail, the caches are meant to be read-only
(only to be used by microservices for fetching data).
The databases are updated by back-office legacy systems. Hence we cannot do a
write-through to the DBs via the CacheStore API.

If anyone has used the Gridgain GoldenGate Adapter, then we would be glad 
to hear about any challenges/short-comings if any. 

Regards 



From:Denis Mekhanikov  
To:user@ignite.apache.org 
Date:06-08-2018 13:18 
Subject: Re: what are the techniques to automatically detect 
changes in Multiple DBs and automatically push them into Ignite Cache 



There is no such feature in Ignite.
If you know how to subscribe to events in the external database, then you can
implement this logic yourself.
You just need to perform a put into the Ignite cache for every insert into the
external DB.

But the recommended way to do it is to perform the writing through Ignite.
A cache store with write-through enabled will take care of writing the data
into the external DB, as in the sketch below.
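
A minimal sketch of such a write-through configuration (the cache name and
the PersonJdbcStore class are hypothetical):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> ccfg =
    new CacheConfiguration<>("personCache");          // hypothetical cache
ccfg.setCacheStoreFactory(
    FactoryBuilder.factoryOf(PersonJdbcStore.class)); // hypothetical CacheStore impl
ccfg.setWriteThrough(true); // every cache put is also written to the external DB
ccfg.setReadThrough(true);  // cache misses are loaded from the external DB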

Denis 

Sun, 5 Aug 2018 at 17:32, Deepa Kolwalkar :
We have a requirement where changes to data from multiple DBs need to be
periodically and automatically pushed (not sure how) into various Ignite
caches.
Once the data is available in the Ignite caches, it will be persisted using
Ignite native persistence, so that in the event of a crash the data can be
loaded from native persistence.
The caches will be used in a read-only manner by clients (microservices).


What is the best technique for having changes to data from multiple DBs
automatically put into the Ignite caches?

While searching for a solution I came across this link:
http://apache-ignite-users.70518.x6.nabble.com/Any-references-Syncing-Ignite-and-Oracle-DB-with-Oracle-GoldenGate-updates-from-DB-to-ignite-td20715.html
 

which suggests the following:
==
Ignite does not provide such integration out of the box; however, there is
a commercial offering from GridGain for that:
https://docs.gridgain.com/docs/goldengate-replication
==

I was wondering whether we still need to use GoldenGate for such replication,
or whether newer versions of Ignite now support such asynchronous sync-ups
with underlying DB changes.

Thanks 



Re: Getting an exception when listing partitions of IgniteDataFrame

2018-08-08 Thread Ramakrishna Nalam
Hi Ray,

I could not find a solution to the problem.

I moved away from Ignite for now, so did not dig into it further.


Regards,
Rama.


On Wed, Aug 8, 2018 at 2:55 PM Ray  wrote:

> Hi Rama,
>
> Did you solve this problem?
> Please let me know your solution if you have solved this problem.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite with POJO persistency in SQLServer

2018-08-08 Thread michal23849
Andrei,

As I understand you, the only way to map embedded classes is by mapping them
as objects and storing them as BLOBs or other VARBINARY fields in the SQL
database?

No way to decompose them into separate fields in the tables?

Eg. ListingCode has:
private String code;
private String codeType;

Please confirm whether a BLOB is the only way, or whether there is any way to
get the embedded object's fields and store them in separate columns.

I understand the same goes for the array: either store it in a BLOB or
redesign the domain object model, right?

Thank you
Michal





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting an exception when listing partitions of IgniteDataFrame

2018-08-08 Thread Ray
Hi Rama,

Did you solve this problem?
Please let me know your solution if you have solved this problem.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Optimum persistent SQL storage and querying strategy

2018-08-08 Thread Pavel Kovalenko
Hello Jose,

Did you consider Mongo DB for your use case?

2018-08-08 10:13 GMT+03:00 joseheitor :

> Hi Ignite Team,
>
> Any tips and recommendations...?
>
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: MySQL cache load causes java.sql.SQLException: GC overhead limit exceeded

2018-08-08 Thread Ilya Kasnacheev
Hello!

We would be glad to hear more if you gather further details about the issue.

Regards,

-- 
Ilya Kasnacheev

2018-08-07 22:35 GMT+03:00 Orel Weinstock (ExposeBox) :

> I've used the web-console generated LoadCaches file. From what I
> understand, looking at the source code, this is not supposed to keep it
> on-heap at all (and I've supplied ample off-heap space).
>
> It "just worked" with a HUGE memory allocation, but I will optimize it
> later.
>
> On 7 August 2018 at 17:16, Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> This is likely caused by trying to keep all the table data in memory
>> during data load. Can you share your code so that we could take a look?
>>
>> Regards,
>>
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-08-06 18:00 GMT+03:00 Orel Weinstock (ExposeBox) > >:
>>
>>> Hi all,
>>>
>>> Changing the MAIN_CLASS env variable and tweaking the default heap size
>>> (2.7GB) and default data region size (8GB), I'm trying to load a small
>>> (<4GB) MySQL table into the cache and get a GC overhead limit error. Should
>>> I increase memory? Is there a configuration I'm missing?
>>>
>>> Thanks,
>>> --
>>>
>>> --
>>> *Orel Weinstock*
>>> Software Engineer
>>> Email:o...@exposebox.com 
>>> Website: www.exposebox.com
>>>
>>>
>>
>
>
> --
>
> --
> *Orel Weinstock*
> Software Engineer
> Email:o...@exposebox.com 
> Website: www.exposebox.com
>
>


Re: Distributed closure with buffer of binary data

2018-08-08 Thread Ilya Kasnacheev
Hello!

How about WriteInt8Array()?

Regards,

-- 
Ilya Kasnacheev

2018-08-08 11:19 GMT+03:00 F.D. :

> Hello Igniters,
>
> My distributed closures work perfectly when the inputs are strings, but
> when I try to pass a buffer of bytes I get an error.
>
> The buffer of bytes arrives to me in a std::string, but when I try to
> use BinaryWriter::WriteString the string is truncated (OK, it was
> predictable). The question: is there a method of BinaryWriter/BinaryReader
> that handles a buffer of chars? (I found WriteArray, but I would have to
> pass it char by char.)
>
> Thanks,
> F.D.
>


Distributed closure with buffer of binary data

2018-08-08 Thread F.D.
Hello Igniters,

My distributed closures work perfectly when the inputs are strings, but
when I try to pass a buffer of bytes I get an error.

The buffer of bytes arrives to me in a std::string, but when I try to
use BinaryWriter::WriteString the string is truncated (OK, it was
predictable). The question: is there a method of BinaryWriter/BinaryReader
that handles a buffer of chars? (I found WriteArray, but I would have to pass
it char by char.)

Thanks,
F.D.


Re: Partitions distribution across nodes

2018-08-08 Thread dkarachentsev
Hi Akash,

How do you measure partition distribution? Can you provide code for that
test? I can assume that you get partitions before the exchange process is
finished. Try a delay of 5 seconds after all nodes are started and check
again.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Optimum persistent SQL storage and querying strategy

2018-08-08 Thread joseheitor
Hi Ignite Team,

Any tips and recommendations...?

Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Question

2018-08-08 Thread dkarachentsev
Hi,

It is defined by the AffinityFunction [1]. By default there are 1024
partitions; affinity automatically calculates the nodes that will keep the
required partitions and minimizes rebalancing when the topology changes
(nodes join or leave). See the sketch below the link.

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html
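
A minimal sketch of setting the affinity function explicitly (the cache name
is hypothetical; 1024 partitions is already the default):

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// false = do not exclude same-host neighbors when assigning backups
ccfg.setAffinity(new RendezvousAffinityFunction(false, 1024));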

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/