This has been resolved Val. Thanks
On 26 October 2016 at 14:58, vdpyatkov wrote:
Hi Anil,
I doubt that these fields can be serialized correctly:
private Scan scan;
private QueryPlan queryPlan;
You need to get rid of these fields in the serialized object.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Loading-Hbase-data-into-Ignite-tp8209p8502
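The advice above can be illustrated with plain JDK serialization: a field that does not serialize (like the Scan or QueryPlan mentioned) is usually marked transient and rebuilt after deserialization. A minimal sketch, where RegionTask and its StringBuilder field are hypothetical stand-ins for the real callable and its non-serializable resource:

```java
import java.io.*;

// Hypothetical stand-in for a task that holds a non-serializable
// resource (like Scan or QueryPlan in the thread above).
public class RegionTask implements Serializable {
    private final String regionName;            // plain data: serializes fine
    private transient StringBuilder scanState;  // stand-in for Scan: excluded from the stream

    public RegionTask(String regionName) {
        this.regionName = regionName;
        this.scanState = new StringBuilder("scan:" + regionName);
    }

    // Rebuild the transient resource after deserialization.
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.scanState = new StringBuilder("scan:" + regionName);
    }

    public String regionName() { return regionName; }
    public String scanState() { return scanState.toString(); }

    // Round-trip through JDK serialization to show the transient field is rebuilt.
    public static void main(String[] args) throws Exception {
        RegionTask task = new RegionTask("region-1");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(task);
        oos.flush();
        RegionTask copy = (RegionTask) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.regionName() + " " + copy.scanState());
    }
}
```

Ignite's binary marshaller differs from JDK serialization in details, but the transient-plus-rebuild pattern applies the same way.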
Thanks, Val.
Can you elaborate on "You need to implement Callable as easy as possible (in
additional you can try implement Externalizable) and to create connection
into IgniteCallable directly."
ignite.compute().call() accepts IgniteCallable instances only.
I tried the attached classes with no luck. Look
Hi Anil,
The implementation of IgniteCallable looks very doubtful.
When you invoke "ignite.compute().call(calls)", all the IgniteCallables will be
serialized and sent to particular nodes for execution.
I doubt that QueryPlan is serialized correctly.
You need to implement Callable as easy as
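The pattern being recommended here can be sketched with a plain java.util.concurrent.Callable standing in for IgniteCallable: the task carries only serializable data, and the heavyweight resource is created inside call() rather than held as a field. The class and names below are illustrative, not from the attached code:

```java
import java.util.concurrent.*;

// Sketch: the task holds only plain data (a region name); the connection-like
// resource is created inside call(), on the executing side, so it is never serialized.
public class SimpleRegionCallable implements Callable<String> {
    private final String regionName; // only plain data crosses the wire

    public SimpleRegionCallable(String regionName) {
        this.regionName = regionName;
    }

    @Override
    public String call() {
        // Create the connection/scan here instead of storing it in a field.
        StringBuilder connection = new StringBuilder("conn->" + regionName);
        return connection.toString();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> f = pool.submit(new SimpleRegionCallable("region-7"));
        System.out.println(f.get());
        pool.shutdown();
    }
}
```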
Hi Val,
I have attached the sample program. Please take a look and let me know if
you have any questions.
After spending some time, I noticed that the exception happens only
when a number of parallel callables are processed with broadcast.
Thanks,
Anil
On 15 October 2016 at 04:33, vkulichen
Hi Anil,
Yes, the exception doesn't tell much. It would be great if you provide a
test that reproduces the issue.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Loading-Hbase-data-into-Ignite-tp8209p8308.html
Sent from the Apache Ignite Users mailing list
Hi,
When I read HBase information using broadcast, I see the following
exception:
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.ref
Thank you.
On 12 October 2016 at 15:56, Taras Ledkov wrote:
Hi,
FailoverSpi is used to process job failures.
The AlwaysFailoverSpi implementation is used by default. It tries to
resubmit a failed job up to 'maximumFailoverAttempts' (default 5) times.
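For reference, the failover attempts can be tuned in the node's Spring XML configuration. A sketch of the relevant fragment; the value 10 here is illustrative, not a recommendation:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="failoverSpi">
    <bean class="org.apache.ignite.spi.failover.always.AlwaysFailoverSpi">
      <!-- Retry a failed job on other nodes up to 10 times (default is 5). -->
      <property name="maximumFailoverAttempts" value="10"/>
    </bean>
  </property>
</bean>
```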
On 12.10.2016 13:09, Anil wrote:
Hi,
Following is the approach to load HBase data into Ignite:
1. Create Cluster wide singleton distributed custom service
2. Get all region(s) information in the init() method of your custom service
3. Broadcast region(s) using ignite.compute().call() in execute() method of
your custom service
4.
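The broadcast step in the approach above can be sketched with plain Java, a local thread pool standing in for ignite.compute().call(collectionOfCallables); the region names and row counts below are made up:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of step 3: one lightweight callable per HBase region, submitted as a batch.
public class BroadcastRegions {
    public static void main(String[] args) throws Exception {
        List<String> regions = Arrays.asList("region-a", "region-b", "region-c");

        // Build one callable per region; each would scan its region and
        // stream rows into Ignite. Here we pretend each region holds 100 rows.
        List<Callable<Integer>> calls = new ArrayList<>();
        for (String region : regions) {
            calls.add(() -> 100); // stand-in for scanning 'region'
        }

        ExecutorService pool = Executors.newFixedThreadPool(regions.size());
        int total = 0;
        for (Future<Integer> f : pool.invokeAll(calls)) {
            total += f.get(); // collect per-region results
        }
        pool.shutdown();
        System.out.println("loaded rows: " + total);
    }
}
```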
Thank you, Vladislav and Andrey. I will look at the document and give it a try.
Thanks again.
On 11 October 2016 at 20:47, Andrey Gura wrote:
Hi Alexey,
We are planning to have a 4-node cluster. We will increase the number of
nodes based on performance.
The key is a unique string (part of the HBase record's primary key, which is
unique). Each record has around 25-30 fields, but each is small. Records
won't have much content.
All 18 M re
Hi,
HBase regions don't map to Ignite nodes due to architectural differences.
Each HBase region contains rows in some range of keys sorted
lexicographically, while the distribution of keys in Ignite depends on the
affinity function and the key hash code. Also, how would you remap regions to
nodes in case
of
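The mismatch described above shows up even in a toy example: keys that sit in one lexicographic range (i.e. one HBase region) scatter across partitions once hash-based placement is applied. Modulo hashing below is a simplified stand-in for Ignite's affinity function:

```java
// Adjacent keys from one lexicographic range land in different partitions
// under hash-based placement, so an HBase region cannot be mapped
// one-to-one onto an Ignite node.
public class RangeVsHash {
    public static void main(String[] args) {
        String[] keys = {"row-a", "row-b", "row-c", "row-d"};
        int partitions = 4;
        for (String key : keys) {
            // Math.floorMod keeps the partition non-negative for negative hash codes.
            int part = Math.floorMod(key.hashCode(), partitions);
            System.out.println(key + " -> partition " + part);
        }
    }
}
```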
Hi,
The easiest way to do this is using a DataStreamer[1] from all server nodes,
but each with a specific part of the data.
You can do it using Ignite compute[2] (matching by node id, for example, or by
a node parameter or any other attribute), with the part number as a parameter
to the SQL query for HDB.
[1]: http://apacheignite.gridgain.or
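The "matching by node id" idea above can be sketched as follows: each of N server nodes takes the data parts whose number equals its index modulo N, and uses the part number as the query parameter. The node count and part count here are made up for illustration:

```java
import java.util.*;

// Sketch of splitting query parts across server nodes: node i of n handles
// part p exactly when p % n == i, so every part is loaded exactly once.
public class PartsPerNode {
    static List<Integer> partsFor(int nodeIndex, int nodeCount, int totalParts) {
        List<Integer> parts = new ArrayList<>();
        for (int p = 0; p < totalParts; p++) {
            if (p % nodeCount == nodeIndex) {
                parts.add(p); // this node would run: SELECT ... WHERE part = p
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        int nodeCount = 4, totalParts = 10;
        for (int node = 0; node < nodeCount; node++) {
            System.out.println("node " + node + " -> "
                    + partsFor(node, nodeCount, totalParts));
        }
    }
}
```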
Hi, Anil.
It depends on your use case.
How many nodes will be in your cluster?
Will all 18M records be in one cache or in many caches?
How big is a single record? What will be the key?
Do you only need to load, or do you also need to write changed/new objects
from the cache to HBase?
On Tue, Oct 11, 2016 at 8:11 PM, Anil