Hi Alexey,

We are planning to have a 4-node cluster; we will increase the number of nodes based on performance.
The key is a unique string (a part of the HBase record's primary key, which is unique). Each record has around 25-30 fields, but each field is small; records won't have much content. All 18M records belong to a single use case, so we plan to keep them in one cache so that pagination, filtering, and sorting are supported at the cache level itself. The initial load will just write to the cache, and changes (or new objects) will be added/updated in the existing cache via a Kafka stream.

Thanks.

On 11 October 2016 at 19:03, Alexey Kuznetsov <[email protected]> wrote:

> Hi, Anil.
>
> It depends on your use case.
> How many nodes will be in your cluster?
> Will all 18M records be in one cache or in many caches?
> How big is a single record? What will be the key?
> Do you need only to load, or do you also need to write changed / new
> objects in the cache back to HBase?
>
> On Tue, Oct 11, 2016 at 8:11 PM, Anil <[email protected]> wrote:
>
>> Hi,
>>
>> We have around 18M records in HBase which need to be loaded into the
>> Ignite cluster.
>>
>> I was looking at
>>
>> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>> https://github.com/apache/ignite/tree/master/examples
>>
>> Is there an approach where each Ignite node loads the data of one
>> HBase region?
>>
>> Do you have any recommendations?
>>
>> Thanks.
>
> --
> Alexey Kuznetsov
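On the region-per-node question above: one common pattern (a sketch, not an official Ignite API) is to assign HBase regions to cluster nodes deterministically, so each node scans only its own regions before streaming the rows into the cache. The class and region/node names below are illustrative assumptions; only plain Java is used:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RegionAssignment {
    // Round-robin assignment of HBase region identifiers to cluster node ids,
    // so each node knows which regions it alone is responsible for loading.
    static Map<String, List<String>> assign(List<String> regions, List<String> nodes) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String n : nodes)
            out.put(n, new ArrayList<>());
        for (int i = 0; i < regions.size(); i++)
            out.get(nodes.get(i % nodes.size())).add(regions.get(i));
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical region and node names for illustration only.
        List<String> regions = Arrays.asList("r0", "r1", "r2", "r3", "r4");
        List<String> nodes = Arrays.asList("node1", "node2", "node3", "node4");
        // Prints: {node1=[r0, r4], node2=[r1], node3=[r2], node4=[r3]}
        System.out.println(assign(regions, nodes));
    }
}
```

Each node would then open its own streamer via `ignite.dataStreamer("cacheName")` and call `addData(key, record)` for the rows of its assigned regions; `IgniteDataStreamer` batches entries and routes them to the nodes that own the corresponding partitions, so the scan can run in parallel across the cluster.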
