Good afternoon,
Following Mandhir's suggestion, please find below our first set of questions 
about Ignite (and its H2 integration).
We have an in-house database with a number of domain-specific optimizations. So 
far this product is single-process and has no resilience features. We are 
interested in capitalizing on your experience scaling H2 out, and in putting 
our existing solution on Ignite. We don't think it will be possible to adapt H2 
to our needs, because the data is stored differently (row-based vs. 
column-based) and many of our optimizations rely on the format of the data.


*         Given that you have already integrated H2 inside of Ignite, is there 
a framework within the SQL Grid which we could reuse to replace H2 with a Java 
database that has very different internals?

*         If we can reuse the framework, can we integrate our own DB on top of 
Ignite, or would we need to fork Ignite?

*         How does H2 store its data in Ignite, especially in the case of 
partitioning?

*         Our DB relies very much on each column being stored as a single 
object with specific encodings, such as dictionary encoding (see 
https://github.com/Parquet/parquet-format/blob/master/Encodings.md#dictionary-encoding-plain_dictionary--2).
 Given that these are very large objects, will this play well with replication 
and serialization? Is there an issue if we store a huge entry in the map, where 
the value is several gigabytes in size? What is the recommended maximum size?
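To make the dictionary-encoding point concrete, here is a toy sketch of the 
scheme we mean (the class name and API are illustrative only, not our actual 
code; our real encoder follows the Parquet specification linked above): each 
distinct column value is stored once in a dictionary, and the column body 
becomes an array of small integer ids. The whole structure is then one large 
object per column.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy illustration of a dictionary-encoded column (hypothetical class,
 * for discussion only). Distinct values live once in the dictionary;
 * the column itself is a list of integer codes pointing into it.
 */
public class DictionaryEncodedColumn {
    private final List<String> dictionary = new ArrayList<>();
    private final Map<String, Integer> ids = new HashMap<>();
    private final List<Integer> codes = new ArrayList<>();

    /** Append a value, assigning a new dictionary id on first sight. */
    public void append(String value) {
        Integer id = ids.get(value);
        if (id == null) {
            id = dictionary.size();
            dictionary.add(value);
            ids.put(value, id);
        }
        codes.add(id);
    }

    /** Decode the value stored at a given row. */
    public String get(int row) {
        return dictionary.get(codes.get(row));
    }

    public int dictionarySize() { return dictionary.size(); }

    public int rowCount() { return codes.size(); }
}
```

The point is that such a column is one large blob whose internal layout 
matters, so it cannot easily be split into independent row-sized cache entries.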

*         Is H2 used only in in-memory mode?

*         Is it possible to create a contextual object on a node, for example 
in the indexing layer, and have the client point directly to that node?
That is, we need the client to talk directly to a node once that node has been 
allocated to it.

*         Is it possible to customize the load balancing across the nodes?

*         Is it possible to deploy some of our own code on the nodes, and have 
the nodes be aware of each other inside that code?

*         Which kind of off-heap storage do you use? Memory-mapped files?

*         How do you evaluate the performance degradation of join queries over 
partitioned data? And by how much do you expect to be able to improve it?

*         We will need to cache a huge volume of data on every node. How 
acceptable do you think this is, relative to what you already cache?
Do you think this could cause an issue?

Thank you very much in advance for your assistance on this,
Best regards,
Fady

