Thank you for the advice. I also found the limitations section I had not seen 
earlier.

I have come into six 6U Dell server machines. I will use one to create a 
system that delegates commands to the others: one master unit coordinating 
five slaves. Combined, I have at least 20 TB of storage space across various 
RAID arrays. Each server will carry only enough storage for its own workload; 
the rest of my drives will be combined into a RAID system used as network 
storage for all of the machines. That system will hold MASSIVE databases of 
permutations, used to build an encryption system that truly covers every 
possible outcome of every cipher I know. This is really an experiment in 
learning servers, but also in using the means I have to properly accomplish 
what I want to accomplish.

For instance, a 16-byte rail/column cipher produces only 2 of the 
20,922,789,888,000 (16!) possible orderings. By generalizing the rail/column 
structure, I can produce all 20,922,789,888,000 different ciphertexts from 
any 16 characters. And that is only one cipher; in the end I will use more 
than 20, each built up by permutations to maximize its functionality. What I 
wanted to know is whether this is a feasible solution to my need.

Now, that said, the end product will not need to contain the full databases. 
I will sell one encryption solution, and because the cipher output is based 
on databases of permutations whose record index I can change, I can sell the 
same software multiple times, and no two sold compiled units will produce the 
same ciphertext for the same key. In the end I only need to include a VERY 
small subset of the total in the compiled app, but first I need all of it. 
Then, as I sell units, I can pick, say, 10-100k records to put into an 
EMBEDDED database, which will let that unit generate more keys than the 
company could ever possibly run through. All I have to do is document what I 
have used before and never reuse exactly the same records. Then I can sell 
this over and over, with one set of code implemented over and over.

So, my real question: this is evidently not realistic for my main need, and I 
will look further. But from what I have read, this solution has worked for a 
project that held 2 billion records embedded, and that is about exactly what 
I want to give a sold unit. So, is this a good solution for an embedded 
database containing 2 billion records? I asked about the top end of my needs 
first to get feedback on my large problem; now I am asking about my small 
problem.

Thank you for the response and insight, and for any further advice.
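To illustrate the permutation claim above: a fixed rail/column key realizes 
only a couple of the possible orderings of a 16-character block, while an 
arbitrary permutation key can realize any of the 16! = 20,922,789,888,000 
orderings. A minimal Python sketch (names and sample block are my own, not 
the poster's code):

```python
import math

def apply_permutation(block: str, perm: tuple) -> str:
    """Reorder the characters of a block according to one permutation."""
    return "".join(block[i] for i in perm)

block = "ATTACKATDAWNXXYZ"  # an arbitrary 16-character block

# A classic 2-rail fence reads the even positions, then the odd ones --
# one fixed ordering out of all possible orderings of 16 positions.
rail2 = tuple(range(0, 16, 2)) + tuple(range(1, 16, 2))
print(apply_permutation(block, rail2))  # ATCADWXYTAKTANXZ

# Treating the key as an arbitrary permutation of the 16 positions
# gives 16! distinct orderings in total.
print(math.factorial(16))  # 20922789888000
```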
On Monday, January 23, 2017 at 2:51:25 PM UTC-6, Christian MICHON wrote:
>
> Let's say best case you would have enough disk space and you could achieve 
> 10k inserts per second continuously in embedded mode.
>
> Then please calculate how many years would be needed to initiate your 
> database...
>
> Christian 
>
>
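Christian's point above can be checked with quick arithmetic. Assuming a 
single cipher's 16! permutation records and the sustained 10k inserts per 
second he posits as the best case (figures for illustration only):

```python
records = 20922789888000           # 16! permutations, for one cipher alone
rate = 10_000                      # sustained inserts per second (best case)
seconds = records / rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years")        # about 66 years -- for one of 20+ ciphers
```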

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/d/optout.
