We are going a bit off topic, but I use array columns to hold arrays (!) of about 3000 float values each. I have never actually tested it, but I assume that holding this in conventional relational tables would hurt performance. Retrieval performance is vital for my application. Plus, memory starts to become an issue: do I really want to store all those extra key columns and index entries for every single float value?
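For concreteness, here is a minimal sketch of the two layouts I am weighing, in H2 SQL (table and column names are placeholders, not my real schema):

    -- layout A: what I do now; the whole array lives inline in one row
    CREATE TABLE sample_a (
        id   INT PRIMARY KEY,
        data ARRAY              -- ~3000 floats per row
    );

    -- layout B: fully normalized; one row per float value
    CREATE TABLE sample_b (
        id  INT,                -- which array
        idx INT,                -- position within the array
        val REAL,
        PRIMARY KEY (id, idx)
    );

Layout B pays for an extra pair of integers plus a primary-key index entry on every single float, which is the memory overhead I mean above; layout A keeps each array in one place but makes that row very wide.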
Maybe I am making the usual mistake of assuming where the bottlenecks are before doing any benchmarks (in my experience, my initial guess at the bottleneck is ALWAYS wrong).

Arrays in relational databases have been around since Ingres in the mid-80s: it had some very nice user-defined objects back then, and some ideas on how to extend the query language (I think you could define your own extensions). Postgres continues to lead in that area. Such a shame Ingres fell by the wayside. The grandfathers of relational databases, Michael Stonebraker (http://en.wikipedia.org/wiki/Michael_Stonebraker) among them, were behind these concepts; they wrote a manifesto for third-generation database systems. Aha, found it. A classic paper, worth reading by anybody interested in databases:
http://www.cl.cam.ac.uk/teaching/2003/DBaseThy/or-manifesto.pdf

On May 27, 12:43 pm, Rami Ojares <[email protected]> wrote:
> Indeed. There's a lot to wonder about in this world.
> I have always wondered why H2 supports the array data structure at all,
> since it does not fit nicely into the relational model.
>
> - Rami
>
> On 26.5.2012 23:44, essence wrote:
> > OK, thanks for clarifying: the problem is indeed disc access, and the
> > longer the row, the fewer rows fit on a page, I guess, so long arrays
> > will effectively slow down deletion because more page reads need to
> > be performed.
> >
> > I have only pursued this as a counterpart to all the advice about IN.
> > Getting rid of IN does not always help!
> >
> > Interesting that you keep the array data on the same page as the rest
> > of the row, I assume? I thought some databases put this stuff (and
> > blobs etc.) somewhere else, maybe for this very reason.
> >
> > On May 21, 6:36 pm, Thomas Mueller <[email protected]> wrote:
> > > Hi,
> > >
> > > > the bottleneck is that when deleting a row, the complete row is
> > > > read, and if that row contains a column with a large array, the
> > > > reading of the array is the bottleneck (even though it is about
> > > > to be deleted).
> > >
> > > Yes. However, I don't think converting the byte array to a row is
> > > the problem. I guess the problem is that the page is read (from
> > > disk). This is unavoidable unless you delete all rows within a
> > > page.
> > >
> > > What you could do is delete all rows of the table, using "truncate"
> > > or "drop table".
> > >
> > > Regards,
> > > Thomas
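Following up on Thomas's point above: when the whole table is being cleared, the bulk statements avoid reading the pages at all. A rough sketch, reusing the placeholder table from my earlier example:

    -- DELETE reads back every page so each row (including its large
    -- array) can be reconstructed before it is removed
    DELETE FROM sample_a;

    -- TRUNCATE discards the data pages without reading them, which is
    -- what Thomas suggests when all rows go at once
    TRUNCATE TABLE sample_a;

    -- or, if the table itself is disposable:
    DROP TABLE sample_a;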
