Shit, I wrote a response, but lost it.

Date's manifesto is a lot less entertaining, with fewer bon mots, than
Stonebraker's, so I am staying with Stonebraker.

I have always loved the phrase 'they simply need educating'. So true.

On May 28, 8:04 am, Rami Ojares <[email protected]> wrote:
> Definitely off-topic, but who's counting?
> I suggest The Third Manifesto by Chris Date and Hugh Darwen:
> http://www.thethirdmanifesto.com/
>
> The simple reason why arrays do not fit in the relational model is
> that in the relational model, values are stored as attributes (columns)
> of a relation (table) in tuples (rows).
>
> The attribute values are supposed to be atomic.
> They can be arbitrarily complex but that complexity should be opaque
> to the relational query engine.
>
> Type specific accessor methods can operate on the internals of the type.
>
> Note: This is not a deficiency. An array can be represented in a
> relational model
> as a relation with 2 attributes.
>
> CREATE TABLE ARRAY_1(position INT PRIMARY KEY, value VARCHAR /* or whatever element type */)
>
> The user needs to manage the position if he wants to keep indexing
> contiguous and starting from 0 or 1.
> He could also calculate the position at query time.
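The array-as-relation idea above (Rami's ARRAY_1 table) can be exercised end to end. A minimal sketch, using Python's built-in sqlite3 module purely for illustration — the thread is about H2, but the modelling is the same; table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE array_1 (position INTEGER PRIMARY KEY, value TEXT)")

# Store the array ['a', 'b', 'c'] one element per row, managing positions
# ourselves so indexing stays contiguous and starts at 0.
data = ["a", "b", "c"]
conn.executemany(
    "INSERT INTO array_1 (position, value) VALUES (?, ?)",
    list(enumerate(data)),
)

# Reassemble the array: ORDER BY position recovers the original ordering.
rows = conn.execute("SELECT value FROM array_1 ORDER BY position").fetchall()
restored = [value for (value,) in rows]
print(restored)  # ['a', 'b', 'c']
```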
>
> One issue that complicates the ARRAY type is that one needs to define
> the type of the array elements (a subtype?).
> Copying from Java generics: ARRAY<INT>(maxlength)
> Then one would need to define multiple accessor methods for arrays so
> that arrays could be used as first-class citizens in queries.
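The accessor methods mentioned above largely fall out of the relational encoding: array length and element-at-index become ordinary queries. A hedged sketch (again using stdlib sqlite3 as a stand-in for H2; the function and table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE arr (position INTEGER PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO arr VALUES (?, ?)", list(enumerate([10, 20, 30])))

def array_length(conn):
    # length = number of stored elements
    return conn.execute("SELECT COUNT(*) FROM arr").fetchone()[0]

def element_at(conn, i):
    # element access = lookup by position (the "index")
    row = conn.execute("SELECT value FROM arr WHERE position = ?", (i,)).fetchone()
    return row[0] if row else None

print(array_length(conn), element_at(conn, 1))  # 3 20
```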
>
> Gotta go, but tell me why you can't model your array data in a normal
> table using the conventional types?
> What is the feature you need from an array?
>
> - Rami
>
> On 28.5.2012 8:30, essence wrote:
>
> > We are going a bit off topic, but I use arrays to hold arrays (!) of
> > about 3000 float objects. I have never done the test, but I assume
> > that holding this in conventional relational tables would have a
> > negative impact on performance. Retrieval performance is vital for my
> > application. Plus memory starts to become an issue - do I want to
> > store all those other indices for each float value?
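The memory concern above can be made concrete: ~3000 floats packed into one opaque binary value cost exactly 8 bytes each, whereas one relational row per element also carries its position plus whatever per-row storage overhead the engine adds. A rough sketch with the stdlib struct module (illustrative arithmetic, not a benchmark of H2):

```python
import struct

n = 3000
values = [float(i) for i in range(n)]

# Packed as one opaque value: 8 bytes per IEEE-754 double, no per-row cost.
packed = struct.pack(f"{n}d", *values)
print(len(packed))  # 24000

# As a (position, value) table, every element would also carry its index
# and per-row overhead, so the relational encoding is strictly larger;
# only measurement shows whether that matters in practice.
unpacked = list(struct.unpack(f"{n}d", packed))
assert unpacked == values  # round-trip is lossless
```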
>
> > Maybe I am making the normal mistake of assuming bottlenecks before
> > doing benchmarks (in my experience, my initial guesstimate of
> > bottlenecks is ALWAYS incorrect).
>
> > Use of arrays in relational databases has been around since Ingres in
> > the mid-80s; they had some very nice user-defined objects way back
> > then, and some ideas on how to extend the query language (I think you
> > could define your own extensions). Postgres continues to lead in this
> > area. Such a shame Ingres fell by the wayside. The grandfathers of
> > relational databases (http://en.wikipedia.org/wiki/Michael_Stonebraker)
> > were behind these concepts; they wrote a manifesto for 3rd (or 4th?)
> > generation relational databases. Aha, found it, worth reading by
> > anybody interested in databases, a classic paper.
>
> > http://www.cl.cam.ac.uk/teaching/2003/DBaseThy/or-manifesto.pdf
>
> > On May 27, 12:43 pm, Rami Ojares <[email protected]> wrote:
> >> Indeed. There's a lot to wonder in this world.
> >> I have always been wondering why H2 supports an array data structure
> >> at all, since it does not fit nicely into the relational model.
>
> >> - Rami
>
> >> On 26.5.2012 23:44, essence wrote:
> >>> OK, thanks for clarifying, the problem is indeed disc access, and the
> >>> longer the row the fewer rows are on a page, I guess, so effectively
> >>> long arrays will slow down deletion because more page reads need to be
> >>> performed.
> >>> I have only pursued this as a counterpart to all the advice about IN.
> >>> Getting rid of IN does not always help!
> >>> Interesting that you keep the array data on the same page as the rest
> >>> of the row, I assume? I thought some databases put this stuff (and
> >>> blobs etc.) somewhere different, maybe for this same reason.
> >>> On May 21, 6:36 pm, Thomas Mueller<[email protected]>
> >>> wrote:
> >>>> Hi,
> >>>>> the bottleneck is that when deleting a row,
> >>>>> the complete row is read, and if that row contains a column with a
> >>>>> large array, the reading of the array is the bottleneck (even though
> >>>>> it is about to be deleted).
> >>>> Yes. However, I don't think converting the byte array to a row is
> >>>> the problem. I guess the problem is that the page is read (from
> >>>> disk). This is unavoidable unless you delete all rows within a page.
> >>>> What you could do is delete all rows of a table, using "truncate"
> >>>> or "drop table".
> >>>> Regards,
> >>>> Thomas

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/h2-database?hl=en.
