----- Quote from Robert Haas (robertmh...@gmail.com), on 16.09.2014 at 22:20 -----
>
> In practice, I'm not very surprised that the impact doesn't seem too
> bad when you're running SQL queries from the client. There's so much
> other overhead, for de-TOASTing and client communication and even just
> planner and executor costs, that this gets lost in the noise. But
> think about a PL/pgsql procedure, say, where somebody might loop over
> all of the elements in array. If those operations go from O(1) to
> O(n), then the loop goes from O(n) to O(n^2). I will bet you a
> beverage of your choice that somebody will find that behavior within a
> year of release and be dismayed by it.
>
Hi,

I can imagine a situation exactly like that. We could use a jsonb object to represent sparse vectors in the database, where each key is a dimension and each value is that dimension's value. Such objects could easily grow to thousands of dimensions. Once you have them in the database, it is easy to go and write some simple numeric computations on these vectors, say the dot product of two sparse vectors. If random access inside one vector goes from O(1) to O(n), then a dot product implemented as a loop of per-key lookups goes from O(m) to O(m*n), so not pretty.

I am not saying that the DB is the right place to do this type of computation, but it is sometimes convenient to have it in the DB as well.

Regards,
luben
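For concreteness, here is a minimal PL/pgsql sketch of such a dot product. It is not from the original post: the function name sparse_dot and the {"dimension": value} layout are assumptions. Each probe of v2 inside the loop is the kind of random access whose cost is at issue.

CREATE OR REPLACE FUNCTION sparse_dot(v1 jsonb, v2 jsonb) RETURNS numeric
LANGUAGE plpgsql AS $$
DECLARE
    k   text;
    val numeric;
    acc numeric := 0;
BEGIN
    -- Loop over the keys (dimensions) of the first vector and probe the
    -- second one; each "v2 ? k" / "v2 ->> k" is a key lookup in jsonb.
    FOR k, val IN SELECT key, value::numeric FROM jsonb_each_text(v1) LOOP
        IF v2 ? k THEN
            acc := acc + val * (v2 ->> k)::numeric;
        END IF;
    END LOOP;
    RETURN acc;
END;
$$;

-- Example (hypothetical data):
--   SELECT sparse_dot('{"3": 1.5, "17": 2}'::jsonb, '{"17": 4, "90": 1}'::jsonb);
--   => 8

With m keys in v1 and n keys in v2, the loop performs m lookups into v2, so the cost of each lookup multiplies directly into the cost of the whole computation.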