Re: [HACKERS] Not In Foreign Key Constraint

2013-09-19 Thread Misa Simic
2013/9/19 David Johnston 

> Misa Simic wrote
> > I guess that rule can be achieved with triggers on TableA and TableC -
> > but the same is true for FK (and an FK constraint is more effective
> > than a trigger - that is why I wonder whether it would be
> > useful/achievable to create that kind of constraint)
> >
> > Thoughts, ideas?
>
> You create a common "keys in use" table and only insert a record into the
> main tables if you can successfully add the desired key to the shared keys
> table (as a unique value). Set up a normal FK to that table to help
> enforce that valid records must exist in the keys table. Not fool-proof,
> but you only need to worry about insertions - delete from the pk table to
> remove the record from the main table and free up the key.
>
> David J.
>

Thanks David,

Yes, that is one of the ways the goal can be achieved via triggers (or by
letting someone else worry about the key being inserted/updated/deleted in
the master table first...)
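
For illustration, a minimal sketch of that shared-keys approach (table and
column names are hypothetical; the "owner" column is my own addition to tie
each main table to its claim on the key):

CREATE TABLE keys_in_use (
    key   text PRIMARY KEY,   -- each key can be claimed only once
    owner char(1) NOT NULL,
    UNIQUE (key, owner)       -- target for the composite FKs below
);

CREATE TABLE table_a (
    key   text PRIMARY KEY,
    owner char(1) NOT NULL DEFAULT 'A' CHECK (owner = 'A'),
    payload text,
    FOREIGN KEY (key, owner) REFERENCES keys_in_use (key, owner)
);

CREATE TABLE table_c (
    key   text PRIMARY KEY,
    owner char(1) NOT NULL DEFAULT 'C' CHECK (owner = 'C'),
    payload text,
    FOREIGN KEY (key, owner) REFERENCES keys_in_use (key, owner)
);

-- 'k1' is claimed for table_a; claiming it again for table_c would
-- violate the primary key of keys_in_use.
INSERT INTO keys_in_use VALUES ('k1', 'A');
INSERT INTO table_a (key, payload) VALUES ('k1', 'somedata');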

A constraint should be the more effective way... (It shouldn't be confused
with an FK constraint - even though it is its opposite in a sense... - that
was just the simplest way to describe the feature)

And it should ensure that every row in the table is valid from the moment
it is created (which a trigger can't ensure - a constraint either does it,
or it can't be created at all, etc.)

Thanks,

Misa


[HACKERS] Not In Foreign Key Constraint

2013-09-16 Thread Misa Simic
Hi hackers,

I just wonder how hard it would be to implement something like a "Not In FK
Constraint" - the opposite of an FK...

i.e.:

 An FK ensures that the value of the FK column of an inserted row exists in
the referenced table.

 A NotInFK should ensure that the value of the NotInFK column of an
inserted row does not exist in the referenced table...


The only difference/problem I see is that adding that constraint on one
table forces the same constraint on the other table (but in the opposite
direction)


i.e.

TableA(tableA_pk, other_columns)
TableB(tableb_fk_tableA_pk, other_columns)
TableC(tablec_notInfk_tableA_pk, other_column)


each _pk column is the primary key of its table;
TableB's PK is, at the same time, an FK to TableA...

INSERT INTO TableA VALUES ('tableAPK1', 'somedata')

INSERT INTO TableB VALUES ('tableAPK1', 'somedata')

everything is OK,


now, we would like to add a NotInFK on TableC to TableA

INSERT INTO TableC VALUES ('tableAPK1', 'somedata')

Should fail - because 'tableAPK1' exists in TableA

INSERT INTO TableC VALUES ('tableAPK2', 'somedata')

Should pass - because 'tableAPK2' does not exist in TableA...

However, now

INSERT INTO TableA VALUES ('tableAPK2', 'somedata')

should fail as well - because that value exists in TableC


I guess that rule can be achieved with triggers on TableA and TableC - but
the same is true for FK (and an FK constraint is more effective than a
trigger - that is why I wonder whether it would be useful/achievable to
create that kind of constraint)
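
For the record, a rough sketch of the trigger-based emulation (simplified:
concurrent inserts would also need locking or serializable isolation to be
airtight; the mirror-image trigger on TableA is analogous):

CREATE OR REPLACE FUNCTION tablec_not_in_tablea() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM TableA
               WHERE tableA_pk = NEW.tablec_notInfk_tableA_pk) THEN
        RAISE EXCEPTION 'value % already exists in TableA',
            NEW.tablec_notInfk_tableA_pk;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tablec_notinfk
    BEFORE INSERT OR UPDATE ON TableC
    FOR EACH ROW EXECUTE PROCEDURE tablec_not_in_tablea();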

Thoughts, ideas?

Many thanks,

Misa










[HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-25 Thread Misa Simic
Hi,

Where can we find the latest snapshot of the 9.3 version?

We have taken the latest snapshot from
http://ftp.postgresql.org/pub/snapshot/dev/

But it seems it is for the 9.4 version...

Thanks,

Misa


Re: [HACKERS] missing event trigger support functions in 9.3

2013-05-09 Thread Misa Simic
2013/5/9 Pavel Stehule 

> Hello
>
> I am writing an article about 9.3. I found that the event trigger
> functions are not complete. We have only the
> pg_event_trigger_dropped_objects() function. It looks really strange and
> asymmetric. Can we implement a similar function for CREATE, as a minimum,
> in 9.3?
>
> I am expecting that this function should not be too complex - and maybe
> it can be moved to contrib (if it is too late now).
>
> Regards
>
> Pavel
>


Maybe this workaround can help:

CREATE OR REPLACE FUNCTION snitch() RETURNS event_trigger AS $$
DECLARE
    my_cmd text;
BEGIN
    SELECT query INTO my_cmd
    FROM pg_stat_activity
    WHERE pid = pg_backend_pid();
    RAISE NOTICE 'snitch: % % %', tg_event, tg_tag, my_cmd;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER snitch ON ddl_command_start EXECUTE PROCEDURE snitch();

Then, based on tg_tag, you can have different rules to extract the object
name from my_cmd - or whatever else is needed for additional rules...
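
For instance, a rough sketch extending the function above (the tags handled
and the regular expressions are only illustrative - quoting, IF NOT EXISTS
and schema qualification are ignored):

CREATE OR REPLACE FUNCTION snitch() RETURNS event_trigger AS $$
DECLARE
    my_cmd text;
    obj    text;
BEGIN
    SELECT query INTO my_cmd
    FROM pg_stat_activity
    WHERE pid = pg_backend_pid();
    IF tg_tag = 'CREATE TABLE' THEN
        obj := substring(my_cmd from '(?i)CREATE\s+TABLE\s+(\S+)');
    ELSIF tg_tag = 'DROP TABLE' THEN
        obj := substring(my_cmd from '(?i)DROP\s+TABLE\s+(\S+)');
    END IF;
    RAISE NOTICE 'snitch: % % object=%', tg_event, tg_tag, obj;
END;
$$ LANGUAGE plpgsql;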

Kind Regards,

Misa


Re: [HACKERS] Graph datatype addition

2013-05-01 Thread Misa Simic
On Wednesday, May 1, 2013, Atri Sharma wrote:

> Hi all,
>
> Please find a probable prototype for the same:
>
> struct GraphNode
> {
> Oid NodeOid;// Oid of the row which is the node here. We will
> store an identifier to it here rather than the complete row(with data)
> itself.
> AdjacencyList *list;   // Pointer to the node's adjacency list.
> };
>
> struct AdjacencyList
> {
>   Oid[] neighbours_list;
> };
>
> struct AdjacencyList is probably the 'hottest' data structure in our
> entire implementation. We can think of making a cache of recently
> accessed struct AdjacencyList instances, or the AdjacencyList(s) of
> the neighbours of the recently accessed nodes, because, they are most
> likely to be accessed in near future. Advice here, please?
>
> So.
>
> struct AdjacencyCache
> {
>  Oid[] cache_values;
> };
>
> push and pop functions for AdjacencyCache follow.
>
> We need a replacement and invalidation algorithm for the cache. I feel
> a version of LRU should be good here.
>
> I have not given a prototype for operations and algorithm implementations.
>
> I feel,as suggested by Peter and Jaime, we can look at pgRouting code
> for algorithm implementations.
>
> Florian's concerns are mitigated here to some extent,IMO. Since the
> nodes and linkings are loosely coupled, and not represented as a
> single representation, updating or changing of any part or adding a
> new edge is no longer an expensive operation, as it only requires a
> lookup of GraphNode and then its AdjacencyList. If we use the cache as
> well, it will further reduce the lookup costs.
>
> I have not yet thought of the user visible layer as suggested by Jim.
> Probably. once we are ok with the internal layer, we can move to the
> user visible layer.
>
> Advice/Comments/Feedback please?
>
>
Honestly - I don't think I understand the proposal...

Datatypes are about values - what will be stored in that column in a
table.

A datatype can't have any clue about "rows".

As I understand what you described - you can achieve the same with pure
SQL - the structs are equivalent to graph tables... instead of an Oid
column you would store PKs of a nodes table...
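
A sketch of that relational equivalent (hypothetical names) - the structs
map directly onto a nodes table and an edges/adjacency table:

CREATE TABLE node (
    node_id serial PRIMARY KEY,
    payload text
);

CREATE TABLE edge (
    from_node int NOT NULL REFERENCES node (node_id),
    to_node   int NOT NULL REFERENCES node (node_id),
    PRIMARY KEY (from_node, to_node)
);

-- the "adjacency list" of node 1 is then simply:
SELECT to_node FROM edge WHERE from_node = 1;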


Re: [HACKERS] Graph datatype addition

2013-04-29 Thread Misa Simic
On Monday, April 29, 2013, Atri Sharma wrote:

> On Mon, Apr 29, 2013 at 10:12 PM, Misa Simic 
> >
> wrote:
> > Hi Atri,
> >
> > What is an example of the custom internal representation and its JSON
> > representation (though both JSON and HStore represent their values as
> > text)?
> >
> > I also think that the key question is: "what operations would you
> > support on this data type?"
> >
> > Or what kind of problems will it solve? (what can't be solved now - or
> > can be now, but the new type will allow a better way...)
> >
> > Thanks,
> >
> > Misa
> >
> >
>
> Hi Misa,
>
> Thanks for thinking it through.
>
> I have not thought about it yet(I was going with the HStore
> representation till the moment, which I showed in my first mail in
> this thread) I believe that something on these lines could be done:
>
> Entity 1:
>
> Node: Node1
>
> Adjacency list: node2, node3, node4
>
> Entity 2:
>
> Node: Node 2
>
> Adjacency list: node1, node5
>
> Entity 3:
>
> Node: Node 3
>
> Adjacency list: node1, node4
>
> Adjacency list sets:
>
> "Node1"=>"Entity1","Node2"=>"Entity2","Node3"=>"Entity3"
>
> I mentioned the potential operations we could have in a previous
> mail.Specifically,
>
> I can think of the standard tasks, i.e. searching if two nodes are
> connected or not,adding new nodes and edges, traversing the adjacency
> lists of nodes.
>
> If we add support for weighted graphs, we can probably add support for
> some common graph algorithms, such as Dijkstra's algorithm, the
> Bellman-Ford algorithm, an MST algorithm, network flow algorithms.
>
> The main idea is to allow user to work with graphs pretty easily, and
> allow the user to use the data present in his database to make graphs
> and then process them.
>
> I think we find work arounds or make shifts at the moment if we need
> to use graphs in our database in postgres. If we have a datatype
> itself, with support for commonly used operations built inside the
> type itself, that will greatly simplify user's tasks, and open up a
> whole new avenue of applications for us, such as recommender systems,
> social network analysis, or anything that can be done with graphs.
>
>
  Hm...

Have you considered maybe the ltree datatype?

To me, all of the described sounds solvable in a pure SQL way (+ the ltree
datatype to help with indexes and performance, as a materialised path, to
avoid a recursive query every time...)
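
A minimal sketch of the materialised-path idea with ltree (hypothetical
names; it needs the ltree extension and suits trees/hierarchies rather than
arbitrary graphs):

CREATE EXTENSION IF NOT EXISTS ltree;

CREATE TABLE node (
    node_id serial PRIMARY KEY,
    path    ltree NOT NULL      -- e.g. 'root.a.b' for node b under a
);

CREATE INDEX node_path_gist ON node USING gist (path);

-- all descendants of 'root.a', without a recursive query:
SELECT * FROM node WHERE path <@ 'root.a';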

Though it would be nice to see something new that would simplify the tasks...

Cheers,

Misa


Re: [HACKERS] Graph datatype addition

2013-04-29 Thread Misa Simic
Hi Merlin,

" Graphs are not currently able to be transported
out of the database currently via JSON"

What does that mean?

(I probably don't understand graphs well - but from my point of view, any
data can be transported out of the DB via JSON)

Thanks,

Misa


2013/4/29 Merlin Moncure 

> On Mon, Apr 29, 2013 at 12:55 AM, Atri Sharma  wrote:
> >> It's probably pretty easy to add this, but I think the question is
> >> what would make it better than storing the same representation in a
> >> text field.
> >
> > I completely agree. The main point in making a new datatype would be
> > to add support for operations that are normally done with graphs.
> >
> >
> >>Obviously you get validation that the input is in the
> >> correct format, but you could do that with a CHECK constraint, too, or
> >> otherwise handle it in the application.  So I think the really
> >> interesting question is: what operations would you support on this
> >> data type?
> >
> > I can think of the standard tasks, i.e. searching if two nodes are
> > connected or not,adding new nodes and edges, traversing the adjacency
> > lists of nodes.
> >
> > If we add support for weighted graphs, we can probably add support for
> > some common graph algorithms, such as Dijkstra's algorithm, the
> > Bellman-Ford algorithm, an MST algorithm, network flow algorithms.
> >
> > The main idea is to allow user to work with graphs pretty easily, and
> > allow the user to use the data present in his database to make graphs
> > and then process them.
> >
> >> One of the problems you're likely to run into if you store the whole
> >> graph as a single object is that it may make many of the things you
> >> want to do with it not very efficient.
> >
> > Yes, I agree. On further thought, I believe it would be more of a pain
> > if we stick to representing the whole thing as one.Rather,making
> > multiple components will be more flexible and modular, and allow us to
> > modify different components of the same graph without modifying or
> > interfering with other components of the graph.
> >
> > I will think of a new design. I am still thinking of using HStore to
> > store adjacency lists. This should have good performance for access of
> > lists and similar tasks, IMO.
>
> This is an interesting idea.  Historically I've always decomposed
> graphs into relational structures because that's the only practical
> way to query them.   Graphs are not currently able to be transported
> out of the database currently via JSON so one of the areas to focus
> your research will be how the client will consume the data.
> libpqtypes is one way to do it, but that will really restrict you
> audience so you'll probably need a rich set of functions present the
> internal data (just like hstore).
>
> Another area to focus research will be on searchability: how to use
> GIST/GIN indexes to pull data out via an internal query string. An
> overview of the current GIST based type implementations (like ltree)
> couldn't hurt.
>
> merlin
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Graph datatype addition

2013-04-29 Thread Misa Simic
Hi Atri,

What is an example of the custom internal representation and its JSON
representation (though both JSON and HStore represent their values as text)?

I also think that the key question is: "what operations would you support
on this data type?"

Or what kind of problems will it solve? (what can't be solved now - or can
be now, but the new type will allow a better way...)

Thanks,

Misa


2013/4/29 Atri Sharma 

> >
> > I don't agree with this; JSON is not really designed to store graphs.
> > You will probably need a customized internal representation, just like
> > hstore, that expresses a graph like structure.
>
> Yes, we will have a custom internal representation. I was thinking of
> ways to export the graph into user parsable type, hence JSON.
>
>
>
> Regards,
>
> Atri
>
>
> --
> Regards,
>
> Atri
> l'apprenant
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Fwd: Range types (DATERANGE, TSTZRANGE) in a foreign key with "inclusion" logic

2013-04-07 Thread Misa Simic
Hi,

As far as I am aware, the described FK feature does not exist yet...

But it should be possible to ensure that rule via triggers: one after
update on the container, and one after insert/update on the item...
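
A rough sketch of those two triggers, against the container/item tables from
the mail below (simplified; concurrent modifications would need extra care):

CREATE OR REPLACE FUNCTION check_item_lifetime() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM container c
                   WHERE c.id = NEW.container_id
                     AND NEW.lifetime <@ c.lifetime) THEN
        RAISE EXCEPTION 'item lifetime % not covered by its container',
            NEW.lifetime;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_lifetime_check
    AFTER INSERT OR UPDATE ON item
    FOR EACH ROW EXECUTE PROCEDURE check_item_lifetime();

CREATE OR REPLACE FUNCTION check_container_lifetime() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM item i
               WHERE i.container_id = NEW.id
                 AND NOT (i.lifetime <@ NEW.lifetime)) THEN
        RAISE EXCEPTION 'container lifetime no longer covers all its items';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER container_lifetime_check
    AFTER UPDATE ON container
    FOR EACH ROW EXECUTE PROCEDURE check_container_lifetime();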


Kind Regards,

Misa

On Saturday, April 6, 2013, Matthias Nagel wrote:

> Hello,
> this is a re-post from the SQL users list from 2 months ago, because I
> assume only a developer can answer the questions below.
> Thanks, Matthias Nagel
>
>
> --  Forwarded message  --
>
> Subject: Range types (DATERANGE, TSTZRANGE) in a foreign key with
> "inclusion" logic
> Date: Wednesday, 23 January 2013, 11:28:10
> From: Matthias Nagel >
> To: pgsql-...@postgresql.org 
>
> Hello everybody,
>
> first a big thank you to all that make the range types possible. They are
> great, especially if one runs a database to manage a student's university
> dormitory with a lot of temporal information like rental agreements, room
> allocations, etc. At the moment we are redesigning our database scheme for
> PostgreSQL 9.2, because the new range types and especially the "EXCLUSION"
> constraints allow us to put a lot more (business) logic into the database
> schema than before.
>
> But there is one feature missing (or I am too stupid to find it).
>
> Let's say we have some kind of container with a lifetime attribute, i.e.
> something like that
>
> CREATE TABLE container (
>   id SERIAL PRIMARY KEY,
>   lifetime DATERANGE
> );
>
> Further, there are items that must be part of the container and these
> items have a lifetime, too.
>
> CREATE TABLE item (
>   id SERIAL PRIMARY KEY,
>   container_id INTEGER,
>   lifetime DATERANGE,
>   FOREIGN KEY (container_id) REFERENCES container ( id ),
>   EXCLUDE USING gist ( container_id WITH =, lifetime WITH && )
> );
>
> The foreign key ensures that items are only put into containers that
> really exist, and the exclude constraint ensures that only one item is a
> member of the same container at any point in time.
>
> But actually I need a little bit more logic. The additional constraint is
> that items must only be put into those containers whose lifetime covers
> the lifetime of the item. If an item has a lifetime that exceeds the
> lifetime of the container, the item cannot be put into that container. If
> an item is already in a container (with valid lifetimes) and later the
> container or the item is updated such that either lifetime is modified and
> the constraint is no longer fulfilled, this update must fail.
>
> I would like to do something like:
>
> FOREIGN KEY ( container_id, lifetime ) REFERENCES other_table ( id,
> lifetime ) USING gist ( container_id WITH =, lifetime WITH <@ )
>
> (Of course, this is PostgreSQL pseudo-code, but it hopefully makes clear
> what I want.)
>
> So, now my questions:
>
> 1) Does this kind of feature already exist in 9.2? If yes, a link to the
> documentation would be helpful.
>
> 2) If this feature does not directly exist, has anybody a good idea how to
> mimic the intended behaviour?
>
> 3) If neither 1) or 2) applies, are there any plans to integrate such a
> feature? I found this discussion
> http://www.postgresql.org/message-id/4f8bb9b0.5090...@darrenduncan.net .
> Does anybody know about the progress?
>
> Having range types and exclusion constraints is nice, as I said in the
> introduction. But if the reverse (a foreign key with inclusion) would also
> work, the range type feature would really be amazing.
>
>
> Best regards, Matthias Nagel
>
>
>
> --
> Matthias Nagel
> Willy-Andreas-Allee 1, Zimmer 506
> 76131 Karlsruhe
>
> Telefon: +49-721-8695-1506
> Mobil: +49-151-15998774
> e-Mail: matthias.h.na...@gmail.com 
> ICQ: 499797758
> Skype: nagmat84
>
>
>
>
> --
> Sent via pgsql-hackers mailing list 
> (pgsql-hackers@postgresql.org
> )
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


[HACKERS] Altering Views

2013-02-20 Thread Misa Simic
Hi,

If we want to "add new column" to the view - the only one way (we have
found) is:

-drop view
-create it again ( with new column )

Now, if some other view depends on the view we want change - it will not
allow us to drop the view - what is fine, of course - but it does not allow
us to change it either (add column) even our change has not any effect on
other views what depend on this one...

So what we are doing is:

1. Take the create scripts of all other views that depend on this one
2. Take the create scripts of all other views that depend on any of the
views from point 1 (recursively)

3. Drop the view (cascaded)

4. Create the view with the new column
5. Run the create scripts taken in points 1 and 2...

A lot of hassle - just for adding one more column to the view...
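
A toy example of the dance (hypothetical views):

CREATE VIEW v1 AS SELECT 1 AS a;
CREATE VIEW v2 AS SELECT a FROM v1;        -- depends on v1

-- adding a column to v1 then means:
DROP VIEW v1 CASCADE;                      -- v2 is dropped too
CREATE VIEW v1 AS SELECT 1 AS a, 2 AS b;   -- recreate with the new column
CREATE VIEW v2 AS SELECT a FROM v1;        -- re-run the saved script for v2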

Is there any better way to alter a view without the hassle?

If not - are there any plans to allow it?

Many Thanks,

Misa


Re: [HACKERS] function for setting/getting same timestamp during whole transaction

2013-02-06 Thread Misa Simic
Hi,


I don't have access to pg at the moment... But:

BEGIN;

SELECT now();              -- transaction start time

SELECT clock_timestamp();  -- the actual current time; differs from now()

SELECT now();              -- same value as the first now()

SELECT pg_sleep(100);

SELECT now();              -- still the same value

COMMIT;


now() should always return the same, very first, result: within a
transaction it is fixed at the transaction start time...




On Wednesday, February 6, 2013, Miroslav Šimulčík wrote:

> Hi all,
>
> I have a deferred constraint update trigger in which I need to set the
> same timestamp on all modified rows. The time needs to be the time of the
> first invocation of this trigger function in the transaction. My intention
> is to set the commit time on rows modified in a transaction.
>
> So I need a function that will store and return a given timestamp on its
> first call in a transaction, and on subsequent calls will return the
> stored timestamp. This function has to be as fast as possible to minimize
> the impact on the performance of the trigger.
>
> I have created a plpgsql function that uses a temporary table for this
> task. On the first invocation in a transaction a row with the timestamp is
> inserted, and on commit it is deleted. What I don't like is the overhead
> of checking table existence on each invocation. Here is the code:
>
> CREATE OR REPLACE FUNCTION get_my_timestamp (
>     IN in_initial_timestamp TIMESTAMPTZ
> ) RETURNS TIMESTAMPTZ AS
> $$
> DECLARE
>     v_ret TIMESTAMPTZ;
> BEGIN
>     -- check temp table existence
>     PERFORM 1
>     FROM
>         pg_catalog.pg_class c
>         JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
>     WHERE
>         c.relkind IN ('r','') AND
>         c.relname = 'timestamp_storage' AND
>         pg_catalog.pg_table_is_visible(c.oid) AND
>         n.nspname LIKE 'pg_temp%';
>     IF NOT FOUND THEN
>         CREATE TEMP TABLE timestamp_storage (
>             my_timestamp TIMESTAMPTZ
>         ) ON COMMIT DELETE ROWS;
>     END IF;
>     -- select timestamp
>     SELECT my_timestamp INTO v_ret
>     FROM timestamp_storage;
>     IF NOT FOUND THEN
>         INSERT INTO timestamp_storage (my_timestamp)
>         VALUES (in_initial_timestamp)
>         RETURNING my_timestamp INTO v_ret;
>     END IF;
>
>     RETURN v_ret;
> END;
> $$ LANGUAGE plpgsql;
>
> Example:
> begin;
> select get_my_timestamp(clock_timestamp());
>       get_my_timestamp
> -----------------------------
>  2013-02-06 11:07:33.698+01
> select get_my_timestamp(clock_timestamp());
>       get_my_timestamp
> -----------------------------
>  2013-02-06 11:07:33.698+01
> commit;
> select get_my_timestamp(clock_timestamp());
>       get_my_timestamp
> -----------------------------
>  2013-02-06 11:09:02.406+01
>
> Is there any more effective way of accomplishing this? Maybe in a
> different language.
>
> Regards,
> Miroslav Simulcik
>


Re: is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-10-01 Thread Misa Simic
Datum_to_json



SELECT 'a=>1'::hstore, '<a>1</a>'::xml, '{"a":1}'


(Please note that the last column is of unknown datatype)



Now, what is the main goal? To get:

1)

{
    "hstore": "\"a\"=>\"1\"",
    "xml": "<a>1</a>",
    "?column?": "{\"a\":1}"
}

or:

2)

{
    "hstore": {"a": "1"},
    "xml": {"a": "1"},
    "?column?": {"a": 1}
}


1) is already possible to get now:

SELECT row_to_json(t) FROM (SELECT 'a=>1'::hstore, '<a>1</a>'::xml,
'{"a":1}') AS t

I don't know how things work under the hood (I haven't taken a look at the
row_to_json source...) but it tells me that there is already a
Datum_to_json - somewhere...


2) is not possible at the moment... but it would be if we had CAST
functions for each datatype. I am not sure whether it is possible to write
some generic function that converts any datatype to JSON (or to an
"intermediate" dynamic datatype) without knowing specific things about the
concrete datatype (though I don't see a big difference whether the type
itself provides a _to_json or a to_dynamic_type function)...


Is JSON really a type?

I think it is... but it just needs better handling... We have the JSON type
at the moment - though we can't say:

SELECT * FROM foo WHERE json_member("member_name", json_column) = 1;

I mean - we can't without plv8, to be more precise... We used to use plv8,
but it became a bit slow as the table grew... so we made some workarounds
with custom functions that improve the response for the above query...
however it is very customised... but maybe it could give some ideas for
indexing JSON...


Basically... if we have a table:

foo (id PK, some columns, json_column)

we have made another table:

index_json (id int, member_name ltree, json_value text), with an index on
(member_name, json_value).

When we insert a row into foo, we also flatten the json_column value - e.g.
the desired result 2) from above becomes:

1, 'hstore.a', '"1"'

1, 'xml.a', '"1"'

1, '?column?.a', '1'

And now, when we want the result of SELECT * FROM foo WHERE
json_member("xml.a", json_column) = 1;

we actually ask SELECT id FROM index_json WHERE member_name = $member_name
AND json_value = $json_value INTO my_ids,

and then SELECT * FROM foo WHERE id = ANY(my_ids)
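
In query form, the rewrite looks roughly like this (names as above;
json_member() is our custom helper, not a built-in):

SELECT f.*
FROM foo f
WHERE f.id IN (SELECT ij.id
               FROM index_json ij
               WHERE ij.member_name = 'xml.a'
                 AND ij.json_value  = '1');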


Also, to get SELECT * FROM foo as one JSON value - at the moment, I think
the query is a monster (without plv8); it would be best if it were possible
to provide some shortcut support...


Kind regards,

Misa


Re: [HACKERS] data to json enhancements

2012-09-29 Thread Misa Simic
No probs...

And I did... The thing is, the subject is too wide... the post too long...
The intention was just to better explain my thoughts... I am not a blogger
anyway, just new in the Postgres community... Trying to say, probably 90%
of the post would be sufficient just for the list, and because I am new it
is hard for me to identify the right 10% piece :)

cheers,

Misa

On Saturday, September 29, 2012, Andrew Dunstan wrote:

>
>
> I think if you want to contribute you should post on the mailing list -
> otherwise the conversation just becomes way too fragmented.
>
> cheers
>
> andrew
>
>


Re: [HACKERS] data to json enhancements

2012-09-28 Thread Misa Simic
Hi Guys,

I have made a blog post about the subject:

http://misasimic.blogspot.co.uk/2012/09/postgresql-92-and-json-datatype.html

Hopefully it will help in some way...

Kind Regards,

Misa


Re: [HACKERS] data to json enhancements

2012-09-27 Thread Misa Simic
Hm...

IMO,

'[1,2,3]'::json
'[1,2,3]'::text::json
'[1,2,3]'::int[]::json

are the same thing... (though I am not sure '[1,2,3]'::int[] is valid in
postgres...)

in js, var o = JSON.parse(result_of_any_cast_above) should produce an array
of 3 integers

'[1,2,3]' is different from '"[1,2,3]"'

If there is a need for some text value like '[1,2,3]' to be treated as a
JSON text value, then it would be: quote_literal('[1,2,3]')::json

Kind Regards,

Misa




2012/9/27 Merlin Moncure 

> On Thu, Sep 27, 2012 at 8:22 AM, Robert Haas 
> wrote:
> > On Wed, Sep 26, 2012 at 1:46 PM, Tom Lane  wrote:
> >> Also, on reflection I'm not sure about commandeering cast-to-json for
> >> this --- aren't we really casting to "json member" or something like
> >> that?  The distinction between a container and its contents seems
> >> important here.  With a container type as source, it might be important
> >> to do something different if we're coercing it to a complete JSON
> >> value versus something that will be just one member.  I'm handwaving
> >> here because I don't feel like going back to re-read the RFC, but
> >> it seems like something that should be considered carefully before
> >> we lock down an assumption that there can never be a difference.
> >
> > I feel like there are two different behaviors that someone might want
> > here, and a cast cannot mean both.
> >
> > 1. Please reinterpret the existing value that I have already got as a
> > JSON object.  For example, you might have a text field in which you
> > have been storing JSON values.  Once you upgrade to 9.2, you might
> > want to reinterpret the existing contents of the field - which are
> > already valid JSON - as JSON objects.
> >
> > 2. Please convert the value that I have into a JSON object according
> > to a type-specific rule.  For example, you might have a text field in
> > which you store arbitrary strings.  But perhaps you need to store
> > structured data there, so once you upgrade to 9.2 you might want to
> > wrap up your strings inside JSON strings.
> >
> > Now there is some subtle ambiguity here because in some cases the
> > behavior can be exactly the same in both cases.  For example most
> > numeric values will get the same treatment either way, but NaN cannot.
> >  If you do mynumeric::json, interpretation #1 will fail for NaN but
> > interpretation #2 will probably produce something like "NaN".
> > Similarly if the type is boolean, we could likely get away with
> > producing true and false for either interpretation.  If the type is
> > hstore, then #1 is going to fail, but #2 is going to convert "1"=>"2"
> > to {"1":"2"}.  So in general it might seem that #2 is the better
> > interpretation, because it gives many casts a sensible interpretation
> > that is otherwise lacking.
> >
> > But, what about text?  It seems to me that users will count on the
> > fact that '[1,2,3]'::text::json is going to produce [1,2,3] (a JSON
> > array containing the first three numbers) and NOT "[1,2,3]" (a JSON
> > string containing 7 characters).  And that is emphatically
> > interpretation #1.
>
> Hm.  Well, that's a really good point although I kinda disagree with
> your assumption: I think it's much cleaner to have:
> select  '[1,2,3]'::int[]::json
> produce a json array.
>
> All types but text (record[] etc) would seem to use the type structure
> to define how the json gets laid out.  'text::json' is an exception,
> because there is an implied parse, which I'm starting to unfortunately
> think is the wrong behavior if you want to be able to make json datums
> out of sql datums: how do you create a vanilla json text datum?
>
> merlin
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-06-18 Thread Misa Simic
2012/6/18 Kevin Grittner 

> The many-to-one case seems like it is better handled in the other
> direction -- with the referenced table holding the set of valid keys
> and the referencing table holding the single key.  (I believe the
> general case of this is what Jeff called an "inclusion constraint"
> -- a feature he wants to add at some point.)  I can't think of a use
> case where that would not be better for this type of relationship
> than putting the array on the referencing side.
>
>
Hi Kevin,

Well, from my point of view, "many-to-one" or "one-to-many" is more a
matter of which side we are looking at the thing from... But let me better
explain what I meant...

Example 1)

If we have one table with master data, TableA(ID, other properties...), and
TableB(tableAIDFK, propA, propB, propC) - the PK of TableB is irrelevant at
this point... and let's say a lot of TableA tuples could have the same
TableB properties... So we can either have as many tuples in TableB as
there are matching TableA tuples, with the same values in propA, propB and
propC and an FK of the same type as in TableA - or have one tuple in TableB
with an array-typed FK field, so that it (one tuple in TableB) can point to
many tuples in TableA... And at the same time a simple element can exist in
TableA but does not have to exist in TableB...

What a test would show - is there any gain in this approach - I don't
know... but I think there should be, especially if propA, propB and propC
have to be updated for all of them...

Example 2)
From the other side, what Jeff proposes, and what is also useful but a
different thing, is to have the main data in TableA with a key field of a
range datatype, so that each element belonging to the range could have a
related tuple in TableB (also of the same range datatype - but smaller,
contained by the master one... or a simple datatype that is a subtype of
the range) - which is the other way around... the opposite of Example 1,
but different from a functional point of view, depending on which one is
the master... Also, for Example 1 the data in the FK do not need to form a
range, so basically IDs [1, 2, 4, 7] could have one tuple with their
properties, and [3, 5, 6] a second tuple with different properties...

I am not sure Example 2) is what Jeff called an "Inclusion Constraint" -
Jeff can explain it better :)

Based on

Simon Riggs wrote:
>> Do we need something like Exclusion FKs? i.e. the FK partner of
>> Exclusion Constraints?

>Yes, "Inclusion Constraints". I've known we need something like that
>since I did Exclusion Constraints, but I haven't gotten further than
>that.

>Regards,
>   Jeff Davis

I have understood it as:

TableA(ID, properties...)
TableB(ID, properties...)

Now if we define an FK on TableB to TableA, it means that a row inserted
into TableB must already have a row with the same ID value in TableA...

But what would be useful is to define an Exclude FK to TableA, to prevent
inserting a new row into TableB with an ID value that already exists in
TableA...

btw, if anyone is happy to point me in the right direction, and there is a
common feeling that it is a useful feature, I am happy to code it...
Actually, it is something I will code anyway for an "in-house" solution -
but it would be good to do it under Postgres standards...

Kind Regards,

Misa


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-06-17 Thread Misa Simic
2012/6/17 Kevin Grittner 

>
>
> Can someone provide a practical example of a "foreign key with array"
> use case?  The only situations I'm able to think of right now are the
> same cases where you would now use a table with primary keys of two
> tables to provide a many-to-many linkage.  Does this proposed feature
> handle other cases or handle this type of case better?
>


I can't imagine any other usability either... Just a many-to-one linkage...
or having a many-to-many link with fewer rows in the middle table... Which
is better - I think it should be measured...
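
For reference, the classic many-to-many linkage the array variant would
compete with (hypothetical names):

CREATE TABLE a (a_id int PRIMARY KEY);
CREATE TABLE b (b_id int PRIMARY KEY);

CREATE TABLE a_b (
    a_id int NOT NULL REFERENCES a (a_id),
    b_id int NOT NULL REFERENCES b (b_id),
    PRIMARY KEY (a_id, b_id)   -- one row per link
);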


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-06-17 Thread Misa Simic
2012/6/17 Simon Riggs 

>
> Do we need something like Exclusion FKs? i.e. the FK partner of
> Exclusion Constraints?
>

+1
Definitely, it would be something useful... Today's workaround to achieve
that, with an additional table and an additional column in the key, is a
bit awkward...


>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] [PATCH] Support for foreign keys with arrays

2012-06-17 Thread Misa Simic
IMO, both approaches make sense...

From the temporal point of view, no doubt, the referencing range should be
contained by the referenced one...

From the other side, it could be useful if the master table holds elements
of a simple data type, but some sets of elements have common properties in
another table... What today is doable by having the same data type in the
other table and repeating the same properties for each element... That
would be possible with a range data type, though the data are not always in
a range, so an array is probably the better option...

However, I am not sure about the maintenance point of view, i.e. when an
element should be removed from that common-properties set - but that is a
different topic :)

Kind Regards,

Misa

Sent from my Windows Phone

-Original Message-
From: Jeff Davis
Sent: 17/06/2012 08:55
To: Gabriele Bartolini
Cc: PostgreSQL-development; Marco Nenciarini
Subject: Re: [HACKERS] [PATCH] Support for foreign keys with arrays

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.2 Beta: intersection of daterange

2012-05-16 Thread Misa Simic
I think the result is OK... the intersection can only contain dates that
are in both ranges, and 2010-01-04 is not inside the first range...
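
A minimal check that shows it:

-- 2010-01-04 is not an element of the first range, so it cannot be in
-- the intersection:
SELECT '[2010-03-15,2010-05-22)'::daterange @> '2010-01-04'::date;  -- false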

Sent from my Windows Phone
From: Brar Piening
Sent: 16/05/2012 09:53
To: pgsql-hackers
Subject: [HACKERS] 9.2 Beta: intersection of daterange
I'm currently doing some tests on range types:

tests=# SELECT int8range(5,15) * int8range(10,20) AS intersection;
 intersection
--------------
 [10,15)
(1 row)


tests=#
tests=# SELECT '[2010-03-15,2010-05-22)'::daterange *
'[2010-01-04,)'::daterange AS intersection;
      intersection
-------------------------
 [2010-03-15,2010-05-22)
(1 row)


Does the second query trigger a bug or am I misunderstanding something?

I would expect [2010-01-04,2010-05-22) as result.

Regards,

Brar

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] Pg 9.2 extension install

2012-03-06 Thread Misa Simic
OK, I have removed all *.o and *.so files - and the extension is
successfully deployed on 9.2.

Thanks,

Misa

2012/3/6 Misa Simic 

> Hi,
>
> I have made some pg extension for Pg 9.1 and want to deploy it on Pg
> 9.2dev...
>
> When I try to create the extension it shows a version mismatch error: the
> server is 9.2, the library was made for 9.1...
>
> How do I make the library for 9.2?
>
> Thanks,
>
> Misa
>
> Sent from my Windows Phone
>


[HACKERS] Pg 9.2 extension install

2012-03-06 Thread Misa Simic
Hi,

I have made some pg extension for Pg 9.1 and want to deploy it on Pg
9.2dev...

When I try to create the extension it shows a version mismatch error: the
server is 9.2, the library was made for 9.1...

How do I make the library for 9.2?

Thanks,

Misa

Sent from my Windows Phone


Re: [HACKERS] JSON for PG 9.2

2012-01-11 Thread Misa Simic
"
I am able to write array_to_json fce and Andrew can write query_to_json"

+1

Thanks guys...

We are using a lot of JSON as communication protocol...

Having core support for JSON, and those functions, will be a real life
saver...

Many thanks,

Misa

Sent from my Windows Phone
From: Pavel Stehule
Sent: 11/01/2012 16:22
To: Robert Haas
Cc: David E. Wheeler; Claes Jakobsson; Dimitri Fontaine; Merlin
Moncure; Andrew Dunstan; Magnus Hagander; Jan Urbański; Simon Riggs;
Joey Adams; Bruce Momjian; PostgreSQL-development Hackers; Jan Wieck
Subject: Re: [HACKERS] JSON for PG 9.2
2012/1/11 Robert Haas :
> On Wed, Jan 11, 2012 at 9:41 AM, Pavel Stehule  
> wrote:
>> I understand it now. My opinion is that some operators and index search
>> can be in 9.2 - so use JSON just as a communication format now.
>>
>> * we need to build JSON
>> * we need to check if something is valid JSON
>> * we need to store JSON
>>
>> other steps should be (9.2)
>> * basic operators eq, neeq
>> * some tool like XQuery - simple query on JSON document available from
>> SQL that can be used for functional indexes.
>
> That would be nice, but let's not let the perfect be the enemy of the
> good.  We don't have a lot of time here.
>

sorry - replace 9.2 by 9.3 - I am sorry

I am able to write array_to_json fce and Andrew can write query_to_json

Pavel

> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] UUID datatype GiST index support

2011-08-22 Thread Misa Simic
Thanks Alexander,

'Scalar datatype' - that gave me a hint...

Looking further in the btree_gist source, the inet datatype seems a bit
more complex than uuid... a (char, char, char[16]) structure for inet,
compared to just char[16] for uuid.

The GiST pattern works with the double datatype... and there is a method
convert_network_to_scalar(Datum, Oid) which converts an inet value to a
scalar value... and then all the index compare functions are based on
doubles, which leads me to the conclusion (maybe wrong) that if I can
convert a UUID value to a double it would make the job a lot easier... and
pretty straightforward...

Any suggestion on how to convert a UUID (char[16]) to a scalar?

Looking into converting inet to scalar, what confuses me is that even
though there is char[16] for an IP address... the code is:

if (ip_family(ip) == PGSQL_AF_INET)
    len = 4;
else
    len = 5;

res = ip_family(ip);
for (i = 0; i < len; i++)
{
    res *= 256;
    res += ip_addr(ip)[i];
}
return res;



it takes just the first 4 or 5 bytes of the IP address, even though there
are 16 (declared as char[16])

Many thanks,

Misa


2011/8/22 Alexander Korotkov 

> Hi!
>
> On Mon, Aug 22, 2011 at 2:54 PM, Misa Simic  wrote:
>
>> static int
>> m4_uuidkey_cmp(const void *a, const void *b)
>> {
>> uuidKEY*ia = (uuidKEY *) (((Usrt *) a)->t);
>>  uuidKEY*ib = (uuidKEY *) (((Usrt *) b)->t);
>> int res;
>>
>> res = DatumGetInt32(DirectFunctionCall2(uuid_cmp,
>> UUIDPGetDatum(ia->upper), UUIDPGetDatum(ia->upper)));
>>  if (res == 0)
>> return DatumGetInt32(DirectFunctionCall2(uuid_cmp,
>> UUIDPGetDatum(ia->upper), UUIDPGetDatum(ib->upper)));
>>
>> return res;
>> }
>>
>>
>> Getting error: "aggregate value used where an integer was expected"!
>>
> Seems that you need the address-of operator before ia->upper and ia->lower
> (likely one of the operands should be "ia->lower"). UUIDPGetDatum expects
> a pointer as an argument, i.e. UUIDPGetDatum(&ia->upper).
>
>
>> It would be much appreciated if anyone could help me and suggest the best
>> way to make GiST support for the UUID datatype...
>>
> I think you're on the right track. btree_gist is an extension which
> provides GiST indexing of scalar datatypes. UUID is one of them. So, the
> module you are writing should be quite similar.
>
> --
> With best regards,
> Alexander Korotkov.


[HACKERS] UUID datatype GiST index support

2011-08-22 Thread Misa Simic
Hi,

Hopefully someone can help me and point me in the right direction :)

I have been looking for a GiST support extension for the UUID datatype...
Since I could not find one... I wanted to write it myself.

I need it more for an EXCLUSION constraint than for a GiST index just on a
UUID column...

i.e.:

CREATE TABLE test_exclude
(
id serial NOT NULL,
guid uuid NOT NULL,
valid_period period NOT NULL,
CONSTRAINT "test_excludepk" PRIMARY KEY (id),
EXCLUDE USING gist (guid WITH =, valid_period WITH &&) -- for the same
guid, periods must not overlap...
)

I have taken a look at the btree_gist contrib source code... there are GiST
support functions for many datatypes, so I wanted to take the same
"pattern" and use it...

However, the problem happened in the first line of code :) (though I am
coming from a totally different world - .Net)

pattern is:

typedef struct
{
  ADTdataType lower;
  ADTdataType upper;
} datatypeKEY;

i.e. for Date:

typedef struct
{
  DateADT lower;
  DateADT upper;
} dateKEY;


So I guessed for uuid it would be:

typedef struct
{
  pg_uuid_t lower;
  pg_uuid_t upper;
} uuidKEY;

because pg's uuid.h says:
 * In C, we use the name pg_uuid_t,
 * to avoid conflicts with any uuid_t type that might be defined by the
system headers...

and there is:

/* opaque struct; defined in uuid.c */
typedef struct pg_uuid_t pg_uuid_t;


But the compiler shows an error: field 'lower' (and 'upper') has incomplete
type...

I succeeded in avoiding that error by adding:

struct pg_uuid_t
{
    unsigned char data[UUID_LEN];
};

but then I get errors in the "compare" functions:

i.e.

static int
m4_uuidkey_cmp(const void *a, const void *b)
{
    uuidKEY    *ia = (uuidKEY *) (((Usrt *) a)->t);
    uuidKEY    *ib = (uuidKEY *) (((Usrt *) b)->t);
    int         res;

    res = DatumGetInt32(DirectFunctionCall2(uuid_cmp,
        UUIDPGetDatum(ia->upper), UUIDPGetDatum(ia->upper)));
    if (res == 0)
        return DatumGetInt32(DirectFunctionCall2(uuid_cmp,
            UUIDPGetDatum(ia->upper), UUIDPGetDatum(ib->upper)));

    return res;
}


Getting error: "aggregate value used where an integer was expected"!

It would be much appreciated if anyone could help me and suggest the best
way to make GiST support for the UUID datatype...

Many thanks,

Misa