Is there a way to force a new private transaction in a FUNCTION? That seems
like it would be a good solution here because I could simply do the insert
in the RAISE within its own private transaction. Then on the next iteration
of the loop (as long as I don't have repeatable reads) it should be pick
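PL/pgSQL has no built-in autonomous transactions; the usual workaround is dblink, which runs the statement on a second connection that commits independently of the calling transaction. A minimal sketch, assuming a hypothetical `log_table` and that the dblink extension is installed:

```sql
-- Sketch of the common dblink workaround for an "autonomous transaction"
-- inside a function. Connection string, table, and function names are
-- hypothetical; requires CREATE EXTENSION dblink.
CREATE OR REPLACE FUNCTION log_autonomously(msg text) RETURNS void AS $$
BEGIN
    -- The INSERT runs on a separate connection, so it commits even if the
    -- calling transaction later rolls back.
    PERFORM dblink_exec(
        'dbname=' || current_database(),
        format('INSERT INTO log_table(message) VALUES (%L)', msg)
    );
END;
$$ LANGUAGE plpgsql;
```

Under READ COMMITTED, the next iteration of the loop would then see the row committed by the second connection.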
On 01/16/2015 10:34 AM, Maciek Sakrejda wrote:
Thanks, everyone. That seems to be it. I still haven't gotten it to work
(I tried setting OPENSSL_DEFAULT_ZLIB=true in
/etc/postgresql/9.2/main/environment and restarting the server, then
adding the same env var when connecting with the client), but
Thanks, everyone. That seems to be it. I still haven't gotten it to work (I
tried setting OPENSSL_DEFAULT_ZLIB=true in
/etc/postgresql/9.2/main/environment and restarting the server, then adding
the same env var when connecting with the client), but now that I know
where the problem is, I think I c
Nathan, and all others,
I already have links between each node and each polygon at an initial state.
Creating a many-to-many table (nodes_polygons) and indexing it should be easy
at this point. However, if nodes are expected to remain static, new irregular
polygons will be added without being
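The many-to-many table mentioned above could be sketched as follows; table and column names are assumptions, not taken from the original schema:

```sql
-- Hypothetical link table between nodes and polygons.
CREATE TABLE nodes_polygons (
    node_id    integer NOT NULL REFERENCES nodes(id),
    polygon_id integer NOT NULL REFERENCES polygons(id),
    PRIMARY KEY (node_id, polygon_id)   -- also serves lookups by node_id
);
-- Separate index for the reverse direction (polygon -> nodes).
CREATE INDEX nodes_polygons_polygon_idx ON nodes_polygons (polygon_id);
```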
Magnus Hagander writes:
> There's been a few reports on this now. Perhaps we should add a note to the
> docs (not necessarily saying how to fix it, as it may differ, but a note
> saying that many distributions changed the way this is handled and that you
> might need to set an external override)?
On Fri, Jan 16, 2015 at 8:41 AM, Adrian Klaver
wrote:
> On 01/16/2015 08:30 AM, Tom Lane wrote:
>
>> Maciek Sakrejda writes:
>>
>>> I'm having a hard time getting SSL compression working (or even figuring
>>> out why it's not working) with my local Postgres server. The setting [1]
>>> is
>>> doc
On 01/16/2015 08:30 AM, Tom Lane wrote:
Maciek Sakrejda writes:
I'm having a hard time getting SSL compression working (or even figuring
out why it's not working) with my local Postgres server. The setting [1] is
documented to default to on, but according to the banner when I connect
with psql,
Maciek Sakrejda writes:
> I'm having a hard time getting SSL compression working (or even figuring
> out why it's not working) with my local Postgres server. The setting [1] is
> documented to default to on, but according to the banner when I connect
> with psql, it's off.
Possibly you have the s
Chris Mair writes:
> ...
> Interestingly, if you combine these, it quickly blows up! The following query
> with a limit 1000 already
> has a RES of well over 1GB. With larger limits it quickly thrashes my machine.
> enrico=# explain analyze
> SELECT substring((field_id ->'comment')::text,1,1),
>
On 01/15/2015 01:02 PM, Maciek Sakrejda wrote:
I'm having a hard time getting SSL compression working (or even figuring
out why it's not working) with my local Postgres server. The setting [1]
is documented to default to on, but according to the banner when I
connect with psql, it's off. It's sti
Robert DiFalco wrote:
> I must be doing something wrong because both of these approaches are giving
> me deadlock exceptions.
Deadlocks are to be expected if the INSERTs are batched within a single
transaction and there are several sessions doing this in parallel.
Given that there's an u
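Deadlocks from parallel batched INSERTs are typically avoided by making every session acquire its row locks in the same order, i.e. sorting each batch by the conflicting key. A sketch with illustrative names (`ON CONFLICT` needs PostgreSQL 9.5+; on older versions the same ordering idea applies to whatever upsert pattern is used):

```sql
-- If two sessions batch-insert overlapping keys in different orders inside
-- one transaction each, they can deadlock. Sorting the batch by the unique
-- key makes both sessions take locks in the same order, so one just waits.
INSERT INTO users (email)
SELECT e FROM unnest(ARRAY['a@x.com', 'b@x.com', 'c@x.com']) AS t(e)
ORDER BY e
ON CONFLICT (email) DO NOTHING;
```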
I'm having a hard time getting SSL compression working (or even figuring
out why it's not working) with my local Postgres server. The setting [1] is
documented to default to on, but according to the banner when I connect
with psql, it's off. It's still off even if I explicitly set
PGSSLCOMPRESSION=
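For reference, this is the shape of the override being discussed on a Debian-style install; exact variable handling differs between distributions and OpenSSL builds, so treat this as a sketch rather than a confirmed fix:

```
# /etc/postgresql/9.2/main/environment  (server side, Debian layout),
# then restart the server:
OPENSSL_DEFAULT_ZLIB = 'true'

# client side, set the same variable before connecting:
#   OPENSSL_DEFAULT_ZLIB=true psql "host=localhost sslmode=require"
```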
The code shown in the Doc (I think) will still give you a deadlock in the
case where you have two sessions concurrently trying to insert the same
'hometown'. For example:
INSERT INTO users VALUES('Tom', select_hometown_id('Portland, OR'));
> INSERT INTO users VALUES('Waits', select_hometown_id('P
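A common race-safe pattern for a `select_hometown_id`-style helper is a retry loop around the INSERT, relying on a unique constraint; a sketch assuming a `hometowns(id, name)` table with `name` UNIQUE:

```sql
-- Pre-9.5 get-or-create pattern: SELECT, then INSERT, retrying on
-- unique_violation when another session wins the race.
CREATE OR REPLACE FUNCTION select_hometown_id(p_name text) RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    LOOP
        SELECT id INTO v_id FROM hometowns WHERE name = p_name;
        IF FOUND THEN
            RETURN v_id;
        END IF;
        BEGIN
            INSERT INTO hometowns(name) VALUES (p_name) RETURNING id INTO v_id;
            RETURN v_id;
        EXCEPTION WHEN unique_violation THEN
            -- Another session inserted the same name first; loop and re-select.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Note this only serializes the race for one name; two transactions creating several names in different orders can still deadlock, which is the concern raised above.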
>> https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing
> Note: due to an error in the dump script, if you are in a Linux/Unix
> environment, use this command for uncompressing the file:
>
> bzip2 -d -c comment_test.dump.bz2 |sed -e '12d' > comment_test.dump
Hi,
I've played a
I have written a test program using postgres that creates a string with a
surrogate pair. I then insert that string into a varchar property in a
table.
I then execute a select statement to pull the string out. But when I
evaluate the string, the lead char of the pair is correct, but the following
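Surrogate pairs are a UTF-16 client-side representation; in a UTF-8 database a character above U+FFFF is stored as a single character of four bytes. A hedged server-side check (expected values assume a UTF-8 database and `standard_conforming_strings = on`):

```sql
-- U+1F600 is outside the BMP, so a UTF-16 client sees it as a surrogate
-- pair, while the server stores it as one character:
SELECT U&'\+01F600'              AS ch,
       char_length(U&'\+01F600')  AS chars,   -- expect 1
       octet_length(U&'\+01F600') AS bytes;   -- expect 4 in UTF-8
```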
Nathan,
I have to verify a few things first, but it might be possible to proceed as you
suggest. I will also dig a bit into Paul's suggestion on geohashing. I will get
back to you once it's in place.
Thanks all
Daniel
On 01/16/2015 11:22 AM, Enrico Bianchi wrote:
https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing
Note: due to an error in the dump script, if you are in a Linux/Unix
environment, use this command for uncompressing the file:
bzip2 -d -c comment_test.dump.bz2 |sed -e '12d' > c
On 01/16/2015 09:58 AM, Enrico Bianchi wrote:
I've asked permission for these data
I've obtained permission; here is a subset of the data large
enough to replicate the problem (note: you can simply run the query
without the WHERE clause):
https://drive.google.com/file/d/0ByfjZX4Tabh
On 01/16/2015 01:19 AM, John R Pierce wrote:
you didn't do EXPLAIN ANALYZE, so your query plan statistics are all
estimates.
I know, but the EXPLAIN ANALYZE has the same problem as the query
Enrico
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your s
On 01/16/2015 02:18 AM, Tom Lane wrote:
Can we see the map?
This is the log when executing the query with a subset of data:
< 2015-01-16 08:47:43.517 GMT >DEBUG: StartTransactionCommand
< 2015-01-16 08:47:43.517 GMT >DEBUG: StartTransaction
< 2015-01-16 08:47:43.517 GMT >DEBUG: name: unnamed
Adrian,
On Thu, Jan 15, 2015 at 9:09 PM, Adrian Klaver
wrote:
> On 01/15/2015 07:59 PM, Pawel Veselov wrote:
>
>> Hi.
>>
>> I'm trying to cast hstore to json, but I don't seem to be getting a json
>> object from hstore key/value pairs.
>>
>>
[skipped]
> db=> SELECT to_json('"a"=>1, "b"=>2'::hs
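The likely gotcha here: before the hstore-to-json cast existed, `to_json()` on an hstore value serialized its *text representation* as one JSON string rather than building an object. `hstore_to_json()` (shipped with PostgreSQL 9.3's hstore module) produces a real object; a hedged sketch, assuming the hstore extension is installed:

```sql
-- hstore_to_json() builds a real JSON object; note the values come out as
-- JSON strings, since hstore values are untyped text.
SELECT hstore_to_json('"a"=>1, "b"=>2'::hstore);
-- e.g. {"a": "1", "b": "2"}
```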
Please let me make one more guess ^^
Third guess: you are using topology (nodes are indexed by node_id).
- If this is the case, you could use PostGIS topology.
- The gain is that with this topology model, you store *shared linestrings*,
and not shared points.
More seriously, from what you say it
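A minimal sketch of what the PostGIS topology suggestion would involve; the topology name and SRID are illustrative:

```sql
-- Requires the postgis_topology extension.
CREATE EXTENSION IF NOT EXISTS postgis_topology;
SELECT topology.CreateTopology('nodes_topo', 4326);
-- Polygons are then decomposed into shared edges and nodes (e.g. via
-- topology.TopoGeo_AddPolygon), so a boundary shared by two polygons
-- is stored once instead of twice.
```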
22 matches