Ok I needed a ::timestamptz at time zone 'UTC' and a >= :)
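A sketch of the kind of predicate that fix implies (column names taken from the thread's table; the exact literal and the AT TIME ZONE behavior depend on your session time zone, so treat this as illustrative only):

```sql
-- hedged sketch: interpret the boundary as UTC and use >= instead of equality
SELECT sensor_id, MAX(ts), date_trunc('day', ts),
       COALESCE(MAX(value), 'NaN')::float
FROM sensor_values_days
WHERE ts >= '2017-10-17'::timestamptz AT TIME ZONE 'UTC'
GROUP BY sensor_id, date_trunc('day', ts);
```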
On 17 October 2017 at 22:29, Glenn Pierce wrote:
> Hi so I have a simple table as
>
> \d sensor_values_days;
> Table "public.sensor_values_days"
> Column | Type | Modifiers
> ---
On Tue, Oct 17, 2017 at 2:29 PM, Glenn Pierce wrote:
> and I have a simple query that fails
>
This is not failure, this is a query that found zero matching records.
>
> Ie
>
> SELECT sensor_id, MAX(ts), date_trunc('day', ts), COALESCE(MAX(value),
> 'NaN')::float FROM sensor_values_days WHERE
Hi so I have a simple table as
\d sensor_values_days;
          Table "public.sensor_values_days"
 Column |           Type           | Modifiers
--------+--------------------------+-----------
 ts     | timestamp with time zone | not null
 value  | d
Here is the dumb part 😱 It turns out what caused my confusion was that I had an
identical table in another schema, and I was using different sessions with
different search_path settings, which gave different results.
Thank you for helping me on this.
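For anyone who hits the same confusion: a quick way to see which table a given session actually resolves is a regclass cast (sketch; run it in each session you are comparing):

```sql
SHOW search_path;
SELECT 'sensor_values_days'::regclass::oid;
-- the oid differs between sessions if they resolve tables in different schemas
```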
On Mon, Feb 6, 2017, 7:03 PM Vitaly Burovoy
wrote:
> On 2/6/17, Aron P
On 2/6/17, Aron Podrigal wrote:
> In general, I do not understand why a PK index should not be used when the
> query can be satisfied by the index itself. Can anyone give some reason to
> this?
>
> On Mon, Feb 6, 2017, 6:29 PM Aron Podrigal
> wrote:
>
>> EXPLAIN ANALYZE does not tell me much. It
On Mon, Feb 6, 2017 at 4:16 PM, Podrigal, Aron
wrote:
> Hi,
>
> I noticed when I do a simple SELECT id FROM mytable WHERE id =
> 'cb81d070-4213-465f-b32e-b8db43b83a25'::UUID Postgres does not use the
> primary key index and opts for a Seq Scan.
>
> I of course did VACUUM ANALYZE and I have reset
After resetting all statistics it still opts for a Seq Scan. I went ahead
and tested by creating another table and querying that, and on that
test table it uses the index. So I wonder if there is anything else
that may affect the planner. Is there a way I can dig into this and see the
s
On 2/6/17, Podrigal, Aron wrote:
> Hi,
>
> I noticed when I do a simple SELECT id FROM mytable WHERE id =
> 'cb81d070-4213-465f-b32e-b8db43b83a25'::UUID Postgres does not use the
> primary key index and opts for a Seq Scan.
>
> I of course did VACUUM ANALYZE and I have reset statistics But no sig
In general, I do not understand why a PK index should not be used when the
query can be satisfied by the index itself. Can anyone give some reason to
this?
On Mon, Feb 6, 2017, 6:29 PM Aron Podrigal wrote:
> EXPLAIN ANALYZE does not tell me much. It doesn't say why the planner opts
> for not usi
EXPLAIN ANALYZE does not tell me much. It doesn't say why the planner opts
for not using the Primary key index.
On Mon, Feb 6, 2017, 6:23 PM Alban Hertroys wrote:
>
> > On 7 Feb 2017, at 0:16, Podrigal, Aron wrote:
> >
> > Hi,
> >
> > I noticed when I do a simple SELECT id FROM mytable WHERE id
> On 7 Feb 2017, at 0:16, Podrigal, Aron wrote:
>
> Hi,
>
> I noticed when I do a simple SELECT id FROM mytable WHERE id =
> 'cb81d070-4213-465f-b32e-b8db43b83a25'::UUID Postgres does not use the
> primary key index and opts for a Seq Scan.
>
> I of course did VACUUM ANALYZE and I have rese
Hi,
I noticed when I do a simple SELECT id FROM mytable WHERE id =
'cb81d070-4213-465f-b32e-b8db43b83a25'::UUID Postgres does not use the
primary key index and opts for a Seq Scan.
I of course did VACUUM ANALYZE and I have reset statistics But no sign. Is
there any particular thing I should be l
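One way to probe a planner decision like this is to temporarily forbid sequential scans (a diagnostic sketch only; `enable_seqscan = off` is never a fix):

```sql
-- if the plan switches to an Index Scan with seqscan disabled, the seq scan
-- was a cost decision; if it still seq-scans, the index is unusable for this
-- query (or, as it turned out in this thread, you are looking at a different
-- table than you think)
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT id FROM mytable WHERE id = 'cb81d070-4213-465f-b32e-b8db43b83a25'::uuid;
RESET enable_seqscan;
```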
On Thu, Oct 15, 2015 at 1:38 PM, Emi wrote:
> Hello,
>
> For PostgreSQL 8.3, is there a simple way to load an XML file into a table?
>
> E.g.,
>
>
> True
> test1
> e1
>
>
> false
> test2
>
>
> Results:
> t1 (c1 text, c2 text, c3 text):
>
> c1| c2 | c3
> ---
On 10/15/2015 11:38 AM, Emi wrote:
Hello,
For PostgreSQL 8.3, is there a simple way to load an XML file into a table?
E.g.,
True
test1
e1
false
test2
Results:
t1 (c1 text, c2 text, c3 text):
c1| c2 | c3
-
true| test1 | e
Hello,
For PostgreSQL 8.3, is there a simple way to load an XML file into a table?
E.g.,
True
test1
e1
false
test2
Results:
t1 (c1 text, c2 text, c3 text):
 c1    | c2    | c3
-------+-------+------
 true  | test1 | e1
 false | test2 | null
..
Tha
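One server-side approach along these lines might use xpath() (a hedged sketch: the element names row/c1/c2/c3 and the file path are assumptions, since the sample XML was lost in the archive; xpath() needs a server built with libxml, unnest() needs 8.4+, and pg_read_file() is superuser-only, so on 8.3 you may have to load the file client-side instead):

```sql
CREATE TABLE t1 (c1 text, c2 text, c3 text);

INSERT INTO t1 (c1, c2, c3)
SELECT (xpath('//c1/text()', elem))[1]::text,
       (xpath('//c2/text()', elem))[1]::text,
       (xpath('//c3/text()', elem))[1]::text
FROM unnest(xpath('//row', pg_read_file('data.xml')::xml)) AS t(elem);
```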
On 1/19/2015 4:58 PM, Robert DiFalco wrote:
Hometowns get selected and possibly inserted in unpredictable ways
even from multiple concurrent sessions. The only way I could figure
out how to solve it was to force each INSERT hometowns to be in its
own transaction.
So you fixed it - good. In ou
I don't think an advisory lock would remove the deadlock.
On Sun, Jan 18, 2015 at 10:33 PM, Roxanne Reid-Bennett
wrote:
> On 1/16/2015 2:41 AM, Jim Nasby wrote:
>
>> On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
>>
>>>
>>> try this: (if you still get deadlocks, uncomment the advisory lock
>>
Hometowns get selected and possibly inserted in unpredictable ways even
from multiple concurrent sessions. The only way I could figure out how to
solve it was to force each INSERT hometowns to be in its own transaction.
On Mon, Jan 19, 2015 at 1:56 PM, Robert DiFalco
wrote:
> I don't think an ad
On 1/16/2015 2:41 AM, Jim Nasby wrote:
On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
try this: (if you still get deadlocks, uncomment the advisory lock
[thanks Daniel] and try again)
Logically I suppose it might run faster to do the select, then insert
"if". I almost always write these a
Is there a way to force a new private transaction in a FUNCTION? That seems
like it would be a good solution here because I could simply do the insert
in the RAISE within its own private transaction. Then on the next iteration
of the loop (as long as I don't have repeatable reads) it should be pick
Robert DiFalco wrote:
> I must be doing something wrong because both of these approaches are giving
> me deadlock exceptions.
Deadlocks are to be expected if the INSERTs are batched within a single
transaction and there are several sessions doing this in parallel.
Given that there's an u
The code shown in the Doc (I think) will still give you deadlock in the
case where you have two sessions concurrently trying to insert the same
'hometown'. For example:
INSERT INTO users VALUES('Tom', select_hometown_id('Portland, OR'));
INSERT INTO users VALUES('Waits', select_hometown_id('P
On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
try this: (if you still get deadlocks, uncomment the advisory lock [thanks
Daniel] and try again)
Logically I suppose it might run faster to do the select, then insert "if". I
almost always write these as insert first - because it's the more r
On 1/15/2015 6:12 PM, Robert DiFalco wrote:
FWIW I was using the select_hometown_id FUNCTION like this:
INSERT INTO users(...) values(..., select_hometown_id('Portland, OR'));
try this: (if you still get deadlocks, uncomment the advisory lock
[thanks Daniel] and try again)
Logically I suppos
On 1/14/15 8:28 AM, Daniel Verite wrote:
Roxanne Reid-Bennett wrote:
>When you have a sequence of steps that need to be serialized across
>processes, choose or even create a table to use for locking
This can also be done with an advisory lock, presumably faster:
http://www.postgresql.o
FWIW I was using the select_hometown_id FUNCTION like this:
INSERT INTO users(...) values(..., select_hometown_id('Portland, OR'));
On Tue, Jan 13, 2015 at 2:21 PM, Brian Dunavant wrote:
> The loop to run it twice handles that yes. I don't think that buys
> you anything over a more tradition
I must be doing something wrong because both of these approaches are giving
me deadlock exceptions.
On Tue, Jan 13, 2015 at 2:21 PM, Brian Dunavant wrote:
> The loop to run it twice handles that yes. I don't think that buys
> you anything over a more traditional non-cte method though. I'd run
Roxanne Reid-Bennett wrote:
> When you have a sequence of steps that need to be serialized across
> processes, choose or even create a table to use for locking
This can also be done with an advisory lock, presumably faster:
http://www.postgresql.org/docs/current/static/explicit-locking.ht
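The advisory-lock idea might be sketched like this (assumes 9.1+ for the transaction-scoped variant; hashtext() is an internal but real function used here to key the lock on the name):

```sql
-- serialize concurrent sessions working on the same hometown name:
-- the lock is released automatically at commit or rollback
SELECT pg_advisory_xact_lock(hashtext('Portland, OR'));
-- ... then do the SELECT, and INSERT if not found, without racing ...
```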
I don't like loops to catch "failure" condition... can you possibly fail
to stop?
In a stored procedure (or with auto-commit turned off in any
transaction)... You can avoid any race condition by using a semaphore
(e.g. you lock "something" for the duration of the critical part of your
process
John McKown wrote:
On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco wrote:
Let's say I have two tables like this (I'm leaving stuff out for
simplicity):
CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
id INTE
Good points. I guess my feeling is that if there can be a race condition on
INSERT then the CTE version is not truly atomic, hence the LOOP.
On Tue, Jan 13, 2015 at 3:11 PM, Brian Dunavant wrote:
> A very good point, but it does not apply as here (and in my article)
> we are not using updates, o
A very good point, but it does not apply as here (and in my article)
we are not using updates, only insert and select.
On Tue, Jan 13, 2015 at 6:03 PM, Thomas Kellerer wrote:
> Brian Dunavant wrote on 13.01.2015 22:33:
>>
>> What issue are you having? I'd imagine you have a race condition on
>
Brian Dunavant wrote on 13.01.2015 22:33:
What issue are you having? I'd imagine you have a race condition on
the insert into hometowns, but you'd have that same race condition in
your app code using a more traditional 3 query version as well.
I often use CTEs like this to make things atomic.
The loop to run it twice handles that yes. I don't think that buys
you anything over a more traditional non-cte method though. I'd run
them a few thousand times to see if there's any difference in runtimes
but my guess is the CTE version would be slightly slower here.
v_id integer;
BEGIN;
sele
This seems to get rid of the INSERT race condition.
CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name VARCHAR)
RETURNS INTEGER AS $$
DECLARE hometown_id INTEGER;
BEGIN
LOOP
BEGIN
WITH sel AS (
SELECT id FROM hometowns WHERE name = hometown_name
), ins AS (
With the single CTE I don't believe you can do a full upsert loop. If
you're doing this inside of a postgres function, your changes are
already atomic, so I don't believe by switching you are buying
yourself much (if anything) by using a CTE query instead of something
more traditional here.
The a
Well, traditionally I would create a LOOP where I tried the SELECT, if
there was nothing I did the INSERT, if that raised an exception I would
repeat the LOOP.
What's the best way to do it with the CTE? Currently I have the following
which gives me Duplicate Key Exceptions when two sessions try to
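The traditional loop described above might look like this (a sketch using names from the thread; unique_violation is the error the UNIQUE(name) constraint raises):

```sql
CREATE OR REPLACE FUNCTION select_hometown_id(p_name varchar)
RETURNS integer AS $$
DECLARE
  v_id integer;
BEGIN
  LOOP
    -- try the cheap read first
    SELECT id INTO v_id FROM hometowns WHERE name = p_name;
    IF FOUND THEN
      RETURN v_id;
    END IF;
    BEGIN
      INSERT INTO hometowns (name) VALUES (p_name) RETURNING id INTO v_id;
      RETURN v_id;
    EXCEPTION WHEN unique_violation THEN
      NULL;  -- another session inserted it between our SELECT and INSERT; retry
    END;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```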
What issue are you having? I'd imagine you have a race condition on
the insert into hometowns, but you'd have that same race condition in
your app code using a more traditional 3 query version as well.
I often use CTEs like this to make things atomic. It allows me to
remove transactional code ou
This CTE approach doesn't appear to play well with multiple concurrent
transactions/connections.
On Tue, Jan 13, 2015 at 10:05 AM, John McKown
wrote:
> On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco wrote:
>
>> Thanks John. I've been seeing a lot of examples like this lately. Does
>> the fo
On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco
wrote:
> Thanks John. I've been seeing a lot of examples like this lately. Does the
> following approach have any advantages over traditional approaches?
>
>
> WITH sel AS (
> SELECT id FROM hometowns WHERE name = 'Portland'
> ), ins AS (
>
Thanks John. I've been seeing a lot of examples like this lately. Does the
following approach have any advantages over traditional approaches?
WITH sel AS (
SELECT id FROM hometowns WHERE name = 'Portland'
), ins AS (
INSERT INTO hometowns(name)
SELECT 'Portland'
WHERE NOT EXISTS (SE
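For reference, the full shape of that CTE (a sketch of the pattern under discussion; as the thread notes, two sessions can still race between the NOT EXISTS check and the INSERT, raising unique_violation):

```sql
WITH sel AS (
  SELECT id FROM hometowns WHERE name = 'Portland'
), ins AS (
  INSERT INTO hometowns (name)
  SELECT 'Portland'
  WHERE NOT EXISTS (SELECT 1 FROM hometowns WHERE name = 'Portland')
  RETURNING id
)
SELECT id FROM sel
UNION ALL
SELECT id FROM ins;
```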
On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco
wrote:
> Let's say I have two tables like this (I'm leaving stuff out for
> simplicity):
>
> CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
> CREATE TABLE hometowns (
> id INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
> name VARCHAR,
>
Let's say I have two tables like this (I'm leaving stuff out for
simplicity):
CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
id INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
name VARCHAR,
PRIMARY KEY (id),
UNIQUE(name)
);
CREATE SEQUENCE USER_SEQ_GEN START 1
Ok, thanks. I'll keep that in mind.
On Tue, Apr 1, 2014 at 7:45 PM, Andrew Sullivan wrote:
> On Tue, Apr 01, 2014 at 07:00:16PM -0400, Tom Lane wrote:
>
> > one of the clients, in a way that isn't visible to the deadlock detector.
> > One way for that to happen without any external interconnec
On Tue, Apr 01, 2014 at 07:00:16PM -0400, Tom Lane wrote:
> one of the clients, in a way that isn't visible to the deadlock detector.
> One way for that to happen without any external interconnections is if the
> client is waiting for a NOTIFY that will never arrive because the would-be
> sender i
Andrew Sullivan writes:
> Probably you could have killed one of the queries. But it sounds like
> what's happening is that you have multiple queries that are all trying
> to update the same rows in a different order. It may be that none of
> these is strictly deadlocked, in that no query is wait
Andrew Sullivan wrote:
> On Tue, Apr 01, 2014 at 01:37:17PM -0700, Si Chen wrote:
>> You are right. That was the problem. I tried the query from
>> http://wiki.postgresql.org/wiki/Lock_Monitoring and found a COMMIT
>> transaction that was blocking it.
>>
>> I restarted postgresql again, and (it
On Tue, Apr 01, 2014 at 01:37:17PM -0700, Si Chen wrote:
> You are right. That was the problem. I tried the query from
> http://wiki.postgresql.org/wiki/Lock_Monitoring and found a COMMIT
> transaction that was blocking it.
>
> I restarted postgresql again, and (it seems) everything went back to
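The wiki's lock-monitoring query boils down to something like this (a simplified sketch; the real page matches more lock fields and joins pg_stat_activity for the query text):

```sql
-- pair each waiting (not-granted) lock with a granted lock on the same object
SELECT blocked.pid  AS blocked_pid,
       blocking.pid AS blocking_pid
FROM pg_locks blocked
JOIN pg_locks blocking
  ON  blocking.locktype = blocked.locktype
  AND blocking.relation      IS NOT DISTINCT FROM blocked.relation
  AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
  AND blocking.pid <> blocked.pid
WHERE NOT blocked.granted
  AND blocking.granted;
```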
On Tue, Apr 1, 2014 at 1:10 PM, Igor Neyman wrote:
>
>
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
> Sent: Tuesday, April 01, 2014 3:51 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] simple update
hey both be trying to update the same row, resulting in a deadlock?
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Si Chen
> *Sent:* Tuesday, April 01, 2014 3:51 PM
> *To:* pgsql-general@postgresql.org
> *Sub
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Tuesday, April 01, 2014 3:51 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] simple update query stuck
Hello,
I'm using postgresql 9.0.13, and I have a simple query
Do these queries update more than one row? I ran into a similar issue
a year ago, where two multi-row updates would deadlock because they
processed rows in a different order. I'd love to see UPDATE support
ORDER BY to fix this, but it doesn't yet. (If I ever try contributing
to Postgres, this is a
Hello,
I'm using postgresql 9.0.13, and I have a simple query that seems to be
stuck. I did a
postgres=# select procpid, query_start, waiting, current_query from
pg_stat_activity;
 procpid |     query_start     | waiting | current_query
---------+---------------------+---------+---------------
 32605   | 2014-04-01 12:39:
On Wed, Feb 12, 2014 at 12:17 AM, AlexK wrote:
> I would like to give my users the ability to invoke read-only functions and
> select statements, so that they can easily see the data. Both me and the
> users have experience mostly with SQL Server, so anyone can keep like 30
> connections without
2014-02-11 17:17 GMT+01:00 AlexK :
> I would like to give my users the ability to invoke read-only functions and
> select statements, so that they can easily see the data. Both me and the
> users have experience mostly with SQL Server, so anyone can keep like 30
> connections without much thinking
On 02/11/2014 08:17 AM, AlexK wrote:
I would like to give my users the ability to invoke read-only functions and
select statements, so that they can easily see the data. Both me and the
users have experience mostly with SQL Server, so anyone can keep like 30
connections without much thinking.
T
I would like to give my users the ability to invoke read-only functions and
select statements, so that they can easily see the data. Both me and the
users have experience mostly with SQL Server, so anyone can keep like 30
connections without much thinking.
Since too many open connections seems to
On Tue, Nov 12, 2013 at 2:47 PM, Tom Lane wrote:
> > That's right, we store 90 days and roll up data older than that into a
> > different table.
>
> Ah-hah. The default statistics target is 100, so indeed ANALYZE is going
> to be able to fit every date entry in the table into the
> most-common-v
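To check whether the MCV list really covers every date, and to widen the target for that column (sketch; table and column names from the thread):

```sql
-- how many distinct values vs. how many made it into the MCV list
SELECT n_distinct,
       array_length(most_common_vals::text::text[], 1) AS n_mcv
FROM pg_stats
WHERE tablename = 'hits_user_daily_count' AND attname = 'date';

-- raise the per-column statistics target above the default of 100
ALTER TABLE hits_user_daily_count ALTER COLUMN date SET STATISTICS 500;
ANALYZE hits_user_daily_count;
```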
Kevin Goess writes:
>> Now, the only way to get to a zero selectivity estimate for var = const
>> is if the planner believes that the pg_stats most-common-values list
>> for the column is complete, and the constant is nowhere in the list.
>> So one plausible explanation for the change in behavior
Thanks for the reply!
Your analysis matches everything I see here, so what you say is probably
the case. As to why it changed for us with the 9.0 => 9.2 upgrade, I also
don't know--the change was pretty dramatic though. Since we've compensated
for it, and since you say the current behavior is act
Kevin Goess writes:
> We noticed a big change after upgrading from 9.0 to 9.2. For
> *yesterday's* date, the query plan is fine, like you'd expect
> articles_1=> explain (analyze, buffers) select 1
> from hits_user_daily_count
> where userid = 1590185 and date = '2013-07-30';
> QU
Given this table
articles_1=> \d hits_user_daily_count;
  Table "public.hits_user_daily_count"
 Column |  Type   | Modifiers
--------+---------+-----------
 userid | integer | not null
 date   | date    | not null
 num
On Mon, May 6, 2013 at 11:19 AM, Carlo Stonebanks
wrote:
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jasen Betts
> Sent: May 4, 2013 7:44 PM
> To: pgsql-general@postgresql.org
> Subject:
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jasen Betts
Sent: May 4, 2013 7:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
On 2013-05-01, Carlo
On 2013-05-01, Carlo Stonebanks wrote:
> There are no client poolers (unless pgtcl has one I don't know about) so
> this is unlikely.
>
> The trigger is an interesting idea to try if it happens again - I can't keep
> it for long as it is for a massive cache (used to deflect calls to a web
> servi
You can use a free tool - Valentina Studio
http://www.valentina-db.com/en/valentina-studio-overview On 14 Feb 2013 the
5.0 version added support for PostgreSQL, as well as MySQL/MariaDB and SQLite.
It is FREE. Works on Mac, Win and Linux. Includes not only db management but
powerful reports that work ag
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: May 1, 2013 11:37 AM
To: Carlo Stonebanks
Cc: Steven Schlansker; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
On Wed, May 1, 2013 at 7:16 AM, Carlo Stonebank
On May 1, 2013, at 9:36 AM, "Carlo Stonebanks"
wrote:
> I have to ask myself, is it more likely that I have discovered some PG
> anomaly in 9.0 that no one has ever noticed, or that the client has
> accidentally launched the process twice and doesn't know it?
Given my (admittedly limited) exper
, 2013 11:37 AM
To: Carlo Stonebanks
Cc: Steven Schlansker; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
On Wed, May 1, 2013 at 7:16 AM, Carlo Stonebanks
wrote:
> Very good to know, Steve. We're on 9.0 right now but I will
> i
On Wed, May 1, 2013 at 7:16 AM, Carlo Stonebanks
wrote:
> Very good to know, Steve. We're on 9.0 right now but I will investigate as
> all the work is for unattended automatic processes which are continuously
> streaming data from multiple resources and need to resolve these collisions
> by themse
al-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steven Schlansker
Sent: April 30, 2013 7:10 PM
To: Carlo Stonebanks
Cc: 'Tom Lane'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Simple SQL INSERT to avoid duplication failed: why?
On Apr 30, 2013, at
"Carlo Stonebanks" writes:
> The only way I can see this happening is that an
> acache_mdx_logic_address_validation sneaks in before the insert and after
> the NOT EXISTS... SELECT. And for that to occur, the client must be mistaken
> and something else MUST be running and inserting into
> acache_
On Apr 30, 2013, at 4:00 PM, "Carlo Stonebanks"
wrote:
> Hi Tom,
>
>>> There's nothing obviously wrong with that, which means the issue is in
> something you didn't show us. Care to assemble a self-contained example?
> <<
>
> Unfortunately, it happens erratically and very, very rarely so I c
Hi Tom,
>> There's nothing obviously wrong with that, which means the issue is in
something you didn't show us. Care to assemble a self-contained example?
<<
Unfortunately, it happens erratically and very, very rarely so I can't give
you something that will fail. I expected an occasional failure
"Carlo Stonebanks" writes:
> Ok, I tried to be clever and I wrote code to avoid inserting duplicate data.
> The calling function has a try-catch to recover from this, but I am curious
> as to why it failed:
There's nothing obviously wrong with that, which means the issue is in
something you didn'
Ok, I tried to be clever and I wrote code to avoid inserting duplicate data.
The calling function has a try-catch to recover from this, but I am curious
as to why it failed:
INSERT INTO
mdx_lib.acache_mdx_logic_address_validation
(
address,
postal_code,
address_id
Hi Jose,
How much ram is in the server? Can you also post the following values from
postgres config:
work_mem
shared_buffers
Frequently when those are incorrectly configured, it can lead to significant
performance issues (like 40 second queries that should be sub-second). Out of
the box, Post
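A quick way to pull both values in one go (sketch):

```sql
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers');
```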
Thanks for your responses. Sorry, I forgot to mention that the query
actually takes 46 seconds despite what EXPLAIN ANALYZE reports (I don't quite
understand the output of explain). We did perform a vacuum last Friday and it
seems to help, but not by much. We'll also try recreating the indices.
Here's the output
On 2012-12-22 13:06:21 +0100, Alban Hertroys wrote:
> > and here's my query
> >
> > select * from ticket
> > where created between '2012-12-19 00:00:00' and '2012-12-20 00:00:00'
> >
> > This was working fine until the number of records started to grow (about 5
> > million) and now it's taking fo
> and here's my query
>
> select * from ticket
> where created between '2012-12-19 00:00:00' and '2012-12-20 00:00:00'
>
> This was working fine until the number of records started to grow (about 5
> million) and now it's taking forever to return.
>
> Explain analyze reveals this:
>
> "Inde
Hello,
I have the following table and indices defined:
CREATE TABLE ticket
(
wid bigint NOT NULL DEFAULT nextval('tickets_id_seq'::regclass),
eid bigint,
created timestamp with time zone NOT NULL DEFAULT now(),
status integer NOT NULL DEFAULT 0,
argsxml text,
moduleid character varyin
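If there is no index on "created", that range predicate forces a full scan of all ~5 million rows; a plain btree index is the usual first step (a sketch, assuming the index name is free to choose; if the index already exists, the problem lies elsewhere, e.g. stale statistics or bloat):

```sql
CREATE INDEX ticket_created_idx ON ticket (created);

-- then re-check the plan:
EXPLAIN ANALYZE
SELECT * FROM ticket
WHERE created BETWEEN '2012-12-19 00:00:00' AND '2012-12-20 00:00:00';
```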
On Tue, Nov 20, 2012 at 1:25 PM, Dmitry Koterov wrote:
> Hello.
>
> Sometimes I see a strange slowdown on my PG 9.1 server: it looks like the
> simplest queries which typically take 1ms or less (e.g. selection of a row
> by its primary key) take 300ms or even more. It is related to all queries
> w
Hello.
Sometimes I see a strange slowdown on my PG 9.1 server: it looks like the
simplest queries which typically take 1ms or less (e.g. selection of a row
by its primary key) take 300ms or even more. It is related to all queries
within the connection, not the single one: once upon a time all fast
select trim(regexp_replace('123456', '...', '\& ', 'g'));
If you don't care about the trailing space, remove the trim().
The replacement string may need to be escaped differently depending
on your PG version and setting for standard_conforming_strings. For
example: E'\\& '
After combined
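Applied to the example from the question (using the E'' form, which is needed when standard_conforming_strings is off; `\&` in the replacement re-inserts the whole matched substring):

```sql
SELECT trim(regexp_replace('abcdefgh', '...', E'\\& ', 'g'));
-- each group of three characters is re-emitted followed by a space,
-- the trailing 'gh' is too short to match and passes through:  abc def gh
```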
Emi Lu wrote:
> Good morning,
>
> Is there a simple method in psql to format a string?
>
> For example, adding a space to every three consecutive letters:
>
> abcdefgh -> abc def gh
Depending on how you want 'extras' handled, you could work from
something like this:
select trim(regexp_repl
On 6/20/2012 12:59 PM, Scott Marlowe wrote:
This pretty much calls for a user defined function. Plpgsql should
work, but if you're more comfy in perl or tcl there's pl/tcl and
pl/perl you can try as well.
On Wed, Jun 20, 2012 at 8:43 AM, Emi Lu wrote:
Good morning,
Is there a simple method i
This pretty much calls for a user defined function. Plpgsql should
work, but if you're more comfy in perl or tcl there's pl/tcl and
pl/perl you can try as well.
On Wed, Jun 20, 2012 at 8:43 AM, Emi Lu wrote:
> Good morning,
>
> Is there a simple method in psql to format a string?
>
> For example
On Jun 20, 2012, at 7:43 AM, Emi Lu wrote:
> Good morning,
>
> Is there a simple method in psql to format a string?
>
> For example, adding a space to every three consecutive letters:
>
> abcdefgh -> abc def gh
>
> Thanks a lot!
> Emi
I'm unaware of such a function (it seems like a generic f
Good morning,
Is there a simple method in psql to format a string?
For example, adding a space to every three consecutive letters:
abcdefgh -> abc def gh
Thanks a lot!
Emi
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.p
Aha, generate_series, I got it. Thank you very much!!
I also tried left join, it seems that left join explain analyze returns
faster comparing with except:
select num as missing
from generate_series(5000, 22323) t(num)
left join t1 on (t.num = t1.id)
where t1.id is null
limit 10;
Emi
On
On 24 April 2012 16:15, Emi Lu wrote:
> Good morning,
>
> Is there a simple SQL command that could return missing
> numbers, please?
>
> For example,
>
> t1(id integer)
>
> values= 1, 2, 3 500
>
> select miss_num(id)
> from t1 ;
>
>
> Will return:
> ===
> 37, 800,
On Tue, Apr 24, 2012 at 10:15:26AM -0400, Emi Lu wrote:
> Is there a simple SQL command that could return missing
> numbers, please?
> For example,
> t1(id integer)
> values= 1, 2, 3 500
> select miss_num(id)
> from t1 ;
select generate_series( (select min(id) from t1), (sele
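Putting the two suggestions together, since both appear truncated in the archive (a sketch; the generate_series bounds come from the table itself, as in the reply above):

```sql
-- numbers in the id range that have no matching row in t1
SELECT num AS missing
FROM generate_series((SELECT min(id) FROM t1),
                     (SELECT max(id) FROM t1)) AS t(num)
EXCEPT
SELECT id FROM t1
ORDER BY missing;
```

The LEFT JOIN form Emi posted later in the thread is equivalent; she reports EXPLAIN ANALYZE favored the join on her data.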
Good morning,
Is there a simple SQL command that could return missing
numbers, please?
For example,
t1(id integer)
values= 1, 2, 3 500
select miss_num(id)
from t1 ;
Will return:
===
37, 800, 8001
Thanks a lot!
Emi
On 11/07/11 3:30 PM, Edson Richter wrote:
Thanks for the fast answer. Is there any way to avoid that? I mean, in
MS SQL Server, I do have "with no_lock" (that produces dirty reads)?
Or the way to go is the transaction isolation level?
there's no actual overhead in a single statement read trans
On Mon, Nov 7, 2011 at 3:30 PM, Edson Richter wrote:
> Thanks for the fast answer. Is there any way to avoid that? I mean, in MS
> SQL Server, I do have "with no_lock" (that produces dirty reads)?
> Or the way to go is the transaction isolation level?
The lowest level of Isolation supported by Po
Em 07-11-2011 20:54, John R Pierce escreveu:
On 11/07/11 2:41 PM, Edson Richter wrote:
Does simple SELECT query like
select * from tableX
(without FOR UPDATE) opens an implicit transaction when issued?
opens and closes. If you don't bracket with BEGIN;
COMMIT|ROLLBACK; then ea
On 11/07/11 2:41 PM, Edson Richter wrote:
Does simple SELECT query like
select * from tableX
(without FOR UPDATE) opens an implicit transaction when issued?
opens and closes. If you don't bracket with BEGIN;
COMMIT|ROLLBACK; then each statement is a transaction in and of itself
Does simple SELECT query like
select * from tableX
(without FOR UPDATE) opens an implicit transaction when issued?
Thanks,
--
Edson Carlos Ericksson Richter
Si
On Fri, May 13, 2011 at 2:07 AM, F T wrote:
> Thanks for your ideas.
>
> I have rerun my tests and I agree with Merlin, PostgreSQL is not adapted at
> all to handle wide updates.
>
> Summary :
> The table contains 2 millions rows.
>
> Test 1 :
> UPDATE grille SET inter=0; -> It tooks 10 hours
>
>
Hi,
I would suggest you try one of these options:
0 - create a new index on the "inter" column of the grille table, and in your WHERE
clause try to limit the number of updated rows, instead of 2 million in one
whole transaction; something like: WHERE inter > x AND inter < y;
1 - drop at least t