Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-21 Thread Thom Brown
On 19 July 2017 at 20:12, vstuart  wrote:
> Hi David: I see what you are saying; sorry for the confusion. This is how
> postgres operates on my system:
>
> [victoria@victoria ~]$ echo $HOME
>   /home/victoria
>
> [victoria@victoria ~]$ which postgres
>   /usr/bin/postgres
>
> [victoria@victoria ~]$ postgres
>   postgres does not know where to find the server configuration file.
>   You must specify the --config-file or -D invocation option or set the
> PGDATA environment variable.
>
> [victoria@victoria ~]$ psql
>   psql: FATAL:  database "victoria" does not exist

By default, psql will attempt to connect to a database named after the
operating system user you are connected as.  It will also use that as
the database user name.  As you don't have a database of the same name
as your user account, you will need to specify it:

psql -d <database name> -U <user name>
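Alternatively, if you'd rather that a bare "psql" just works, you could
create a database named after your OS user (this assumes your database
user has the CREATEDB privilege):

createdb victoria

After that, "psql" with no arguments should connect you to the "victoria"
database as the "victoria" user.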

Thom


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread Thom Brown
On 18 July 2017 at 19:02, vstuart  wrote:
> My ~/.psqlrc file is ignored by my PostgreSQL installation (v.9.6.3; Arch
> Linux x86_64 platform).
>
> Suggestions?

Do you get anything with "psql -a"?

If not, what do you get when you use "psql -af ~/.psqlrc" ?

Thom




[GENERAL] Very slow DELETEs with foreign keys

2016-02-08 Thread Thom Brown
Hi,

I've just noticed a general delete performance issue while testing a
patch, and this can be recreated on all recent major versions.

I have 2 tables:

CREATE TABLE countries (
country text PRIMARY KEY,
continent text
);

CREATE TABLE contacts (
id serial PRIMARY KEY,
first_name text,
last_name text,
age integer,
country text REFERENCES countries (country)
);


Here's a sample of the data in the tables:

# SELECT * FROM contacts LIMIT 10;
    id   | first_name | last_name  | age |             country
---------+------------+------------+-----+----------------------------------
 4873919 | Sharon | Blackburn  |  45 | Indonesia
 4873920 | Ila| Merrill|   3 | Zambia
 4873921 | Brian  | Rogers |  85 | Bahamas
 4873922 | Michelle   | Cunningham |  33 | Malta
 4873923 | Garrett| Thompson   |  17 | France
 4873924 | Jemima | Holloway   |  57 | Bahamas
 4873925 | Hector | Walls  |  82 | Kenya
 4873926 | Evangeline | Copeland   |  57 | Isle of Man
 4873927 | Montana| Cline  |   9 | Saint Vincent and The Grenadines
 4873928 | Reece  | Albert |  66 | Virgin Islands, United States
(10 rows)


# SELECT * FROM countries LIMIT 10;
  country   |   continent
------------+---------------
 Albania| Europe
 Algeria| Africa
 Andorra| Europe
 Angola | Africa
 Anguilla   | Caribbean
 Antarctica | Antarctica
 Argentina  | South America
 Armenia| Europe
 Aruba  | Caribbean
 Australia  | Australasia
(10 rows)

"contacts" contains 5 million rows
"countries" contains 498 rows


I then ran:

INSERT INTO countries
SELECT country || '1', continent || '2' FROM countries;

to duplicate all the rows, but with a number appended to the values.

But ran into a problem with:

# DELETE FROM countries WHERE continent LIKE '%2';
^CCancel request sent
ERROR:  canceling statement due to user request
CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."contacts" x
WHERE $1 OPERATOR(pg_catalog.=) "country" FOR KEY SHARE OF x"
Time: 278560.623 ms

As you can see, I ran out of patience, but before I did, I ran a stack trace:

#0  ExecScan (node=node@entry=0x1a97f40,
accessMtd=accessMtd@entry=0x661063 ,
recheckMtd=recheckMtd@entry=0x661050 ) at execScan.c:236
#1  0x0066118b in ExecSeqScan (node=node@entry=0x1a97f40) at
nodeSeqscan.c:127
#2  0x0063ba4a in ExecProcNode (node=node@entry=0x1a97f40) at
execProcnode.c:419
#3  0x00658860 in ExecLockRows (node=node@entry=0x1a97d50) at
nodeLockRows.c:57
#4  0x0063bd67 in ExecProcNode (node=node@entry=0x1a97d50) at
execProcnode.c:527
#5  0x00636363 in ExecutePlan (estate=estate@entry=0x1a97b70,
planstate=0x1a97d50, use_parallel_mode=0 '\000',
operation=operation@entry=CMD_SELECT,
sendTuples=sendTuples@entry=1 '\001',
numberTuples=numberTuples@entry=1,
direction=direction@entry=ForwardScanDirection,
dest=dest@entry=0xde70c0 )
at execMain.c:1566
#6  0x006372ac in standard_ExecutorRun (queryDesc=0x1aadbf0,
direction=ForwardScanDirection, count=1) at execMain.c:338
#7  0x00637350 in ExecutorRun
(queryDesc=queryDesc@entry=0x1aadbf0,
direction=direction@entry=ForwardScanDirection, count=count@entry=1)
at execMain.c:286
#8  0x0066cdcf in _SPI_pquery
(queryDesc=queryDesc@entry=0x1aadbf0,
fire_triggers=fire_triggers@entry=0 '\000', tcount=1) at spi.c:2404
#9  0x0066f7a3 in _SPI_execute_plan
(plan=plan@entry=0x1aad790, paramLI=0x1aadba0,
snapshot=snapshot@entry=0x0,
crosscheck_snapshot=crosscheck_snapshot@entry=0x0,
read_only=read_only@entry=0 '\000',
fire_triggers=fire_triggers@entry=0 '\000', tcount=tcount@entry=1)
at spi.c:2192
#10 0x0066fcc8 in SPI_execute_snapshot
(plan=plan@entry=0x1aad790, Values=Values@entry=0x720c8e80,
Nulls=Nulls@entry=0x720c8e40 " ",
snapshot=snapshot@entry=0x0,
crosscheck_snapshot=crosscheck_snapshot@entry=0x0,
read_only=read_only@entry=0 '\000',
fire_triggers=fire_triggers@entry=0 '\000',
tcount=tcount@entry=1) at spi.c:489
#11 0x00873f1b in ri_PerformCheck
(riinfo=riinfo@entry=0x1a7cb70, qkey=qkey@entry=0x720c9360,
qplan=0x1aad790, fk_rel=fk_rel@entry=0x7fcdcde65480,
pk_rel=pk_rel@entry=0x7fcdcde5ee88,
old_tuple=old_tuple@entry=0x720c9940,
new_tuple=new_tuple@entry=0x0, detectNewRows=detectNewRows@entry=1
'\001',
expect_OK=expect_OK@entry=5) at ri_triggers.c:3142
#12 0x008746e7 in ri_restrict_del (trigdata=,
is_no_action=is_no_action@entry=1 '\001') at ri_triggers.c:773
#13 0x00875b07 in RI_FKey_noaction_del (fcinfo=0x720c9510)
at ri_triggers.c:613
#14 0x006118f0 in ExecCallTriggerFunc
(trigdata=trigdata@entry=0x720c9960, tgindx=tgindx@entry=0,
finfo=finfo@entry=0x1a01630, instr=instr@entry=0x0,
per_tuple_context=per_tuple_context@entry=0x1aa5c50) at trigger.c:1910
#15 0x00612fae in AfterTriggerExecute
(event=event@entry=0x1a7cea0, rel=rel@entry=0x7fcdcde5ee88,
trigdesc=trigdesc@entry=0x1a01490,

Re: [GENERAL] Very slow DELETEs with foreign keys

2016-02-08 Thread Thom Brown
On 8 February 2016 at 14:52, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Thom Brown <t...@linux.com> writes:
>> I've just noticed a general delete performance issue while testing a
>> patch, and this can be recreated on all recent major versions.
>
>> I have 2 tables:
>
>> CREATE TABLE countries (
>> country text PRIMARY KEY,
>> continent text
>> );
>
>> CREATE TABLE contacts (
>> id serial PRIMARY KEY,
>> first_name text,
>> last_name text,
>> age integer,
>> country text REFERENCES countries (country)
>> );
>
> Apparently, you don't have an index on the referencing column.
> That makes insertions into contacts faster, at the cost of making
> deletions from countries much slower.  Since there are cases where
> that's a reasonable tradeoff, we don't prohibit you from omitting
> the index ... but it is a pretty standard foot-gun.

Yeah, there's no index on contacts.country, and that would certainly
make the query acceptably quick, but I'm asking whether the
non-indexed scenario is going about things the most efficient way,
given what it has available to it.
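For anyone hitting this in practice rather than benchmarking it, the
standard remedy is simply to index the referencing column:

CREATE INDEX ON contacts (country);

With that in place, each per-row foreign key check becomes an index lookup
rather than a sequential scan over the 5 million "contacts" rows.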

Thom




Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Thom Brown
On 20 January 2016 at 12:15, Sachin Srivastava  wrote:
> I am unable to find out the syntax error in below code, please suggest?
>
>
>
> ERROR:  syntax error at or near "select"
> LINE 44: select Count(0) into sFound  from budget_period ...
>  ^
> ** Error **
> ERROR: syntax error at or near "select"
> SQL state: 42601
> Character: 1190
>
> Code as below:
> -
>
> select Count(0) into sFound  from budget_period t where t.subscriber_id
> =subID
> and t.period_number = period and
> t.language_id=langCursor.Language_Id;
> if(sFound = 0)then
> insert into budget_period (subscriber_id, company_id,
> period_number, period_name,
> period_length_code, first_day, last_day,creation_date,
> creation_user, update_date, update_user, language_id)
> values(subID, compID, period,  curMonth,  'MONTH',
> firstDate, lastDate, LOCALTIMESTAMP, 'Admin',LOCALTIMESTAMP,
> 'Admin', langCursor.Language_Id);
> end if;
>
> 

Well, it says that the problem occurs on line 44, so what's on the
previous lines it's receiving?  Are you sending an unterminated query
prior to that?
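As a purely hypothetical illustration, a missing semicolon on the previous
statement is a classic way to get an error reported at the wrong place:

-- sketch only: the missing semicolon after the UPDATE makes the
-- parser report "syntax error at or near 'select'" on the next line
update budget_period set period_name = 'Jan' where period_number = 1
select Count(0) into sFound from budget_period;

so the reported line 44 may just be where the parser finally gave up, not
where the actual mistake is.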

Thom




Re: [GENERAL] postgresql 9.4 streaming replication

2015-10-19 Thread Thom Brown
On 19 October 2015 at 09:41, Sven Löschner  wrote:
> I inserted the following in my pg_hba.conf to test, but it does not work:
>
> host    replication     rep_user        0.0.0.0/0       trust
> host    all             postgres        0.0.0.0/0       trust
>
> thank you in advance,
> Sven
>
>> WAL sender process starts when streaming client is active. The connection
>> is initiated by the client. That's why you cannot see it. It also uses the
>> TCP-port that is used by any other pg client. There is a special entry in
>> the pg_hba.conf for the replication clients. Refer to this please:
>> http://www.postgresql.org/docs/9.4/static/auth-pg-hba-conf.html.

Have you checked that the host name you provided resolves to the IP
address you're expecting?  Have you also tried connecting to the
primary server from the standby server manually?

e.g.: psql -h arcserver1 -p 5432 postgres

And perhaps look at the database logs for the standby server to see if
any error messages have come up when trying to connect to the primary.
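Also, once it's working, you probably don't want "trust" open to
0.0.0.0/0.  A tighter entry would look something like this (192.168.0.20
here is a made-up standby address; substitute your own):

host    replication     rep_user        192.168.0.20/32         md5

with a password set for rep_user and included in primary_conninfo on the
standby.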

Thom




Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Thom Brown
On 7 October 2015 at 11:42, Andrus  wrote:
> Hi!
>
> Database idd owner is role idd_owner
> Database has 2 data schemas: public and firma1.
> User may have directly or indirectly assigned rights in this database and
> objects.
> User is not owner of any object. It has only rights assigned to objects.
>
> How to drop such  user ?
>
> I tried
>
> revoke all on all tables in schema public,firma1 from "vantaa" cascade;
> revoke all on all sequences in schema public,firma1 from "vantaa"
> cascade;
> revoke all on database idd from "vantaa" cascade;
> revoke all on all functions in schema public,firma1 from "vantaa"
> cascade;
> revoke all on schema public,firma1 from "vantaa" cascade;
> revoke idd_owner from "vantaa" cascade;
> ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES
> from "vantaa";
> DROP ROLE if exists "vantaa"
>
> but got error
>
> role "vantaa" cannot be dropped because some objects depend on it
> DETAIL:  privileges for schema public
>
> in statement
>
> DROP ROLE if exists "vantaa"
>
> How to fix this so that user can dropped ?
>
> How to create sql or plpgsql method which takes user name as parameter and
> drops this user in all cases without dropping data ?
> Or maybe there is some command or simpler commands in postgres ?

The objects can't be owned by nothing, so you will need to reassign ownership:

REASSIGN OWNED BY old_role TO new_role;

e.g.

REASSIGN OWNED BY vantaa TO postgres;

Then you can drop the role.
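If it's privileges rather than ownership that are still blocking the drop
(as the "privileges for schema public" detail suggests), DROP OWNED also
revokes any privileges the role has been granted, without touching the
data:

REASSIGN OWNED BY vantaa TO postgres;  -- transfer any objects it owns
DROP OWNED BY vantaa;                  -- revoke its remaining privileges
DROP ROLE vantaa;

Note that REASSIGN OWNED and DROP OWNED only operate on the current
database, so they need running in each database where the role has been
granted anything.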

Regards

Thom




Re: [GENERAL] pg_restore fails to restore sequences

2015-09-28 Thread Thom Brown
On 28 September 2015 at 21:47, Tom Lane  wrote:
> Spencer Gardner  writes:
>> I'm transferring all of the databases on my old postgres server to a new
>> server. To do this I'm using pg_dump and then pg_restore:
>
>> pg_dump --host localhost --port 5432 --username "postgres" --format custom
>> --blobs --file ~/backups/census.backup census
>> --and then--
>> pg_restore -Cv -h localhost -p 5432 -U postgres -d postgres ./census.backup
>
>> The pg_restore gives me a series of errors about sequences not existing.
>> The database is restored with all data intact, but the sequences are not
>> recreated.
>
>> [ apparently due to ]
>
>> pg_restore: [archiver (db)] Error while PROCESSING TOC:
>> pg_restore: [archiver (db)] Error from TOC entry 193; 1259 27415 SEQUENCE
>> block_pop_hu_2010_pk_uid_seq gis
>> pg_restore: [archiver (db)] could not execute query: ERROR:  syntax error
>> at or near "USING"
>> LINE 7: USING local;
>> ^
>> Command was: CREATE SEQUENCE block_pop_hu_2010_pk_uid_seq
>> START WITH 1
>> INCREMENT BY 1
>> NO MINVALUE
>> NO MAXVALUE
>> CACHE 1...
>
> Seemingly, it's failing to recreate the sequences because of a syntax
> problem, but I do not see how a clause involving USING could have got
> into the CREATE SEQUENCE command.

Yes, this weirdly looks like it's been built with support for the
as-yet-incomplete sequence AM, which supports "USING local".  I don't
suppose this was dumped from a custom build to work with BDR?

Thom




Re: [GENERAL] pg_restore fails to restore sequences

2015-09-28 Thread Thom Brown
On 28 September 2015 at 22:21, Spencer Gardner  wrote:
> Actually, yes. That's the reason for backing up. We had been playing with
> BDR on a custom build but have reverted to the stock Ubuntu build for the
> time being. So it sounds like the issue is caused by dumping from our custom
> BDR build. It's not really a big issue - I've already rebuilt the affected
> sequences.

Have you tried dumping the database using the stock pg_dump
executable?  The BDR branch isn't compatible with regular PostgreSQL,
at least not yet.

Thom




Re: [GENERAL] Grant SELECT/Execute to View/Function but not underlying Table

2015-09-24 Thread Thom Brown
On 24 September 2015 at 12:28, Alex Magnum  wrote:
> Hi,
> is it possible to grant select to views and functions without the need to
> also grant the user the SELECT privileges to the Tables used in the views or
> functions?
>
> That way I could create read only users on a website and limit their access
> to the bare minimum.
>
> Thanks in advance for any advise on this

Yes.  For views, you just need to provide select access to the user,
but revoke general permissions from the public pseudo role.

Example:

postgres=# create user limited_user;
CREATE ROLE

postgres=# create table mydata (id serial primary key, content text);
CREATE TABLE

postgres=# insert into mydata (content) values ('blue'),('red'),('green');
INSERT 0 3

postgres=# revoke all on mydata from public;
REVOKE

postgres=# create view v_mydata as SELECT content from mydata;
CREATE VIEW

postgres=# grant select on v_mydata to limited_user;
GRANT

postgres=# \c - limited_user
You are now connected to database "postgres" as user "limited_user".

postgres=> select * from mydata;
ERROR:  permission denied for relation mydata

postgres=> select * from v_mydata;
 content
---------
 blue
 red
 green
(3 rows)


With functions, you just set them up with the label SECURITY DEFINER.
This means that the function runs as the owner of the function, rather
than whomever is calling it:

postgres=# \c - postgres

postgres=# CREATE or replace FUNCTION get_colour(colour_id int)
returns text as $$
declare
  colour_name text;
begin
  select content into colour_name from mydata where id = colour_id;
  return colour_name;
end; $$ language plpgsql SECURITY DEFINER;

postgres=# revoke all on function get_colour(int) from public;
REVOKE

postgres=# grant execute on function get_colour(int) to limited_user;
GRANT

postgres=# \c - limited_user
You are now connected to database "postgres" as user "limited_user".

postgres=> select get_colour(2);
 get_colour
------------
 red
(1 row)
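One caveat worth adding: for SECURITY DEFINER functions it's generally
recommended to pin down search_path, so that a caller can't get the
function to resolve table names against objects in a schema they control:

ALTER FUNCTION get_colour(int) SET search_path = public;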


Thom




Re: [GENERAL] Exclusively locking parent tables while disinheriting children.

2015-09-15 Thread Thom Brown
On 7 August 2015 at 12:34, Thom Brown <t...@linux.com> wrote:

>
> On 30 July 2015 at 13:35, Rowan Collins <rowan.coll...@gmail.com> wrote:
>
>> Hi,
>>
>> When working with partition sets, we're seeing occasional errors of
>> "could not find inherited attribute..." in Select queries. This is
>> apparently caused when an "ALTER TABLE ... NO INHERIT" runs concurrently
>> with another transaction selecting from the relevant child table.
>>
>> I found an old bug report filed against 8.3 back in 2008 [1].  I can still
>> reproduce the test case in that report on 9.1.11 and 9.3.5, and it seems to
>> match what we're seeing in production.
>>
>> Tom Lane said at the time that a lock would cause more problems than it
>> solved [2], but when I add an explicit lock statement ("LOCK TABLE ONLY
>> p_tbl, c1_tbl IN ACCESS EXCLUSIVE MODE;") between "BEGIN" and "ALTER
>> TABLE", I get the behaviour I would expect - the SELECT blocks until the
>> transaction is committed, then returns rows from the remaining child table.
>>
>> So what I want to understand is what the risks of adding this lock are -
>> under what circumstances would I expect to see deadlocks if I manually
>> added this lock to my partition maintenance functions?
>>
>
> I'm not clear on the problems such a change would present either, but I'm
> probably overlooking the relevant scenario.
>

Has anyone got insight as to what's wrong with exclusively locking a parent
table to disinherit a child table?

Thom


Re: [GENERAL] Exclusively locking parent tables while disinheriting children.

2015-08-07 Thread Thom Brown
On 30 July 2015 at 13:35, Rowan Collins rowan.coll...@gmail.com wrote:

 Hi,

 When working with partition sets, we're seeing occasional errors of "could
 not find inherited attribute..." in Select queries. This is apparently
 caused when an "ALTER TABLE ... NO INHERIT" runs concurrently with another
 transaction selecting from the relevant child table.

 I found an old bug report filed against 8.3 back in 2008 [1].  I can still
 reproduce the test case in that report on 9.1.11 and 9.3.5, and it seems to
 match what we're seeing in production.

 Tom Lane said at the time that a lock would cause more problems than it
 solved [2], but when I add an explicit lock statement ("LOCK TABLE ONLY
 p_tbl, c1_tbl IN ACCESS EXCLUSIVE MODE;") between "BEGIN" and "ALTER
 TABLE", I get the behaviour I would expect - the SELECT blocks until the
 transaction is committed, then returns rows from the remaining child table.

 So what I want to understand is what the risks of adding this lock are -
 under what circumstances would I expect to see deadlocks if I manually
 added this lock to my partition maintenance functions?


I'm not clear on the problems such a change would present either, but I'm
probably overlooking the relevant scenario.

Thom


Re: [GENERAL] SET LOCAL synchronous_commit TO OFF

2015-06-11 Thread Thom Brown
On 11 June 2015 at 17:34, Robert DiFalco robert.difa...@gmail.com wrote:
 I want to make sure I understand the repercussions of this before making it
 a global setting.

 As far as I can tell this will put data/referential integrity at risk. It
 only means that there is a period of time (maybe 600 msecs) between when a
 commit occurs and when that data is safe in the case of a server crash.

There should be no risk to referential integrity.  All it means is
that the changes won't definitely be in the WAL on disk at the time of
a command reporting that it has successfully completed.  If a crash
were to occur, any changes that would depend on such a command
wouldn't be committed either, so the database should remain in a
consistent state.

So this only risks loss of changes over a short period of time, not
risk of corruption or loss of integrity.
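So a reasonable pattern is to switch it off only for transactions whose
loss in a crash you could tolerate, e.g. (audit_log is just a made-up
example table):

BEGIN;
SET LOCAL synchronous_commit TO OFF;
INSERT INTO audit_log (message) VALUES ('non-critical event');
COMMIT;

SET LOCAL confines the setting to that one transaction, so everything
else keeps fully synchronous commits.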

Regards

Thom




Re: [GENERAL] What is default password for user postgres

2014-12-01 Thread Thom Brown
On 1 December 2014 at 09:08, M Tarkeshwar Rao m.tarkeshwar@ericsson.com
 wrote:

  Hi all,



 I installed version 9.1 in my Ubuntu OS, but not able to login.

 What is default password for user Postgres?


The postgres user doesn't have a password by default, which is probably how
you should keep it.  Typically the pg_hba.conf file (which you'll find in
/etc/postgresql/9.1/main/pg_hba.conf) contains an entry like:

local   all             all                     trust

So reading from left to right, this allows local connections to all
databases for all users using trust authentication, meaning it will believe
you're the postgres user if you say you are, and can be trusted, as long as
you're connecting over a local connection (i.e. using a unix domain
socket).  So become the postgres user:

sudo su - postgres

and then you should just be able to connect to the database without any
issues:

postgres@swift:~$ psql postgres postgres
psql (9.3.1)
Type "help" for help.
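Incidentally, if you'd rather not use trust authentication at all, peer
authentication checks your OS user name against the requested database
user name on local connections:

local   all             all                     peer

With that entry, "sudo -u postgres psql" still works without a password,
but other OS users can no longer simply claim to be postgres.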

Regards

Thom


Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Thom Brown
On 28 October 2014 15:10, Andrus kobrule...@hot.ee wrote:

 Hi!

 I'm looking for finding ealiest possible start times from reservations
 table.

 People work from 10:00 to 21:00 on every week day except Sunday and
 public holidays.

 Jobs for them are reserved at 15 minute intervals and whole job must fit
 to single day.
 Job duration is from 15 minutes to 4 hours.

 Reservat table contains reservations, yksus2 table contains workers and
 pyha table contains public holidays. Table structures are below. Reservat
 structure can be changed if this helps.

 How to find the first 30 earliest possible start times considering
 existing reservations ?

 For example, Mary has already reservation at 12:30 .. 16:00 and
 John has already reservation at 12:00 to 13:00

 In this case query for job with duration of 1.5 hours should return

John 2014-10-28 10:00
Mary 2014-10-28 10:00
John 2014-10-28 10:30
Mary 2014-10-28 10:30
Mary 2014-10-28 11:00
John 2014-10-28 13:00
Mary 2014-10-28 16:00
Mary 2014-10-28 16:30
... etc and also starting from next days

 I tried query based on answer in
 http://stackoverflow.com/questions/13433863/how-to-return-only-work-time-from-reservations-in-postgresql
 below but it returns wrong result:

MARY  2014-10-28 13:00:00
MARY  2014-10-29 22:34:40.850255
JOHN  2014-10-30 22:34:40.850255
MARY  2014-10-31 22:34:40.850255
MARY  2014-11-03 22:34:40.850255

 Also sliding start times 10:00, 10:30 etc are not returned.

 How to get proper first reservations ?

 Query which I tried is

insert into reservat (objekt2, during) values
('MARY', '[2014-10-28 11:30:00,2014-10-28 13:00:00)'),
('JOHN', '[2014-10-28 10:00:00,2014-10-28 11:30:00)');

with gaps as (
select
yksus,
upper(during) as start,
lead(lower(during),1,upper(during)) over (ORDER BY during) -
 upper(during) as gap
from (
select
   yksus2.yksus,
   during
  from reservat join yksus2 on reservat.objekt2=yksus2.yksus
  where upper(during) >= current_date
union all
select
yksus2.yksus,
unnest(case
when pyha is not null then array[tsrange1(d, d +
 interval '1 day')]
when date_part('dow', d) in (0, 6) then
 array[tsrange1(d, d + interval '1 day')]
when d::date =  current_Date then array[
tsrange1(d, current_timestamp ),
tsrange1(d + interval '20 hours', d +
 interval '1 day')]
else array[tsrange1(d, d + interval '8 hours'),
   tsrange1(d + interval '20 hours', d +
 interval '1 day')]
end)
from yksus2, generate_series(
current_timestamp,
current_timestamp + interval '1 month',
interval '1 day'
) as s(d)
left join pyha on pyha = d::date
) as x
)

select yksus, start
  from gaps
where gap >= interval '1 hour 30 minutes'
order by start
limit 30


 Schema:

CREATE EXTENSION btree_gist;
CREATE TABLE Reservat (
  id serial primary key,
  objekt2 char(10) not null references yksus2 on update cascade
 deferrable,
  during tsrange not null check(
 lower(during)::date = upper(during)::date
 and lower(during) between current_date and current_date +
 interval '1 month'

 and (lower(during)::time >= '10:00'::time and
 upper(during)::time < '21:00'::time)
 AND EXTRACT(MINUTE FROM lower(during)) IN (0, 15, 30, 45)
  AND EXTRACT(MINUTE FROM upper(during)) IN (0, 15, 30, 45)
 and (date_part('dow', lower(during)) in (1,2,3,4,5,6)
 and date_part('dow', upper(during)) in (1,2,3,4,5,6))
  ),

  EXCLUDE USING gist (objekt2 WITH =, during WITH &&)
);

create or replace function holiday_check() returns trigger language
 plpgsql stable as $$
begin
if exists (select * from pyha  where pyha in
 (lower(NEW.during)::date, upper(NEW.during)::date)) then
raise exception 'public holiday %', lower(NEW.during) ;
else
return NEW;
end if;
end;
$$;

create trigger holiday_check_i before insert or update on Reservat for
 each row execute procedure holiday_check();

CREATE OR REPLACE FUNCTION public.tsrange1(start timestamp with time
 zone,
finish timestamp with time zone ) RETURNS tsrange AS
$BODY$
SELECT tsrange(start::timestamp without time zone, finish::timestamp
 without time zone );
$BODY$ language sql immutable;


-- Workers
create table yksus2( yksus char(10) primary key);
insert into yksus2 values ('JOHN'), ('MARY');

-- public holidays
create table pyha( pyha date primary key);


Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Thom Brown
On 28 October 2014 19:14, Andrus kobrule...@hot.ee wrote:

   Hi!

  Would you be able to adapt this to your needs?:

  Thank you very much.
 Great solution.
  I refactored it as shown below.
  Query returns only dates for single day. Changing limit clause to 300
 does not return next day.
  How to return other day dates also, excluding sundays and public
 holidays in pyha table ?


It's not a robust solution if you need it to span days, but you could just
increment the 2nd timestamptz parameter in the generate_series function
call by a year:

generate_series('2014-10-28 10:00+2'::timestamptz, '2015-10-28 21:00+2',
'15 mins'::interval)

It's hacky, but it should work.  If you happened to have a policy
whereby reservations couldn't be made beyond, say, 3 months in advance, you
could just give it a date 3 months in the future, and make sure that the
first parameter is capped to the same range.

So here's an example of what you could do (although it could probably be
simplified and made more elegant).  Here it will find times from the
current time until 3 months in the future.  It also filters out holiday
dates.

SELECT yksus2.yksus, times.period
FROM generate_series(now()::date::timestamptz, now()::date::timestamptz +
'3 months'::interval, '15 mins'::interval) times(period)
CROSS JOIN yksus2
LEFT JOIN reservat ON tstzrange(times.period,times.period + '1 hour 30
mins'::interval, '[)') && reservat.during
  AND yksus2.yksus = reservat.objekt2
LEFT JOIN pyha ON times.period::date = pyha.pyha::date
WHERE reservat.during IS NULL
  AND pyha.pyha IS NULL
  AND times.period::time BETWEEN '10:00'::time AND '21:00'::time
  AND times.period >= now()
ORDER BY 2, 1
LIMIT 300;

-- 
Thom


Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Thom Brown
On 28 October 2014 20:04, Thom Brown t...@linux.com wrote:

 On 28 October 2014 19:14, Andrus kobrule...@hot.ee wrote:

   Hi!

  Would you be able to adapt this to your needs?:

  Thank you very much.
 Great solution.
  I refactored it as shown below.
  Query returns only dates for single day. Changing limit clause to 300
 does not return next day.
  How to return other day dates also, excluding sundays and public
 holidays in pyha table ?


 It's not a robust solution if you need it to span days, but you could just
 increment the 2nd timestamptz parameter in the generate_series function
 call by a year:

 generate_series('2014-10-28 10:00+2'::timestamptz, '2015-10-28 21:00+2',
 '15 mins'::interval)

 It's hacky, but it should work.  If you happened to have a policy
 whereby reservations couldn't be made beyond, say, 3 months in advance, you
 could just give it a date 3 months in the future, and make sure that the
 first parameter is capped to the same range.

 So here's an example of what you could do (although it could probably be
 simplified and made more elegant).  Here it will find times from the
 current time until 3 months in the future.  It also filters out holiday
 dates.

 SELECT yksus2.yksus, times.period
 FROM generate_series(now()::date::timestamptz, now()::date::timestamptz +
 '3 months'::interval, '15 mins'::interval) times(period)
 CROSS JOIN yksus2
 LEFT JOIN reservat ON tstzrange(times.period,times.period + '1 hour 30
 mins'::interval, '[)') && reservat.during
   AND yksus2.yksus = reservat.objekt2
 LEFT JOIN pyha ON times.period::date = pyha.pyha::date
 WHERE reservat.during IS NULL
   AND pyha.pyha IS NULL
   AND times.period::time BETWEEN '10:00'::time AND '21:00'::time
   AND times.period >= now()
 ORDER BY 2, 1
 LIMIT 300;


A correction to this.  As it stands, it will show times like the following:

  yksus  |         period
---------+------------------------
...
 JOHN   | 2014-10-30 19:45:00+00
 MARY   | 2014-10-30 19:45:00+00
 JOHN   | 2014-10-30 20:00:00+00
 MARY   | 2014-10-30 20:00:00+00
 JOHN   | 2014-10-30 20:15:00+00
 MARY   | 2014-10-30 20:15:00+00
 JOHN   | 2014-10-30 20:30:00+00
 MARY   | 2014-10-30 20:30:00+00
 JOHN   | 2014-10-30 20:45:00+00
 MARY   | 2014-10-30 20:45:00+00
 JOHN   | 2014-10-30 21:00:00+00
 MARY   | 2014-10-30 21:00:00+00
 JOHN   | 2014-10-31 10:00:00+00
 MARY   | 2014-10-31 10:00:00+00
...

This is incorrect: a 1.5 hour appointment after 19:30 would go beyond the
working hours.  So that needs to be factored into it:

SELECT yksus2.yksus, times.period
FROM generate_series(now()::date::timestamptz, now()::date::timestamptz +
'3 months'::interval, '15 mins'::interval) times(period)
CROSS JOIN yksus2
LEFT JOIN reservat ON tstzrange(times.period,times.period + '1 hour 30
mins'::interval, '[)') && reservat.during
  AND yksus2.yksus = reservat.objekt2
LEFT JOIN pyha ON times.period::date = pyha.pyha::date
WHERE reservat.during IS NULL
  AND pyha.pyha IS NULL
  AND times.period::timetz BETWEEN '10:00'::timetz AND '21:00'::timetz - '1
hour 30 mins'::interval
  AND times.period >= now()
ORDER BY 2, 1
LIMIT 300;

This gives you:

  yksus  |         period
---------+------------------------
...
 JOHN   | 2014-10-30 19:15:00+00
 MARY   | 2014-10-30 19:15:00+00
 JOHN   | 2014-10-30 19:30:00+00
 MARY   | 2014-10-30 19:30:00+00
 JOHN   | 2014-10-31 10:00:00+00
 MARY   | 2014-10-31 10:00:00+00
 JOHN   | 2014-10-31 10:15:00+00
 MARY   | 2014-10-31 10:15:00+00
...

Regards

Thom


Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Thom Brown
On 28 October 2014 21:07, Andrus kobrule...@hot.ee wrote:

   Hi!

  A correction to this.  As it stands, it will show times like the
 following:

  Thank you.
  I posted your solution as alternative to Erwin answer in

 http://stackoverflow.com/questions/26608683/how-to-find-first-free-start-times-from-reservations-in-postgres



A further tweak; add the following to the WHERE clause:

AND EXTRACT(DOW FROM times.period) != 0

This will ensure Sundays are excluded.  I don't know if you want Saturdays
excluded, but you can easily adjust it for that.
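e.g. to exclude both Sundays (0) and Saturdays (6):

AND EXTRACT(DOW FROM times.period) NOT IN (0, 6)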

Thom


[GENERAL] Non-capturing expressions

2014-10-25 Thread Thom Brown
Hi all,

It must be that I haven't had enough caffeine today, but I can't figure out
why the following expression captures the non-capturing part of the text:

# SELECT regexp_matches('postgres','(?:g)r');
 regexp_matches

 {gr}
(1 row)

I'm expecting '{r}' in the output as I thought this would use ARE mode by
default.

Thom


Re: [GENERAL] Non-capturing expressions

2014-10-25 Thread Thom Brown
On 25 October 2014 11:49, Francisco Olarte fola...@peoplecall.com wrote:

 Hi Thom:

 On Sat, Oct 25, 2014 at 11:24 AM, Thom Brown t...@linux.com wrote:

 It must be that I haven't had enough caffeine today, but I can't figure
 out why the following expression captures the non-capturing part of the
 text:
 # SELECT regexp_matches('postgres','(?:g)r');
  regexp_matches
 
  {gr}
 (1 row)


 Section 9.7.3, search for 'If the pattern contains no parenthesized
 subexpressions, then each row returned is a single-element text array
 containing the substring matching the whole pattern.'


Ah, I knew I missed something:

# SELECT regexp_matches('postgres','(?:g)(r)');
 regexp_matches

 {r}
(1 row)

Although I can see it's redundant in this form.



 I'm expecting '{r}' in the output as I thought this would use ARE mode by
 default.


 Why r ? Your pattern is exactly the same as 'gr'. NOTHING gets captured.
 To get that you'll need the opposite 'g(r)' to capture it. By default
 nothing gets captured, the (?:...) construction is used because () does
 GROUPING and CAPTURING, and sometimes you want grouping WITHOUT capturing.
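
To illustrate the quoted point, `(?:...)` lets you group for a quantifier or
alternation without creating a capture group (example string is arbitrary):

```sql
-- the (?:es)+ group participates in matching but captures nothing,
-- so with no parenthesized captures the whole match is returned
SELECT regexp_matches('postgreses', 'gr(?:es)+');
-- {greses}
```
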


I'm familiar with regular expression syntax; I'm just familiarising myself
with PostgreSQL's syntax flavour.

Thanks

Thom


Re: [GENERAL] hstore - jsonb

2014-04-04 Thread Thom Brown
On 4 April 2014 13:04, Oleg Bartunov obartu...@gmail.com wrote:
 On Fri, Apr 4, 2014 at 12:20 PM, Armand Turpel
 armand.turpel.m...@gmail.com wrote:
 Hi,

 A few questions about jsonb and hstore:

 1. does jsonb replace hstore?

 no, it's different data type

 2. compatibility of jsonb & hstore?

 hstore is a simple key-value model, while jsonb is a richer, document-based
 model.

 3. will the development of hstore continue?

 I don't believe so, we froze nested hstore to concentrate development
 resources to jsonb.   Nested hstore still available from
 http://www.sigaev.ru/git/gitweb.cgi?p=hstore.git;a=summary
 It's should be compatible with 9.3


 4. is it recommended to use jsonb when planning new projects?

 yes, we are working on jsonb support.

One major advantage of hstore over json/jsonb at the moment is data
manipulation, which could make json/jsonb a non-starter for some.

For example, in hstore one can do:

-- remove a key/value pair by key
UPDATE mytable SET hcolumn = hcolumn - 'mykey'::text;

or:

-- remove a key/value pair by key/value
UPDATE mytable SET hcolumn = hcolumn - 'mykey=>myvalue'::hstore;

or:

-- add/replace a key/value pair
UPDATE mytable SET hcolumn = hcolumn || 'newkey=>newvalue'::hstore;


You can't do something like that with json/jsonb at the moment, and
that's not going to be possible in the upcoming version either.  You'd
probably have to resort to application-side modification, or use
something like pl/v8.
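
As a hedged sketch of the kind of server-side workaround that becomes
possible once 9.4's json_object_agg() is available (the table, column and
key names here are hypothetical), deleting a key means rebuilding the object
without it:

```sql
-- rebuild the JSON object minus one key
-- (assumes json_each from 9.3 and json_object_agg from 9.4)
UPDATE mytable
SET jcolumn = (SELECT json_object_agg(key, value)
               FROM json_each(jcolumn)
               WHERE key <> 'mykey');
-- note: yields NULL if 'mykey' was the only key in the object
```
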

-- 
Thom




Re: [GENERAL] hstore - jsonb

2014-04-04 Thread Thom Brown
On 4 April 2014 16:15, Oleg Bartunov obartu...@gmail.com wrote:
 We'll work on contrib/jsonxtra with all operators ported from hstore
 and release it after 9.4 as separate extension.

That would be useful. :)

Would there be an aim of getting that in-core for 9.5?

-- 
Thom




Re: [GENERAL] Multi-dimensional arrays

2014-03-15 Thread Thom Brown
On 15 March 2014 12:51, Raymond O'Donnell r...@iol.ie wrote:
 Hello all,

 Here's an odd one (to me anyway) which I ran into today if I have a
 multidimensional array, why does the following return NULL?

 select (array[['abc','def'], ['ghi','jkl']])[1]

 I would have expected it to return {abc, def}. This, however, returns
 'abc' as expected:

 select (array[['abc','def'], ['ghi','jkl']])[1][1]

Hmm... I'm not sure.  You can get to it with slices for some reason:

#  select (array[['abc','def'], ['ghi','jkl']])[0:1];
array
-
 {{abc,def}}
(1 row)

# select (array[['abc','def'], ['ghi','jkl']])[1:2];
 array
---
 {{abc,def},{ghi,jkl}}
(1 row)

# select (array[['abc','def'], ['ghi','jkl']])[2:3];
array
-
 {{ghi,jkl}}
(1 row)
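
If a 1-D array is actually needed, one workaround (a sketch, not the only
way) is to unnest the single-row slice and re-aggregate the elements:

```sql
-- take the first "row" of the 2-D array and rebuild it as a 1-D array
SELECT array_agg(elem)
FROM unnest((array[['abc','def'], ['ghi','jkl']])[1:1]) AS elem;
-- {abc,def}
```
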

-- 
Thom




Re: [GENERAL] Multi-dimensional arrays

2014-03-15 Thread Thom Brown
On 15 March 2014 16:21, Tom Lane t...@sss.pgh.pa.us wrote:
 Raymond O'Donnell r...@iol.ie writes:
 True... though that gives you a 2D array, whereas I was hoping for a 1D
 array from (array[...])[1].

 Postgres does not think of multi-D arrays as being arrays of arrays.
 This is problematic mainly because the SQL standard does think of them
 that way.  I'm not sure if there's any hope of changing it though ---
 there's probably too much code that would be broken if we did.

Could we potentially introduce a GUC to get it to behave in a less
surprising way, and eventually make it the default?

-- 
Thom




Re: [GENERAL] 9.1.11 - many backends in semtimedop syscall

2014-03-10 Thread Thom Brown
On 10 March 2014 15:32, hubert depesz lubaczewski dep...@depesz.com wrote:
 On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
 On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
  hubert depesz lubaczewski dep...@depesz.com writes:
   I didn't have a chance to do it. Can try if there is a way to get trace
   *without* making core (sorry, my c/gdb knowledge is very, very limited).
 
  Sure, you just attach to the process:
 
  $ gdb /path/to/postgres PID-of-process
  gdb> bt
  gdb> quit
 
  This is usually preferable to forcing a core dump.

 Thank you. If the problem will strike again, I will do it on all (or
 most, depending how fast I can make it) backends.

 The problem did happen again, and we were able to find a fix (I think).
 For some reason we had a table with over 50000 (yes, 50 thousand)
 indexes on it. This table was a bucardo internals table, so maybe it was
 something in bucardo (we are using it to migrate hundreds of tables to
 another machine), so maybe it has something to do with it.

This sort of thing is the reason why I'd want to see index maintenance
nodes in explain (analyse) plans, so that it's possible to gauge their
contribution to the overall duration of a DML statement.
-- 
Thom




Re: [GENERAL] Mysterious DB reset

2014-03-05 Thread Thom Brown
On 5 March 2014 18:22, Israel Brewster isr...@eraalaska.net wrote:

 I have a Postgresql 9.2.4 database containing real-time tracking data for
 our aircraft for the past week (theoretically). It is populated by two
 different processes: one that runs every few minutes, retrieving data from
 a number of sources and storing it in the DB, and one that has an always
 on connection to the DB streaming data into the database in realtime
 (often several records per second). To keep the database size manageable I
 have a cron job that runs every evening to delete all records that are more
 than a week old, after archiving a subset of them in permanent storage.

 This morning my boss e-mailed me, complaining that only a couple of
 aircraft were showing up in the list (SELECT distinct(tail) FROM data being
 the command that populates the list). Upon looking at the data I saw that
 it only went back to 4am this morning, rather than the week I was
 expecting. My first thought was Oh, I must have a typo in my cleanup
 routine, such that it is deleting all records rather than only those a week
 old, and it's just that no one has noticed until now. So I looked at that,
 but changing the delete to a select appeared to produce the proper results,
 in that no records were selected:

 DELETE FROM data WHERE pointtime < now() AT TIME ZONE 'UTC' - interval '7
 days';

 Then I noticed something even more odd. My database has an id column,
 which is defined as a SERIAL. As we all know, a serial is a monotonically
 increasing number that is not affected by deletes. However, the oldest
 record in my database, from 4am this morning, had an id of 1. Even though I
 KNOW there was data in the system yesterday. Even if my DELETE command was
 wrong and deleted ALL records, that shouldn't have reset the SERIAL column
 to 1! I also know that I have not been in the database mucking around with
 the sequence value - to be completely honest, I don't even know the exact
 command to reset it - I'd have to google it if I wanted to.

 Also odd is that my cleanup script runs at 1am. I have records of there
 being new data in the database up to 3:51am, but the oldest record
 currently in the DB is from 4:45am (as specified by the default of now() on
 the column). So I know records were added after my delete command ran, but
 before this reset occurred.

 So my question is, aside from someone going in and mucking about in the
 wee hours of the morning, what could possibly cause this behavior? What
 sort of event could cause all data to be deleted from the table, and the
 sequence to be reset? Especially while there is an active connection?
 Thanks for any ideas, however wild or off the wall :-)


That is odd.  Even if it were an unlogged table, and there was a crash, the
sequence wouldn't reset, and even if it was running in a very long-running
transaction held open by a buggy connection pooler, the sequence would
still progress as it's immune to the effects of transactions.

So if all the data went missing, and the sequence reset, the only thing I
can think of is:

Someone ran:

TRUNCATE data RESTART IDENTITY;

or someone restored the table structure from a backup that deleted the
original table.

Do you log DDL?

Was the table partitioned?

You should also really be on 9.2.7, although I can't think of any bug
that's been fixed which could be responsible for this issue.
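
For what it's worth, the transaction-immunity of sequences mentioned above is
easy to verify directly (the sequence name here is made up for the demo):

```sql
CREATE SEQUENCE demo_seq;
BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;
SELECT nextval('demo_seq');  -- returns 2: the rollback did not rewind it
```
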

-- 
Thom


Re: [GENERAL] How can I use the crosstab() function?

2014-02-25 Thread Thom Brown
On 25 February 2014 23:30, Rob Richardson rdrichard...@rad-con.com wrote:

  Hello!



 I am trying to use the crosstab() function in PostgreSQL 9.0 under Windows
 7.  My table has three columns: a timestamp, a tag name and a tag value.  I
 am trying to generate a table that has one column for every distinct value
 in the tag name field.  Each row of the crosstab table will have the
 timestamp, plus values for every tag name that had a value recorded at that
 time.  Of course, many of the fields in each row will be null.



 It seems to me that the crosstab() function will do exactly what I want,
 but I can't get it to work.  If I try:

 select * from

 crosstab('select recorded_date, tag_name, value from plc_values')

 as ct(recorded_date timestamp with time zone, tag_name text, tag_value
 text);

  I get "function crosstab(unknown) does not exist".



 According to documentation, I think I need to add the tablefunc extension
 first.  But when I run

 create extension tablefunc;

 I get a syntax error pointing to the word extension.



 What haven't I done?


Hi Rob,

Extensions weren't introduced until PostgreSQL 9.1, so you'll have to
install it the old-fashioned way.  See the bottom of 9.0's page on contrib
modules for what the method is:

http://www.postgresql.org/docs/9.0/static/contrib.html
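
On 9.0 that typically means loading the contrib SQL script into the target
database from psql; the exact path varies by platform and packaging, so the
one below is only an assumption:

```sql
-- run inside psql, connected to the target database;
-- adjust the path to wherever your packages install contrib scripts
\i /usr/share/postgresql/9.0/contrib/tablefunc.sql
```
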

-- 
Thom


Re: [GENERAL] postgres-fdw questions

2014-01-24 Thread Thom Brown
On 24 January 2014 09:20, Emmanuel Medernach meder...@clermont.in2p3.fr wrote:
 Hello,

 I'm currently testing postgres_fdw feature on PostgreSQL 9.3.2 and I have
 some questions:

  - What are the limits to the number of foreign tables ?

As far as I'm aware, there isn't one.

  - What is the current status about foreign joins push-down ?

WHERE conditions are pushed down, but I don't think joins or aggregates are.

Thom




[GENERAL] Unix domain socket inconsistency

2013-12-22 Thread Thom Brown
Hi all,

I'm a bit confused by my development set up.  I can connect to
PostgreSQL using unix domain sockets by not specifying any host with
psql, and the same applies to vacuumdb, createdb and dropdb.  However,
when I go to use pgbench, it seems to be looking in the wrong place
for the domain socket:

thom@swift ~/Development/postgresql $ psql postgres
psql (9.4devel)
Type "help" for help.

postgres=# \q
thom@swift ~/Development/postgresql $ createdb pgbench
thom@swift ~/Development/postgresql $ pgbench -i -s 20 pgbench
Connection to database pgbench failed:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /var/run/postgresql/.s.PGSQL.5488?

The actual location is /tmp/.s.PGSQL.5488:

thom@swift ~/Development/postgresql $ pg_ctl stop
waiting for server to shut downLOG:  received smart shutdown request
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
done
server stopped
thom@swift ~/Development/postgresql $ psql postgres
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /tmp/.s.PGSQL.5488?

What am I missing?

-- 
Thom




Re: [GENERAL] Unix domain socket inconsistency

2013-12-22 Thread Thom Brown
On 23 December 2013 01:13, Adrian Klaver adrian.kla...@gmail.com wrote:
 On 12/22/2013 04:51 PM, Thom Brown wrote:

 PostgreSQL using unix domain sockets by not specifying any host with
 psql, and the same applies to vacuumdb, createdb and dropdb.  However,
 when I go to use pgbench, it seems to be looking in the wrong place
 for the domain socket:

 thom@swift ~/Development/postgresql $ psql postgres
 psql (9.4devel)
 Type "help" for help.

 postgres=# \q
 thom@swift ~/Development/postgresql $ createdb pgbench
 thom@swift ~/Development/postgresql $ pgbench -i -s 20 pgbench
 Connection to database pgbench failed:
 could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /var/run/postgresql/.s.PGSQL.5488?

 The actual location is /tmp/.s.PGSQL.5488:

 thom@swift ~/Development/postgresql $ pg_ctl stop
 waiting for server to shut downLOG:  received smart shutdown request
 LOG:  autovacuum launcher shutting down
 LOG:  shutting down
 LOG:  database system is shut down
 done
 server stopped
 thom@swift ~/Development/postgresql $ psql postgres
 psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /tmp/.s.PGSQL.5488?

 What am I missing?


 Best guess pgbench is being run from a different install then rest of your
 Postgres set up. Might want to check whether you have more than one copy of
 pgbench and if so where they are in PATH.

I should have mentioned that I've checked that:

thom@swift ~/Development $ which psql pgbench
/home/thom/Development/psql/bin//psql
/home/thom/Development/psql/bin//pgbench

-- 
Thom




Re: [GENERAL] Unix domain socket inconsistency

2013-12-22 Thread Thom Brown
On 23 December 2013 01:15, Thom Brown t...@linux.com wrote:
 On 23 December 2013 01:13, Adrian Klaver adrian.kla...@gmail.com wrote:
 On 12/22/2013 04:51 PM, Thom Brown wrote:

 PostgreSQL using unix domain sockets by not specifying any host with
 psql, and the same applies to vacuumdb, createdb and dropdb.  However,
 when I go to use pgbench, it seems to be looking in the wrong place
 for the domain socket:

 thom@swift ~/Development/postgresql $ psql postgres
 psql (9.4devel)
 Type "help" for help.

 postgres=# \q
 thom@swift ~/Development/postgresql $ createdb pgbench
 thom@swift ~/Development/postgresql $ pgbench -i -s 20 pgbench
 Connection to database pgbench failed:
 could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /var/run/postgresql/.s.PGSQL.5488?

 The actual location is /tmp/.s.PGSQL.5488:

 thom@swift ~/Development/postgresql $ pg_ctl stop
 waiting for server to shut downLOG:  received smart shutdown request
 LOG:  autovacuum launcher shutting down
 LOG:  shutting down
 LOG:  database system is shut down
 done
 server stopped
 thom@swift ~/Development/postgresql $ psql postgres
 psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /tmp/.s.PGSQL.5488?

 What am I missing?


 Best guess pgbench is being run from a different install then rest of your
 Postgres set up. Might want to check whether you have more than one copy of
 pgbench and if so where they are in PATH.

 I should have mentioned that I've checked that:

 thom@swift ~/Development $ which psql pgbench
 /home/thom/Development/psql/bin//psql
 /home/thom/Development/psql/bin//pgbench

Hmm... now this is odd.  I went to correct the PATH information in my
.bashrc script file (to remove the superfluous trailing
forward-slash), and when running both psql and pgbench in a new
terminal, all is fine:

thom@swift ~/Development $ which psql pgbench
/home/thom/Development/psql/bin/psql
/home/thom/Development/psql/bin/pgbench
thom@swift ~/Development $ psql postgres
psql (9.4devel)
Type "help" for help.

postgres=# \q
thom@swift ~/Development $ pgbench -i -s 20 pgbench
NOTICE:  table pgbench_history does not exist, skipping
NOTICE:  table pgbench_tellers does not exist, skipping
NOTICE:  table pgbench_accounts does not exist, skipping
NOTICE:  table pgbench_branches does not exist, skipping
creating tables...
100000 of 2000000 tuples (5%) done (elapsed 0.10 s, remaining 1.86 s).
200000 of 2000000 tuples (10%) done (elapsed 0.19 s, remaining 1.72 s).
...

So it's all working now, but I don't know why.  I don't think that
change would be the thing that fixed it.  Perhaps it's something to do
with me attempting to run pgbench before building the development
contrib module of it, so it picks up the OS package-installed version,
and when I go to build the local dev version, it has cached the
location for the domain socket from the previous attempt.  *shrug*

-- 
Thom




Re: [GENERAL] Foreign Key violated

2013-05-29 Thread Thom Brown
On 23 May 2013 15:33, Thom Brown t...@linux.com wrote:
 On 23 May 2013 10:15, Keith Fiske ke...@omniti.com wrote:
 Client reported an issue where it appears a foreign key has been violated

 prod=#\d rma_items
 [snip]
 rma_items_rma_id_status_fk FOREIGN KEY (rma_id, rma_status) REFERENCES
 rmas(id, status) ON UPDATE CASCADE ON DELETE CASCADE

 prod=# select i.rma_id, i.rma_status, r.id, r.status from rmas r join
 rma_items i on i.rma_id = r.id and i.rma_status != r.status;
rma_id   | rma_status | id | status
 +++
  1008122437 | r  | 1008122437 | c
 (1 row)


 Attempting to reinsert this data again causes a violation error, so it
 doesn't appear to be broken

 prod=# begin;
 BEGIN
 prod=# insert into rma_items (rma_id, order_item_id, return_reason_id,
 rma_status) values (1008122437, 1007674099, 9797623, 'r');
 ERROR:  insert or update on table rma_items violates foreign key
 constraint rma_items_rma_id_status_fk
 DETAIL:  Key (rma_id, rma_status)=(1008122437, r) is not present in table
 rmas.
 prod=# rollback;
 ROLLBACK

 This is running 9.2.4 on CentOS. If anyone can suggest how I can look into
 this deeper and find what the problem may be, I'd appreciate it. I'm here at
 PGCon if anyone is available to help IRL as well

 What do you get with:

 SELECT conname
 FROM pg_constraint
 WHERE NOT convalidated;

Did you resolve this?

--
Thom




Re: [GENERAL] Foreign Key violated

2013-05-23 Thread Thom Brown
On 23 May 2013 10:15, Keith Fiske ke...@omniti.com wrote:
 Client reported an issue where it appears a foreign key has been violated

 prod=#\d rma_items
 [snip]
 rma_items_rma_id_status_fk FOREIGN KEY (rma_id, rma_status) REFERENCES
 rmas(id, status) ON UPDATE CASCADE ON DELETE CASCADE

 prod=# select i.rma_id, i.rma_status, r.id, r.status from rmas r join
 rma_items i on i.rma_id = r.id and i.rma_status != r.status;
rma_id   | rma_status | id | status
 +++
  1008122437 | r  | 1008122437 | c
 (1 row)


 Attempting to reinsert this data again causes a violation error, so it
 doesn't appear to be broken

 prod=# begin;
 BEGIN
 prod=# insert into rma_items (rma_id, order_item_id, return_reason_id,
 rma_status) values (1008122437, 1007674099, 9797623, 'r');
 ERROR:  insert or update on table rma_items violates foreign key
 constraint rma_items_rma_id_status_fk
 DETAIL:  Key (rma_id, rma_status)=(1008122437, r) is not present in table
 rmas.
 prod=# rollback;
 ROLLBACK

 This is running 9.2.4 on CentOS. If anyone can suggest how I can look into
 this deeper and find what the problem may be, I'd appreciate it. I'm here at
 PGCon if anyone is available to help IRL as well

What do you get with:

SELECT conname
FROM pg_constraint
WHERE NOT convalidated;

--
Thom




Re: [GENERAL] Unlogged indexes

2013-05-03 Thread Thom Brown
On 3 May 2013 21:06, Yang Zhang yanghates...@gmail.com wrote:
 Guessing the answer's no, but is there any way to construct indexes
 such that I can safely put them on (faster) volatile storage? (Just to
 be clear, I'm asking about indexes for *logged* tables.)

Yes:

CREATE INDEX ... TABLESPACE tablespacename;
ALTER INDEX ... SET TABLESPACE tablespacename;

Although there's a disparity between your email subject and main text.
 Indexes for logged tables are always logged.  If you want an unlogged
index you can only create it for an unlogged table.

And putting indexes on a separate tablespace is probably not as
advantageous as you're thinking.  Might be worth testing.

--
Thom




Re: [GENERAL] UPDATE using 3 medium-sized tables causes runaway hash table and fills disk

2013-04-29 Thread Thom Brown
On 26 April 2013 15:39, Rowan Collins rowan.coll...@gmail.com wrote:
 Hi All,

 I've come upon some very strange behaviour with an UPDATE query which causes
 Postgres to consume all the disk space on the server for no apparent reason.

 Basically, I'm trying to run an UPDATE involving three medium-sized tables
 (~60k rows each), which generates a query plan with three Hash Joins. But
 when I try to execute it, Postgres appears to go into some kind of loop,
 gradually filling up the disk partition. After a long wait it responds with
 ERROR: could not write to hash-join temporary file: No space left on
 device; SQL state: 53100; the partition in question has 9.5GB free at the
 beginning of the query - that's a lot of hash file!

 If I run ANALYZE temp_fares_mappings; - the table which is being Updated,
 and is the outermost in the query plan - the problem goes away *even though
 the Query Plan hasn't changed*.

 I'm not entirely sure how to simplify the query and still reproduce the
 issue, but I've produced an anonymized data sample and SQL query at
 http://rwec.co.uk/x/break_postgres.zip In this case, Analyze *does* change
 the query plan (as well as fixing the issue), but I can consistently
 reproduce the disk-filling behaviour using this sample on Postgres 9.0 and
 9.2.

 Note that it doesn't appear to matter if one or all of the tables are
 permanent, as I have been using various combinations for debugging, and
 always get the same behaviour. Trying to write a corresponding SELECT
 statement doesn't seem to generate the same issue, at least with the queries
 I've tried.

 - The plan for the real query is here: http://explain.depesz.com/s/WTP
 - Sample data and SQL query: http://rwec.co.uk/x/break_postgres.zip
 - Query plan for sample data, without running Analyze, is here:
 http://explain.depesz.com/s/qsH
 - Postgres version strings: PostgreSQL 9.0.5 on x86_64-pc-linux-gnu,
 compiled by GCC x86_64-pc-linux-gnu-gcc (Gentoo 4.5.2 p1.1, pie-0.4.5)
 4.5.2, 64-bit and PostgreSQL 9.2.1 on x86_64-pc-linux-gnu, compiled by
 x86_64-pc-linux-gnu-gcc (Gentoo Hardened 4.5.4 p1.0, pie-0.4.7) 4.5.4,
 64-bit

 Does anyone have any idea what's going on here, and whether it is in fact a
 bug? It certainly feels like one...
 Thanks for your time.

You done gone broke Postgres.  The same occurs when converting your
UPDATE query into a SELECT when I try it:

SELECT
  *
FROM
  test_mappings AS LFM, test_low_fares AS LF
JOIN
  test_cob_offers AS CO
  ON
CO.num_adults_searched   = LF.number_in_party
  AND
CO.num_children_searched = 0
  AND
CO.num_infants_searched  = 0
WHERE
  LFM.low_fares_row_id = LF.low_fares_row_id
AND
  CO.central_offer_id  = LFM.central_offer_id
AND
  CO.credential_set_id = LFM.credential_set_id
AND
  LFM.cob_offer_id IS NULL;

Well something appears to be tangling up the executor.
--
Thom




Re: [GENERAL] regex help wanted

2013-04-25 Thread Thom Brown
On 25 April 2013 15:32, Tom Lane t...@sss.pgh.pa.us wrote:
 Karsten Hilbert karsten.hilb...@gmx.net writes:
 What I don't understand is: Why does the following return a
 substring ?

   select substring ('junk $allergy::test::99$ junk' from 
 '\$[^]+?::[^:]+?\$');

 There's a perfectly valid match in which [^]+? matches allergy::test
 and [^:]+? matches 99.

Yeah, I think there may be an assumption that a lazy quantifier will
stop short and cause the remainder to fail to match permanently, but
it will backtrack, forcing the lazy quantifier to expand until it can
match the expression.
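
A small demonstration of that expansion: even a lazy `+?` grows past its
shortest match when the rest of the pattern would otherwise fail (the test
string is arbitrary):

```sql
-- '.+?' would prefer a single character, but must expand from 'l' to
-- 'llerg' before the trailing 'y' can match
SELECT substring('allergy' from 'a.+?y');
-- returns 'allergy'
```
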

--
Thom




Re: [GENERAL] Grant problem and how to prevent users to execute OS commands?

2012-08-21 Thread Thom Brown
On 20 August 2012 19:34, Evil evilofreve...@hotmail.com wrote:
 Hello List,
 First time here also beginner to Postgres.So please forgive me for any
 mistakes.
 I'm pretty sure I have the same problem:
 http://archives.postgresql.org/pgsql-admin/2012-03/msg00105.php
 (After searching, I found it.)
 However, it is not a solution for me. :( *I'm pretty sure I'm doing
 something the wrong way*.
 After issuing that REVOKE FROM PUBLIC, my postgres user is still able to
 connect to any database.
 Moreover, when executing \l the user is able to see all database names.

 So I have 2 questions:
 1 ) How can I allow my user(s) to connect only to a *granted* database, not
 *any*?
 2 ) Users are still able to execute OS (operating system) commands on the
 system. This is a big security risk. How can I prevent that too?

 Any recommendations,manuals,helps,hints,RTFM :P are welcome;)

The postgres user is a database superuser.  Trying to prevent it from
connecting to databases is understandably impossible using the GRANT
and REVOKE system, but no end-user should ever connect to the database
cluster as a superuser.  Any operating system commands issued via
unsafe procedural languages are only run as the user the database
instance is running as, typically the user postgres on the OS, so it
has limited permissions by default.

But here's an example of how to prevent a typical user from connecting
to a database:

postgres=# create database test;
CREATE DATABASE
postgres=# create user test;
CREATE ROLE
postgres=# \c test test
You are now connected to database test as user test.
test=> \c postgres postgres
You are now connected to database postgres as user postgres.
postgres=# revoke connect on database test from public, test;
REVOKE
postgres=# \c test test
FATAL:  permission denied for database test
DETAIL:  User does not have CONNECT privilege.
Previous connection kept

You can also set up further connection rules in pg_hba.conf:
http://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html

It will even allow you to prevent database superusers from logging in.
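
For example, a pair of pg_hba.conf entries like the following (the addresses
are placeholders) rejects remote superuser logins while still letting other
users authenticate; order matters, since the first matching line wins:

```
# reject any remote login attempt by the postgres superuser
host  all  postgres  0.0.0.0/0  reject
# everyone else authenticates with a password
host  all  all       0.0.0.0/0  md5
```
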

Regards

Thom




Re: [GENERAL] Reading storage parameters

2012-06-21 Thread Thom Brown
On 21 June 2012 13:12, Daniele Varrazzo daniele.varra...@gmail.com wrote:
 Hello,

 is there a way to read the storage parameters values set on a table
 (i.e. what set with ALTER TABLE foo SET (autovacuum_enabled=false) and
 so on...)? I can't find it in the docs.

SELECT c.reloptions
FROM pg_class c
INNER JOIN pg_namespace n
  ON c.relnamespace = n.oid
WHERE c.relname = 'tablename'
AND n.nspname = 'schemaname';
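
A shorter equivalent, since a schema-qualified name can be cast straight to
regclass:

```sql
-- same information without the explicit join
SELECT reloptions
FROM pg_class
WHERE oid = 'schemaname.tablename'::regclass;
```
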

-- 
Thom



Re: [GENERAL] Having trouble with pg_dumpall -o

2012-05-01 Thread Thom Brown
On 1 May 2012 11:12, Matthew Churcher matthew.churc...@realvnc.com wrote:
 Hi PostgreSQL users,

 I'm having difficulty migrating a postgres 8.4.11 database to postgres
 9.1.2, neither of the included pg_dumpall tools appear to honour the -o or
 --oids options and fail to dump the table oids from the old database as we
 require.

 I've tried various combinations and orders of options including those
 below:-
 pg_dumpall -o -p 5432 -U postgres > dump_o.sql
 pg_dumpall --oids -p 5432 -U postgres > dump_o.sql
 pg_dumpall -p 5432 -U postgres -o > dump_o.sql
 pg_dumpall -p 5432 -U postgres --oids > dump_o.sql

I can confirm that pg_dumpall's -o option doesn't appear to work, but
pg_dump's -o does.  This looks like a bug to me, especially since it
also doesn't produce the WITH OIDS option on the CREATE TABLE
statement for tables which do have OIDs.

-- 
Thom



Re: [GENERAL] Having trouble with pg_dumpall -o

2012-05-01 Thread Thom Brown
On 1 May 2012 11:55, Matthew Churcher matthew.churc...@realvnc.com wrote:
 Thanks Thom, that's really useful to know however  I've been unable to get
 it working with pg_dump either. Are you able to offer any insight there?
 What command line options are  you using?

 I get the same result with:
 pg_dump -o mydatabase
 pg_dump mydatabase

Can you confirm your tables actually use OIDs?  By default they don't.

-- 
Thom



Re: [GENERAL] Having trouble with pg_dumpall -o

2012-05-01 Thread Thom Brown
On 1 May 2012 11:22, Thom Brown t...@linux.com wrote:
 On 1 May 2012 11:12, Matthew Churcher matthew.churc...@realvnc.com wrote:
 Hi PostgreSQL users,

 I'm having difficulty migrating a postgres 8.4.11 database to postgres
 9.1.2, neither of the included pg_dumpall tools appear to honour the -o or
 --oids options and fail to dump the table oids from the old database as we
 require.

 I've tried various combinations and orders of options including those
 below:-
  pg_dumpall -o -p 5432 -U postgres > dump_o.sql
  pg_dumpall --oids -p 5432 -U postgres > dump_o.sql
  pg_dumpall -p 5432 -U postgres -o > dump_o.sql
  pg_dumpall -p 5432 -U postgres --oids > dump_o.sql

 I can confirm that pg_dumpall's -o option doesn't appear to work, but
 pg_dump's -o does.  This looks like a bug to me, especially since it
 also doesn't produce the WITH OIDS option on the CREATE TABLE
 statement for tables which do have OIDs.

Actually I am mistaken here.  I was looking at the same table in a
different database in the dump.  The -o option works fine in
pg_dumpall.

-- 
Thom



Re: [GENERAL] Having trouble with pg_dumpall -o

2012-05-01 Thread Thom Brown
On 1 May 2012 12:37, Matthew Churcher matthew.churc...@realvnc.com wrote:
 OK, I think I've worked out what's going on. I've got my wires crossed
 between table column OIDS (deprecated) and the OID which uniquely identifies
 each table (?always enabled?).

 We're not using OID for each column, only to reference the tables themselves
 as that's how triggers are referring to them.

 It appears the -o flag is for migrating table column oids which we're not
 using.

 So... any ideas how we can migrate the OID of the table itself? Or are we
 doing something we shouldn't?

You shouldn't be relying on OIDs as they're not really for the
end-user.  Instead you should refer to objects by name.  How are you
using them in triggers?
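If the triggers only need a stable handle on a table, one option is to resolve the OID from the name at run time rather than storing the OID itself, since OIDs change across dump/restore. A sketch ('myschema.mytable' is a placeholder):

```sql
-- Look up a table's current OID from its qualified name;
-- the regclass cast does the catalog lookup for you.
SELECT 'myschema.mytable'::regclass::oid;
```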
-- 
Thom



Re: [GENERAL] how robust are custom dumps?

2012-04-24 Thread Thom Brown
On 24 April 2012 16:17, Willy-Bas Loos willy...@gmail.com wrote:
 Hi,

 Some 6 years ago, i had a bad experience with a custom dump. It wouldn't
 restore and my data was lost.

What was the experience?  Is it possible you had specified a
compression level without the format set to custom?  That would result
in a plain text output within a gzip file, which would then error out
if you tried to restore it with pg_restore, but would be perfectly
valid if you passed the uncompressed output directly into psql.

How many times had you experienced the problem at that point?  Was it
repeated or just the one time?

-- 
Thom



Re: [GENERAL] Chaining inserts ... This would be cool

2012-04-23 Thread Thom Brown
On 23 April 2012 21:49, Nick Apperson apper...@gmail.com wrote:
 There are obviously workarounds for this, but I'm wondering why the
 following query shouldn't work. It seems like it should. With MVCC already
 present on the back-end, I can't see any reason other than additional
 parsing routines that this couldn't work:

 INSERT INTO old_login_id_to_new_account_id(new_account_id, old_login_id)
 INSERT INTO accounts(id, username, password_hash, email) SELECT DEFAULT,
 username, password_hash, email FROM logins_old RETURNING id, logins_old.id;

 Anyway, I'm sure there are more important features for Postgres (like
 upserts, unique indexes on GIN, Gist and hash, fixed arrays, compact storage
 of enum arrays as bitfields, etc.) I just thought it was an interesting
 idea.

You should be able to use writeable common table expressions to
achieve a linking behaviour.

http://www.postgresql.org/docs/9.1/static/queries-with.html
http://www.depesz.com/index.php/2011/03/16/waiting-for-9-1-writable-cte/
http://thombrown.blogspot.de/2011/11/writeable-common-table-expressions.html

But I'm not sure the query you posted makes any sense.  Why would a
SELECT statement have a RETURNING clause?  And where do the values for
the first INSERT come from?
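For illustration, a minimal sketch of the writeable-CTE approach (9.1+), reusing the table names from the original query; it assumes username uniquely identifies a row in logins_old, which may not hold in practice:

```sql
-- Insert the migrated accounts and record the old-to-new id mapping
-- in a single statement via a writeable CTE.
WITH new_accounts AS (
    INSERT INTO accounts (username, password_hash, email)
    SELECT username, password_hash, email
    FROM logins_old
    RETURNING id, username
)
INSERT INTO old_login_id_to_new_account_id (new_account_id, old_login_id)
SELECT n.id, l.id
FROM new_accounts n
JOIN logins_old l ON l.username = n.username;
```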
-- 
Thom



[GENERAL] Unaccent characters

2012-04-20 Thread Thom Brown
Hi,

I had a look at the unaccent.rules file and noticed the following
characters aren't properly converted:

ß (U+00DF): An eszett represents a double s ("SS"), but this replaces it
with a single "S".  Shouldn't it be replaced with "SS"?

Æ (U+00C6) and æ (U+00E6): These don't have an accent, diacritic or
anything added to a single Latin character; each is simply a ligature of
"A" and "E" or "a" and "e".  If someone has the text "æther", I would
imagine they'd be surprised at it being converted to "ather" instead
of "aether".

Œ (U+0152) and œ (U+0153): Same as above.  This is a ligature of "O"
and "E" or "o" and "e", except this time the unaccent module chooses
the second character instead of the first, which is confusing.

If these were properly converted it would change the length of the
text, so I'm wondering if that's the reason for not properly
converting them.  Could someone elaborate?
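For anyone wanting to reproduce the behaviour described above (this assumes the unaccent extension is installed in the database):

```sql
-- Requires: CREATE EXTENSION unaccent;
SELECT unaccent('ß'), unaccent('æther'), unaccent('œuvre');
-- Per the rules file as described above, these come back as
-- 'S', 'ather' and 'euvre' respectively.
```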

-- 
Thom



Re: [GENERAL] oracle linux

2012-03-28 Thread Thom Brown
On 28 March 2012 16:30, Tom Lane t...@sss.pgh.pa.us wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 On 28 March 2012, 16:38, Gregg Jaskiewicz wrote:
 They seem to claim up to 70% speed gain.
 Did anyone proved it, tested it - with PostgreSQL in particular ?

 I really don't expect such a difference just from switching to a different
 kernel. There's room for an infinite number of tweaks there (using
 different default fs parameters, adding better support for the new Niagara
 T4 CPU not yet available to RedHat, etc.).

 AFAIK, Oracle Linux is still just rebranded RHEL, with some very minimal
 amount of additional engineering effort put in.  It's not likely that
 they are so much smarter than everybody else who works on Linux that
 they can find huge across-the-board speedups that nobody else has found.

Reminds me of when Oracle claimed a 70x speed increase in MySQL
cluster (Delivers up to 70x More Performance for Complex Queries),
and the ability to process a billion queries per minute.  Upon closer
inspection, the tables used in the billion-queries benchmark were all
in-memory tables with no joins and distributed across 8 servers.  And
the increases over the previous version weren't fair either because
the tests were using different hardware *and* one of them was
virtualised.  They also didn't appear to want to disclose any further
details of the hardware differences.

So basically setting up unrealistic scenarios to get the highest
hype-making numbers, and ensuring the important context of those
numbers is in the footnotes somewhere.

-- 
Thom



Re: [GENERAL] language name case sensitivity, also known as plpgsql 'PLpgSQL'

2012-03-26 Thread Thom Brown
On 26 March 2012 16:30, Gregg Jaskiewicz gryz...@gmail.com wrote:
 Folks,
 I'm testing some code on 9.2dev (trunk), and I've noticed that
 postgresql seems to be fussy about language case when creating a
 function.
 So for instance:
 create function foo() returns int AS $$ BEGIN return 1; END; $$
 LANGUAGE 'PLpgSQL';

 Will be fine on 8.3 (my current version used in product), but not so
 fine when using 9.2dev.

 I think this is obviously a regression. What do you say?

Probably something to do with this:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=67dc4eed42186ba6a2456578899bfd38d003201a

-- 
Thom



Re: [GENERAL] broken xlog - recovery plan check

2012-03-24 Thread Thom Brown
On 24 March 2012 00:45, Colin Taylor colin.tay...@gmail.com wrote:
 Hi I seem to have an 8.3.9 database with a broken xlog,

 PANIC:  heap_insert_redo: invalid max offset number

 My plan is to run pg_resetxlog.
 Hopefully it then starts up.
 Test recent data as thoroughly as possible (script some SELECT *'s?)
 If ok - curse ops and their raid caches
 If not - curse ops and tell them to recover from backup (v. large and
 therefore very slow process).

 Can anyone give me feedback on this plan?

Yes, it's almost certainly corrupted.  How old is the backup?  I ask
this because if you use pg_resetxlog, it would be a good idea to dump
and restore the database once you get it up and running anyway.  This
is because you can't trust that your database will be consistent.  I
guess technically it *might* be fine, but you wouldn't know this
unless you went through verifying all your data made sense from a
referential integrity perspective.  So it will be a trade-off between
one of:

- restore from an existing backup, losing the data since you last backed up
- doing a dump/restore after resetting xlog to ensure your database is
consistent
- running full checks once you've got your database up and running (or
ignore it and possibly find weird problems later)

Also, PostgreSQL 8.3.9 is over 2 years out of date.  I'd recommend
bringing it up to 8.3.18 to take advantage of the hundreds of bug
fixes that have since gone in.

-- 
Thom



[GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
Hi all,

After building Postgres and trying an initdb, I'm getting the following:


thom@swift:~/Development$ initdb
The files belonging to this database system will be owned by user thom.
This user must also own the server process.

The database cluster will be initialized with locale en_GB.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to english.

fixing permissions on existing directory /home/thom/Development/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 400kB
creating configuration files ... ok
creating template1 database in /home/thom/Development/data/base/1 ...
FATAL:  could not remove old lock file postmaster.pid: No such file
or directory
HINT:  The file seems accidentally left over, but it could not be
removed. Please remove the file by hand and try again.
child process exited with exit code 1
initdb: removing contents of data directory /home/thom/Development/data


It can't remove an old lock file due to it not existing, but the hint
says it was left over but couldn't be removed.  The hint contradicts
the error message.  There is nothing in the data directory at all
before trying this, and nothing after.  Repeating initdb yields the
same result.

But, if I rename the data directory to something else and mkdir data
again, all is well.  I can make it break again by removing the new
data directory and renaming the old one back to data, still completely
empty.  Note that throughout all of this, Postgres is running, but as
a separate user and using completely separate directories, since it's
the standard packaged version on Debian.

Can anyone suggest what is wrong here?

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:02, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 thom@swift:~/Development$ initdb
 The files belonging to this database system will be owned by user thom.
 This user must also own the server process.

 The database cluster will be initialized with locale en_GB.UTF-8.
 The default database encoding has accordingly been set to UTF8.
 The default text search configuration will be set to english.

 fixing permissions on existing directory /home/thom/Development/data ... ok
 creating subdirectories ... ok
 selecting default max_connections ... 10
 selecting default shared_buffers ... 400kB
 creating configuration files ... ok
 creating template1 database in /home/thom/Development/data/base/1 ...
 FATAL:  could not remove old lock file postmaster.pid: No such file
 or directory
 HINT:  The file seems accidentally left over, but it could not be
 removed. Please remove the file by hand and try again.
 child process exited with exit code 1
 initdb: removing contents of data directory /home/thom/Development/data

 Um ... I assume this is some patched version rather than pristine
 sources?  It's pretty hard to explain why it's falling over like that.

No, I did a git stash, git clean -f and git pull before trying to build.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:04, Adrian Klaver adrian.kla...@gmail.com wrote:
 The postmaster.pid is located outside the data directory, but points back
 to the data directory.  Not sure where on Debian, though at a guess somewhere
 in /var.  Anyway, search for postmaster.pid.

I'm not sure, because if I use a new data directory, initdb it and
start the service, the postmaster.pid appears in it, and not as a
symbolic link.

I did a search for postmaster.pid in the whole of /var and it only
shows up /var/lib/postgresql/9.1/main/postmaster.pid

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:11, Thom Brown t...@linux.com wrote:
 On 6 March 2012 16:04, Adrian Klaver adrian.kla...@gmail.com wrote:
 The postmaster.pid is located outside the data directory, but points back to 
 the
 data directory.   Not sure where Debian, though at a guess somewhere in /var.
 Any way search for postmaster.pid.

 I'm not sure, because if I use a new data directory, initdb it and
 start the service, the postmaster.pid appears in it, and not as a
 symbolic link.

 I did a search for postmaster.pid in the whole of /var and it only
 shows up /var/lib/postgresql/9.1/main/postmaster.pid

Correction, this is Ubuntu, not Debian.  11.10 if it's of any consequence.

The file system is ext4 with
rw,noatime,nodiratime,errors=remount-ro,commit=0 on a Crucial m4 SSD.

ecryptfs is in use in the parent directory.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:18, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Tuesday, March 06, 2012 8:11:20 am Thom Brown wrote:
 On 6 March 2012 16:04, Adrian Klaver adrian.kla...@gmail.com wrote:
  The postmaster.pid is located outside the data directory, but points back
  to the data directory.   Not sure where Debian, though at a guess
  somewhere in /var. Any way search for postmaster.pid.

 I'm not sure, because if I use a new data directory, initdb it and
 start the service, the postmaster.pid appears in it, and not as a
 symbolic link.

 I did a search for postmaster.pid in the whole of /var and it only
 shows up /var/lib/postgresql/9.1/main/postmaster.pid


 My guess is if you open that file you will find it points back to the old
 directory.  So are you  still running the Debian packaged version of Postgres?
 Or in other words does a ps show any other postmasters running other than the
 new one you built?

No, only the ones running as the postgres user.

Here's the contents of the pid file in /var/lib/postgresql/9.1/main/

1199
/var/lib/postgresql/9.1/main
1330883367
5432
/var/run/postgresql
localhost
  5432001 0

And if I start my development copy, this is the content of its postmaster.pid:

27061
/home/thom/Development/data
1331050950
5488
/tmp
localhost
  5488001 191365126

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:31, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 6 March 2012 16:02, Tom Lane t...@sss.pgh.pa.us wrote:
 Um ... I assume this is some patched version rather than pristine
 sources?  It's pretty hard to explain why it's falling over like that.

 No, I did a git stash, git clean -f and git pull before trying to 
 build.

 [ scratches head... ]  I can't reproduce it with current git tip.

And I don't think I can reproduce this if I remove that directory.
I've seen this issue about 3 or 4 times in the past, and fixed it by
ditching the old data dir completely.  I'm just not sure what causes
this to happen.

Looking back through my terminal log, one thing might lend a clue from
before I tried rebuilding it:

thom@swift:~/Development$ pg_ctl stop
waiting for server to shut downcd .postgre.s
.




^C
thom@swift:~/Development$ pg_ctl stop
pg_ctl: could not send stop signal (PID: 2807): No such process
thom@swift:~/Development$ ps -ef | grep postgres
postgres  1199 1  0 Mar04 ?00:00:01
/usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main
-c config_file=/etc/postgresql/9.1/main/postgresql.conf
postgres  1273  1199  0 Mar04 ?00:00:18 postgres: writer
process
postgres  1274  1199  0 Mar04 ?00:00:14 postgres: wal writer
process
postgres  1275  1199  0 Mar04 ?00:00:03 postgres: autovacuum
launcher process
postgres  1276  1199  0 Mar04 ?00:00:02 postgres: stats
collector process
thom 16476  4302  0 15:30 pts/100:00:00 grep --color=auto postgres


Postgres wouldn't shut down.  I had no other terminal windows using
psql and no other database client apps open, yet it just hung shutting down,
so I CTRL+C'd it and tried again.  A quick check of running processes
showed that it had stopped running. (it shows postgres running above,
but the dev copy runs as my user, not postgres)

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 16:40, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Tuesday, March 06, 2012 8:24:20 am Thom Brown wrote:


 No, only the ones running as the postgres user.

 In my original read, I missed the part you had the Ubuntu/Debian packaged
 version running.


 Here's the contents of the pid file in /var/lib/postgresql/9.1/main/

 1199
 /var/lib/postgresql/9.1/main
 1330883367
 5432
 /var/run/postgresql
 localhost
   5432001         0

 And if I start my development copy, this is the content of its
 postmaster.pid:

 27061
 /home/thom/Development/data
 1331050950
 5488
 /tmp
 localhost
   5488001 191365126

 So how are getting the file above? I thought initdb refused to init the 
 directory
 and that you could not find pid file it was referring to? Just on a hunch, 
 what is
 in /tmp?

I got the above output when I created a new data directory and initdb'd it.

/tmp shows:

 4 -rw---  1 thomthom   55 2012-03-06 16:22
.s.PGSQL.5488.lock
 0 srwxrwxrwx  1 thomthom0 2012-03-06 16:22 .s.PGSQL.5488

That's while it's up and running; these disappear afterwards.  When using
the old data directory again, there's no evidence of anything like
this in /tmp.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 17:00, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Tuesday, March 06, 2012 8:44:10 am Thom Brown wrote:

  And if I start my development copy, this is the content of its
  postmaster.pid:
 
  27061
  /home/thom/Development/data
  1331050950
  5488
  /tmp
  localhost
    5488001 191365126
 
  So how are getting the file above? I thought initdb refused to init the
  directory and that you could not find pid file it was referring to? Just
  on a hunch, what is in /tmp?

 I got the above output when I created a new data directory and initdb'd it.

 Still not understanding. In your original post you said
 /home/thom/Development/data was the original directory you could not initdb. 
 How
 could it also be the new directory you can initdb as indicated by the
 postmaster.pid?

/home/thom/Development/data was causing problems so:

mv data databroken
mkdir data
initdb

... working fine again.  The postmaster.pid contents I posted came from this
copy once started up.  But if I do:

pg_ctl stop
rm -rf data
mv databroken data
initdb

... error messages appear again.

 From your previous post:
  thom@swift:~/Development$ pg_ctl stop
  pg_ctl: could not send stop signal (PID: 2807): No such process

 Doing the above without qualifying which version of pg_ctl you are using or 
 what
 data directory you are pointing is dangerous.  The combination of  implied
 pathing and preset env variables could lead to all sorts of mischief.

Unlikely since pg_ctl isn't available in my search path once I remove
my local development bin dir from it.  None of the non-client tools for the
packaged version are available to normal users.  Those are all in
/usr/lib/postgresql/9.1/bin.  The only ones exposed to the search path
through symbolic links are:

clusterdb
createdb
createlang
createuser
dropdb
droplang
dropuser
pg_dump
pg_dumpall
pg_restore
psql
reindexdb
vacuumdb
vacuumlo

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 17:16, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 Looking back through my terminal log, one thing might lend a clue from
 before I tried rebuilding it:

 thom@swift:~/Development$ pg_ctl stop
 waiting for server to shut downcd .postgre.s
 .
 



 ^C
 thom@swift:~/Development$ pg_ctl stop
 pg_ctl: could not send stop signal (PID: 2807): No such process
 thom@swift:~/Development$ ps -ef | grep postgres
 postgres  1199     1  0 Mar04 ?        00:00:01
 /usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main
 -c config_file=/etc/postgresql/9.1/main/postgresql.conf
 postgres  1273  1199  0 Mar04 ?        00:00:18 postgres: writer
 process
 postgres  1274  1199  0 Mar04 ?        00:00:14 postgres: wal writer
 process
 postgres  1275  1199  0 Mar04 ?        00:00:03 postgres: autovacuum
 launcher process
 postgres  1276  1199  0 Mar04 ?        00:00:02 postgres: stats
 collector process
 thom     16476  4302  0 15:30 pts/1    00:00:00 grep --color=auto postgres

 Hm.  It looks like pg_ctl found a PID file pointing to a non-existent
 process, which is a bit like what you're seeing initdb do.

 I wonder whether this is somehow caused by conflicting settings for
 PGDATA.  Do you have a setting for that in your environment, or .bashrc
 or someplace, that is different from what you're trying to use?

These are in my env output:

PATH=/home/thom/Development/psql/bin/:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
PGDATA=/home/thom/Development/data/
PGPORT=5488

This appears in my build script before configure:

export PGDATA=$HOME/Development/data/
export PATH=$HOME/Development/psql/bin/:$PATH
export PGPORT=5488

And those 3 lines also appear in my .bashrc file without any variation:

export PGDATA=$HOME/Development/data/
export PATH=$HOME/Development/psql/bin/:$PATH
export PGPORT=5488

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 17:45, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Tuesday, March 06, 2012 9:25:17 am Thom Brown wrote:


 These are in my env output:

 PATH=/home/thom/Development/psql/bin/:/usr/lib/lightdm/lightdm:/usr/local/s
 bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
 PGDATA=/home/thom/Development/data/
 PGPORT=5488

 This appears in my build script before configure:

 export PGDATA=$HOME/Development/data/
 export PATH=$HOME/Development/psql/bin/:$PATH
 export PGPORT=5488

 And those 3 lines also appear in my .bashrc file without any variation:

 export PGDATA=$HOME/Development/data/
 export PATH=$HOME/Development/psql/bin/:$PATH
 export PGPORT=5488

 And you are sure there is no pg_ctl or initdb outside
 /usr/lib/postgresql/9.1/bin or /home/thom/Development/psql/bin and in your 
 PATH?

 Just for grins what happens if you try an initdb using an explicit reference 
 to
 the binary /home/thom/Development/psql/bin/initdb and the -D
 /home/thom/Development/data/ ?

thom@swift:~/Development$ /home/thom/Development/psql/bin/initdb -E
'UTF8' -D /home/thom/Development/data/
The files belonging to this database system will be owned by user thom.
This user must also own the server process.

The database cluster will be initialized with locale en_GB.UTF-8.
The default text search configuration will be set to english.

fixing permissions on existing directory /home/thom/Development/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 400kB
creating configuration files ... ok
creating template1 database in /home/thom/Development/data/base/1 ...
FATAL:  could not remove old lock file postmaster.pid: No such file
or directory
HINT:  The file seems accidentally left over, but it could not be
removed. Please remove the file by hand and try again.
child process exited with exit code 1
initdb: removing contents of data directory /home/thom/Development/data

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 17:46, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 6 March 2012 16:31, Tom Lane t...@sss.pgh.pa.us wrote:
 [ scratches head... ]  I can't reproduce it with current git tip.

 And I don't think I can reproduce this if I remove that directory.
 I've seen this issue about 3 or 4 times in the past, and fixed it by
 ditching the old data dir completely.  I'm just not sure what causes
 this to happen.

 I'm a bit confused here.  Isn't the data directory totally empty before
 initdb starts?  It's supposed to refuse to proceed otherwise.

Yes, it is completely empty:

thom@swift:~/Development$ ls -la data
total 8
drwx--  2 thom thom 4096 2012-03-06 17:48 .
drwxrwxr-x 15 thom thom 4096 2012-03-06 17:46 ..

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 17:53, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 /home/thom/Development/data was causing problems so:

 mv data databroken
 mkdir data
 initdb

 ... working fine again.  I then used the postmaster.pid from this when
 started up.  But if I do:

 pg_ctl stop
 rm -rf data
 mv databroken data
 initdb

 ... error messages appear again.

 Okay, so the question becomes: what is different between databroken and
 a freshly mkdir'd empty directory?  If there is no visible difference in
 contents, ownership, or permissions, then it seems like this is evidence
 of a filesystem bug (ie, apparently empty directory acts nonempty for
 some operations).

You may well be right.  There appear to be dark forces at work here:

thom@swift:~/Development/data$ touch postmaster.pid
thom@swift:~/Development/data$ ls -l
total 0
thom@swift:~/Development/data$ touch file.txt
thom@swift:~/Development/data$ ls -l
total 8
-rw-rw-r-- 1 thom thom 0 2012-03-06 17:59 file.txt

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 18:01, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Tuesday, March 06, 2012 9:53:52 am Tom Lane wrote:
 Thom Brown t...@linux.com writes:
  /home/thom/Development/data was causing problems so:
 
  mv data databroken
  mkdir data
  initdb
 
  ... working fine again.  I then used the postmaster.pid from this when
  started up.  But if I do:
 
  pg_ctl stop
  rm -rf data
  mv databroken data
  initdb
 
  ... error messages appear again.

 Okay, so the question becomes: what is different between databroken and
 a freshly mkdir'd empty directory?  If there is no visible difference in
 contents, ownership, or permissions, then it seems like this is evidence
 of a filesystem bug (ie, apparently empty directory acts nonempty for
 some operations).

 A thought, what if you do rm -rf * in the data directory?

I've done that a couple of times, but with no effect.  I think Tom's point
about a filesystem bug is probably right.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 18:20, Tom Lane t...@sss.pgh.pa.us wrote:
 Bosco Rama postg...@boscorama.com writes:
 Thom Brown wrote:
 I've done that a couple times, but no effect.  I think Tom's point
 about a filesystem bug is probably right.

 Have you rebooted since this started?  There may be a process that is
 holding the pid file 'deleted but present' until the process terminates.

 Even if something is holding the file open, that wouldn't prevent unlink
 from removing the directory entry for it; or even if we were talking
 about a badly-designed filesystem that failed to follow standard Unix
 semantics, that wouldn't explain why the directory entry is apparently
 visible to some operations but not others.

 Still, I agree with your point: Thom should reboot and see if the
 misbehavior is still there, because that would be useful info for his
 bug report.

After a reboot, initdb completes successfully.  I don't think it
performed an fsck of any kind as I don't see it in the logs.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 18:51, dennis jenkins dennis.jenkins...@gmail.com wrote:
 On Tue, Mar 6, 2012 at 10:11 AM, Thom Brown t...@linux.com wrote:
 On 6 March 2012 16:04, Adrian Klaver adrian.kla...@gmail.com wrote:
 The postmaster.pid is located outside the data directory, but points back
 to the data directory.  Not sure where Debian puts it, though at a guess
 somewhere in /var.  Anyway, search for postmaster.pid.

 I'm not sure, because if I use a new data directory, initdb it and
 start the service, the postmaster.pid appears in it, and not as a
 symbolic link.

 I did a search for postmaster.pid in the whole of /var and it only
 shows up /var/lib/postgresql/9.1/main/postmaster.pid

 --
 Thom

 I know that I'm late to the party, but a small suggestion: Run
 initdb with strace (truss on Solaris) and examine the syscalls
 made.  It should show you, conclusively, what files are being
 opened, unlinked, etc...

 Example:

 strace -o /tmp/x initdb -D /tmp/data-1
 grep -E '^(open|unlink)' /tmp/x

The reboot removed the opportunity to do this unfortunately.  I'll
have to wait and see if it happens again, but if it does, I'll try the
suggestion.

-- 
Thom



Re: [GENERAL] Unhelpful initdb error message

2012-03-06 Thread Thom Brown
On 6 March 2012 19:28, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 6 March 2012 18:20, Tom Lane t...@sss.pgh.pa.us wrote:
 Still, I agree with your point: Thom should reboot and see if the
 misbehavior is still there, because that would be useful info for his
 bug report.

 After a reboot, initdb completes successfully.  I don't think it
 performed an fsck of any kind as I don't see it in the logs.

 Fascinating.  So maybe there is something to Bosco's theory of something
 holding open the old pidfile.  But what would that be?  The postmaster
 doesn't hold it open, just write it and close it.

No idea.  I did run an lsof while the problem was still present and
grep'd for the directory as I too suspected there may be some process
thinking it still had a reference to the file, but there were no
matches.

-- 
Thom



Re: [GENERAL] How can I get list of views that are using given column in table?

2012-02-20 Thread Thom Brown
On 20 February 2012 12:06, hubert depesz lubaczewski dep...@depesz.com wrote:
 hi
 I have situation, where I need to change datatype of column.
 But when I do:
 alter table xx alter column yy type zz;
 i get error:
 ERROR:  cannot alter type of a column used by a view or rule
 DETAIL:  rule _RETURN on view some_view depends on column yy

 how can I get a list of all such views (in a sqlish way, so I could make a
 query to return all needed objects that need to be dropped/recreated).

You could try this:

SELECT distinct dependee.relname
FROM pg_depend
JOIN pg_rewrite ON pg_depend.objid = pg_rewrite.oid
JOIN pg_class as dependee ON pg_rewrite.ev_class = dependee.oid
JOIN pg_class as dependent ON pg_depend.refobjid = dependent.oid
JOIN pg_attribute ON pg_depend.refobjid = pg_attribute.attrelid
AND pg_depend.refobjsubid = pg_attribute.attnum
WHERE dependent.relname = 'tablename'
AND pg_attribute.attnum > 0
AND pg_attribute.attname = 'columnname';

-- 
Thom



Re: [GENERAL] How can I get list of views that are using given column in table?

2012-02-20 Thread Thom Brown
On 20 February 2012 17:29, hubert depesz lubaczewski dep...@depesz.com wrote:
 On Mon, Feb 20, 2012 at 01:06:29PM +, Thom Brown wrote:
 You could try this:

 SELECT distinct dependee.relname
 FROM pg_depend
 JOIN pg_rewrite ON pg_depend.objid = pg_rewrite.oid
 JOIN pg_class as dependee ON pg_rewrite.ev_class = dependee.oid
 JOIN pg_class as dependent ON pg_depend.refobjid = dependent.oid
 JOIN pg_attribute ON pg_depend.refobjid = pg_attribute.attrelid
     AND pg_depend.refobjsubid = pg_attribute.attnum
 WHERE dependent.relname = 'tablename'
 AND pg_attribute.attnum > 0
 AND pg_attribute.attname = 'columnname';

 thanks. took me a while to understand it, so decided to make it a bit
 shorter, and change the join order to the order of data flow:

 SELECT
    distinct r.ev_class::regclass
 FROM
    pg_attribute    as a
    join pg_depend  as d on d.refobjid = a.attrelid AND d.refobjsubid = 
 a.attnum
    join pg_rewrite as r ON d.objid = r.oid
 WHERE
    pg_attribute.attrelid = 'table name'::regclass
    AND pg_attribute.attname = 'column name';

 but the logic in here is the same as in your query.

Yes, regclass will allow you to take a couple shortcuts and I'm not
sure why I didn't do that.  You'd need to correct your WHERE clause
though to use the 'a' alias you created.  I'd imagine that if you were
going to use such a query regularly, you'd need to add some extra
considerations into it to ensure you're not matching anything
incorrectly.  I only say this because I hadn't really put too much
thought into the query.  I don't know if it may inadvertently match
non-view objects.

Glad it helped in some way though.

-- 
Thom



[GENERAL] Regex match not back-referencing in function

2012-02-12 Thread Thom Brown
Hi,

Could someone explain the following behaviour?

SELECT regexp_replace(E'Hello & goodbye &',E'([&])','#' ||
ascii(E'\\1') || E';\\1');

This returns:

    regexp_replace
-----------------------
 Hello #92;& goodbye &
(1 row)

So it matched:

SELECT chr(92);
 chr
-----
 \
(1 row)

But notice that when I append the value it's supposed to have matched
to the end of the replacement value, it shows it should be '&'.

Just to confirm:

SELECT ascii('&');
 ascii
-------
    38
(1 row)

So I'd expect the output of the original statement to be:

    regexp_replace
-----------------------
 Hello #38;& goodbye &
(1 row)

What am I missing?

-- 
Thom



Re: [GENERAL] Regex match not back-referencing in function

2012-02-12 Thread Thom Brown
On 12 February 2012 18:49, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 What am I missing?

 I might be more confused than you, but I think you're supposing that
 the result of ascii(E'\\1') has something to do with the match that
 the surrounding regexp_replace function will find, later on when it
 gets executed.  The actual arguments seen by regexp_replace are

 regression=# select E'Hello & goodbye &',E'([&])','#' ||
 ascii(E'\\1') || E';\\1';
      ?column?      | ?column? | ?column?
 -------------------+----------+----------
  Hello & goodbye & | ([&])    | #92;\1
 (1 row)

 and given that, the result looks perfectly fine to me.

 If there's a bug here, it's that ascii() ignores additional bytes in its
 input instead of throwing an error for a string with more than one
 character.  But I believe we've discussed that in the past and decided
 not to change it.

Okay, in that case I made the wrong assumptions about order of resolution.

Thanks
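
For anyone puzzled by the same thing, the key point can be shown without
regexp_replace at all; a minimal sketch (this is ordinary SQL argument
evaluation, nothing regex-specific):

```sql
-- The replacement argument is computed once, before any matching happens,
-- so ascii(E'\\1') only ever sees the literal two-character string '\1'
-- and returns the code of its first character, the backslash:
SELECT ascii(E'\\1');                      -- 92

-- Hence the value regexp_replace actually receives as its replacement:
SELECT '#' || ascii(E'\\1') || E';\\1';    -- '#92;\1'
```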

-- 
Thom



Re: [GENERAL] can't find data type CIText or CI_Text

2012-01-16 Thread Thom Brown
On 16 January 2012 20:15, Heine Ferreira heine.ferre...@gmail.com wrote:
 Hi

 I was told by someone that in order to store text that isn't case sensitive
 in comparisons I must use CIText or CI_Text.
 I can't find the data type? Also is this  char or varchar data type? Can you
 create an index on this data type?

It's provided by a contrib module so not available by default.  In
PostgreSQL 9.1 you can just run the following in whichever database
you wish it to be made available in:

CREATE EXTENSION citext;

If you're on 9.0 or below, you will need to run a script, the location
of which varies depending on your platform.  See
http://www.postgresql.org/docs/9.0/static/contrib.html
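
A brief sketch of what the type gives you (table and values are invented
for illustration; creating the extension needs superuser rights and the
contrib package installed):

```sql
CREATE EXTENSION citext;

CREATE TABLE accounts (
    email citext PRIMARY KEY   -- the unique index compares case-insensitively too
);

INSERT INTO accounts VALUES ('Alice@Example.com');

-- Equality ignores case, so this finds the row inserted above:
SELECT * FROM accounts WHERE email = 'alice@example.COM';
```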

Regards

Thom



Re: [GENERAL] Is the a magic number for WAL files

2011-12-09 Thread Thom Brown
On 9 December 2011 18:46, Rob Sargent robjsarg...@gmail.com wrote:
 Along the same lines, what info is embedded in the file name? I see that
 the second non-zero recently went from 2 to 3.  Significance?


 0000000100000030000000CF
               ^
 --------------|

The WAL file name consists of a timeline, a segment set/segment block and
a segment.  Once the segment (the last 8 characters of the file name)
reaches 000000FE, the next file's segment wraps around to 00000000, but
characters 9-16 will increment their value to reflect this wraparound.
So it's no more significant than 1 added to 99 resulting in it
becoming 00 with a 1 before it.
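
As a sketch of that layout (the file name below is illustrative), the 24
hex digits split into three 8-digit fields:

```sql
SELECT substring(fn from 1 for 8)  AS timeline,
       substring(fn from 9 for 8)  AS segment_set,
       substring(fn from 17 for 8) AS segment
FROM (VALUES ('0000000100000030000000CF')) AS t(fn);
-- timeline = 00000001, segment_set = 00000030, segment = 000000CF
```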

-- 
Thom



Re: [GENERAL] .dmp files in postgresql

2011-11-25 Thread Thom Brown
On 25 November 2011 20:04, Alpha Beta dzjit...@gmail.com wrote:
 Hi list,
 I have got two files (template1.dmp, example1.dmp) and I want to open them
 in postgresql, I tried the following commands:
 - Import template1 :
     * createdb -U postgres template1
     * psql -U postgres template1  template1.dmp
 - Create the sample of the database:
     * createdb -U postgres example1
     * psql -U postgres example1  example1.dmp
 I've got no error reported by postgres but nothing happened either.
 How should I proceed exactly? Sorry I'm a newbie with PostgreSQL :)

I'm assuming this is a custom-format file, which is a binary file.
You need to use pg_restore to restore the file:

For example:

pg_restore -d example1 example1.dmp

This is because the file is not in plain format.  Read the
documentation for more information on the output of pg_dump:
http://www.postgresql.org/docs/current/static/app-pgdump.html

Regards

Thom



Re: [GENERAL] .dmp files in postgresql

2011-11-25 Thread Thom Brown
On 25 November 2011 20:31, Alpha Beta dzjit...@gmail.com wrote:
 While you say, I opened the file with bloc note and I noticed that it's not
 a binary file but plain with SQL commands and so on.
 I tried what you said also but didn't work.
 Any suggestion? or maybe the commands I'm using doesn't find the path for
 the 2 files?

If it couldn't find the file, you'd receive an error message.

How do you know nothing happened?  Did you check the database you
loaded the file into for the objects it's supposed to create?  From
looking at the files, what do they create/insert etc?

So when you loaded the example1.dmp file into the example1 database,
did you then connect to that database and look for restored items?
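
A quick way to check, sketched here for a psql session connected to
example1 (the catalog query simply lists ordinary tables in the public
schema):

```sql
\dt

SELECT relname
FROM pg_class
WHERE relkind = 'r'
  AND relnamespace = (SELECT oid FROM pg_namespace
                      WHERE nspname = 'public');
```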

Regards

Thom



Re: [GENERAL] Exporting 1 text column from all rows from a table to a file per row?

2011-11-21 Thread Thom Brown
On 21 November 2011 19:10, Joost Kraaijeveld j.kraaijev...@askesis.nl wrote:
 Hi,

 Is it possible, and if so how, to export a single column of a table into
 a separate file per row? I have a table with ~21000 rows that have a
 column body1 containing ASCII text and I want to have 21000 separate
 ASCII files, each containing that column body1. The name of the file
 does not matter, although it would be nice if they had the extension
 txt.

Does the data contain newlines?  If not, you can just export it to a
single file then use:

split -l 1 exportedfile.txt
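
A hedged end-to-end sketch; table, column and database names are
placeholders, and the psql export step is shown as a comment because it
assumes a running server:

```shell
# Assumed export step, one row per line (tuples-only, unaligned output):
#   psql -At -c "SELECT body1 FROM mytable" mydb > exported.txt
printf 'first body\nsecond body\nthird body\n' > exported.txt  # stand-in data

# One output file per input line, named body_aa, body_ab, body_ac, ...
split -l 1 exported.txt body_

# Add the desired .txt extension to each piece
for f in body_*; do mv "$f" "$f.txt"; done

ls body_*.txt
```

Depending on your split implementation, ~21000 rows may exhaust the
default two-letter suffixes, so you may need a wider suffix (e.g. -a 3).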

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] VACUUM touching file but not updating relation

2011-11-18 Thread Thom Brown
On 12 November 2011 00:08, Thom Brown t...@linux.com wrote:
 On 11 November 2011 23:28, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 11 November 2011 00:55, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 I just noticed that the VACUUM process touches a lot of relations
 (affects mtime) but for one file I looked at, it didn't change.  This
 doesn't always happen, and many relations aren't touched at all.

 No immediate ideas as to why the mtime would change if the file
 contents didn't.  It seems like there must be a code path that marked
 a buffer dirty without having changed it, but we're usually pretty
 careful about that.

 I checked all files where the time stamp of the file had changed, but
 had the same MD5 sum.  I used the list in the query you mentioned and
 get: [ mostly indexes ]

 Hmm, is this on a hot standby master?

 It's using a wal_level of hot_standby and has max_wal_senders set to
 2, but it's not actually replicating to anywhere else.  But if I
 comment out both of these, restart, then compare pre-vacuum and
 post-vacuum, I get the following results for unchanged but touched
 items:

 test=# select oid,relname from pg_class where relfilenode in
 (11680,11682,11684,11686,11690,16530);
  oid  |       relname
 -------+---------------------
  2619 | pg_statistic
  2840 | pg_toast_2619
  2841 | pg_toast_2619_index
  16530 | cows2
 (4 rows)

 The items which didn't match a result in this instance were 11686 and
 11690, which is surprising since they both have a visibility map and
 free space map, indicating they're some kind of table.

 I observe that _bt_delitems_vacuum() unconditionally dirties the page
 and writes a WAL record, whether it has anything to do or not; and that
 if XLogStandbyInfoActive() then btvacuumscan will indeed call it despite
 there being (probably) nothing useful to do.  Seems like that could be
 improved.  The comment explaining why it's necessary to do that doesn't
 make any sense to me, either.

 Well the effect, in the single instances I've checked, is certainly
 more pronounced for hot_standby, but there still appear to be some
 occurrences for minimal wal_level too.

So would you say this is acceptable and normal activity, or is
something awry here?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] VACUUM touching file but not updating relation

2011-11-11 Thread Thom Brown
On 11 November 2011 23:28, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 11 November 2011 00:55, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 I just noticed that the VACUUM process touches a lot of relations
 (affects mtime) but for one file I looked at, it didn't change.  This
 doesn't always happen, and many relations aren't touched at all.

 No immediate ideas as to why the mtime would change if the file
 contents didn't.  It seems like there must be a code path that marked
 a buffer dirty without having changed it, but we're usually pretty
 careful about that.

 I checked all files where the time stamp of the file had changed, but
 had the same MD5 sum.  I used the list in the query you mentioned and
 get: [ mostly indexes ]

 Hmm, is this on a hot standby master?

It's using a wal_level of hot_standby and has max_wal_senders set to
2, but it's not actually replicating to anywhere else.  But if I
comment out both of these, restart, then compare pre-vacuum and
post-vacuum, I get the following results for unchanged but touched
items:

test=# select oid,relname from pg_class where relfilenode in
(11680,11682,11684,11686,11690,16530);
  oid  |   relname
-------+---------------------
  2619 | pg_statistic
  2840 | pg_toast_2619
  2841 | pg_toast_2619_index
 16530 | cows2
(4 rows)

The items which didn't match a result in this instance were 11686 and
11690, which is surprising since they both have a visibility map and
free space map, indicating they're some kind of table.

 I observe that _bt_delitems_vacuum() unconditionally dirties the page
 and writes a WAL record, whether it has anything to do or not; and that
 if XLogStandbyInfoActive() then btvacuumscan will indeed call it despite
 there being (probably) nothing useful to do.  Seems like that could be
 improved.  The comment explaining why it's necessary to do that doesn't
 make any sense to me, either.

Well the effect, in the single instances I've checked, is certainly
more pronounced for hot_standby, but there still appear to be some
occurrences for minimal wal_level too.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] VACUUM touching file but not updating relation

2011-11-10 Thread Thom Brown
On 14 October 2011 12:12, Thom Brown t...@linux.com wrote:
 Hi,

 I just noticed that the VACUUM process touches a lot of relations
 (affects mtime) but for one file I looked at, it didn't change.  This
 doesn't always happen, and many relations aren't touched at all.

 I had the following relation:

 -rw---  1 thom  staff      40960 13 Oct 16:06 11946

 Ran MD5 over the file:

 MD5 (11946) = d6626f930f1fb6d77c3907d3279fe693

 Then VACUUM ANALYSE'd all databases in full.

 This relation was supposedly affected:

 -rw---  1 thom  staff      40960 14 Oct 11:27 11946

 But then I ran MD5 back over it:

 MD5 (11946) = d6626f930f1fb6d77c3907d3279fe693

 This is the same as before.  What is it doing?  Does this happen
 often?  And I can't find out what this particular OID relates to
 either.

 I'm using 9.2devel btw.

Does anyone know what happened here?  I'm just wondering if there's
some action being performed on the file which can be avoided.  Of
course I haven't determined how often this happens.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] VACUUM touching file but not updating relation

2011-11-10 Thread Thom Brown
On 11 November 2011 00:55, Tom Lane t...@sss.pgh.pa.us wrote:
 Thom Brown t...@linux.com writes:
 On 14 October 2011 12:12, Thom Brown t...@linux.com wrote:
 I just noticed that the VACUUM process touches a lot of relations
 (affects mtime) but for one file I looked at, it didn't change.  This
 doesn't always happen, and many relations aren't touched at all.

 No immediate ideas as to why the mtime would change if the file
 contents didn't.  It seems like there must be a code path that marked
 a buffer dirty without having changed it, but we're usually pretty
 careful about that.

 And I can't find out what this particular OID relates to
 either.

 Well, the generic method is

 regression=# select oid,relname from pg_class where relfilenode = 11946;
  oid  |    relname
 -------+----------------
  11563 | pg_toast_11561
 (1 row)

 This is a toast relation, so ...

 regression=# select oid,relname from pg_class where reltoastrelid = 11563;
  oid  |   relname
 -------+--------------
  11561 | sql_packages
 (1 row)

 So in my git-tip database, that relfilenode is
 information_schema.sql_packages' TOAST table.  However, such OID
 assignments aren't terribly stable in development tip, and it was almost
 certainly something different a month ago (especially since
 sql_packages' TOAST table is generally empty, and your file is not).
 So you'll need to check this for yourself to see what it was, assuming
 you still have that database around.  It's a safe bet that it was a
 system catalog or index or toast table belonging thereto, though, just
 based on the range of OIDs it's in.

No, I don't still have the database, but tried the same thing on a
pre-existing database and found a few files exhibiting the same
change.

I checked all files where the time stamp of the file had changed, but
had the same MD5 sum.  I used the list in the query you mentioned and
get:

test2=# select oid,relname from pg_class where relfilenode in
(11682,11692,11707,11708,11725,11726,11727,11728,11740,11743,11744,11751,11752,11757,11761,11764,11765,11771,11776,11777,11778,11795,11816,11817,11854,11855,11858,11861,11862,11865,11866,11869,11870,11873,11874,11901,11902);

 oid  | relname
------+---------------------------------
 2664 | pg_constraint_conname_nsp_index
 2651 | pg_am_name_index
 2652 | pg_am_oid_index
 2756 | pg_amop_oid_index
 2757 | pg_amproc_oid_index
 2650 | pg_aggregate_fnoid_index
 2839 | pg_toast_2618_index
 2660 | pg_cast_oid_index
 3085 | pg_collation_oid_index
 3164 | pg_collation_name_enc_nsp_index
 2689 | pg_operator_oprname_l_r_n_index
 2754 | pg_opfamily_am_name_nsp_index
 2755 | pg_opfamily_oid_index
 2681 | pg_language_name_index
 2682 | pg_language_oid_index
 2692 | pg_rewrite_oid_index
 2693 | pg_rewrite_rel_rulename_index
 2673 | pg_depend_depender_index
 2674 | pg_depend_reference_index
 3608 | pg_ts_config_cfgname_index
 3712 | pg_ts_config_oid_index
 3609 | pg_ts_config_map_index
 3604 | pg_ts_dict_dictname_index
 3605 | pg_ts_dict_oid_index
 3606 | pg_ts_parser_prsname_index
 3607 | pg_ts_parser_oid_index
 3766 | pg_ts_template_tmplname_index
 3767 | pg_ts_template_oid_index
 3080 | pg_extension_oid_index
 2840 | pg_toast_2619
 2665 | pg_constraint_conrelid_index
 2666 | pg_constraint_contypid_index
 2667 | pg_constraint_oid_index
 3081 | pg_extension_name_index
(34 rows)

An additional VACUUM shows up no such changes except for the case of a
visibility map, although I suspect that's expected to happen.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] IP range in pg_hba.conf?

2011-11-08 Thread Thom Brown
2011/11/8 Zhidong zhidong@gmail.com:
 Can you guys explain why it is 10.17.64.0/20? Thanks!

 11111111 = 255 in binary
 11110000 = 240 in binary

So a CIDR mask of 8 would cover the first 8 bits, a mask of 16 the
first 16 bits, and by the time we reach 20 we've covered 20 bits.

The first 255 covers the first 8 bits.  The next 255 covers bits 9-16.
Bits 17-20 bring it up to 240.  The rest are zeros.

0.0.0.0 = /0
255.0.0.0 = /8
255.255.0.0 = /16
255.255.255.0 = /24
255.255.255.255 = /32

And in between you get:

255.255.240.0 = /20

 255      255      240      0
11111111 11111111 11110000 00000000
First 20 binary digits are masked.
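
If you have a PostgreSQL session handy, the built-in network types will do
this arithmetic for you; a small sketch:

```sql
SELECT netmask('10.17.64.0/20'::cidr);   -- 255.255.240.0

-- Containment: is an address inside the /20?
SELECT '10.17.70.5'::inet << '10.17.64.0/20'::cidr AS inside,
       '10.17.80.1'::inet << '10.17.64.0/20'::cidr AS outside;
-- inside = t, outside = f (the /20 spans 10.17.64.0 - 10.17.79.255)
```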

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Thom Brown
On 4 November 2011 17:19, Sean Patronis spatro...@add123.com wrote:
 On 11/04/2011 10:59 AM, Thom Brown wrote:

 On 4 November 2011 16:50, Sean Patronis spatro...@add123.com wrote:

 I am running Postgres 9.1

 I have followed the howto here:
 http://wiki.postgresql.org/wiki/Streaming_Replication

 I am attempting to replicate an existing database.

 On the Master, I get the following error in the postgres log file:

 FATAL:  must be replication role to start walsender


 On the slave I get this:
 FATAL:  could not connect to the primary server: FATAL:  must be
 replication
 role to start walsender

 I have googled both of those log entries to no avail.

 note that the sender process on the master is not running.

 What simple step am I missing?

 What have you got primary_conninfo set to on the standby in
 recovery.conf?  Are you trying to use a regular user?  If so, you will
 have to grant it REPLICATION permissions on the primary, which was
 introduced in 9.1.



 The primary_conninfo in the recovery.conf is set to :
 primary_conninfo      = 'host=192.168.127.12 port=5432 user=postgres'

 So I should just have to grant the postgres user REPLICATION permissions,
 and be good?

Well the postgres user will be a superuser, so doesn't need to be
granted such a permission.

Have you got the necessary entry in pg_hba.conf as Raghavendra
highlighted?  It will need configuring to accept a connection from the
IP address of the standby server.
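
For reference, a hypothetical pg_hba.conf entry on the primary; the
address range, user and auth method all need adjusting to your setup:

```
# TYPE  DATABASE      USER      ADDRESS             METHOD
host    replication   postgres  192.168.127.0/24    md5
```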

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] Named column default expression

2011-10-28 Thread Thom Brown
On 28 October 2011 08:29, Thomas Kellerer spam_ea...@gmx.net wrote:
 Hello,

 I just noticed that Postgres allows the following syntax:

 create table foo
 (
    id integer constraint id_default_value default 42
 );

 But as far as I can tell the constraint id_default_value part seems to be
 only syntactical sugar as this is stored nowhere. At least I couldn't find
 it going through the catalog tables and neither pg_dump -s or pgAdmin are
 showing that name in the generated SQL source for the table.

 It's not important, I'm just curious why the syntax is accepted (I never saw
 a default value as a constraint) and if there is a way to retrieve that
 information once the table is created.

It would do something with it if you actually defined a constraint
after it, but since you didn't, it throws it away since there's
nothing to enforce.  So if you adjust it to:

create table foo
(
   id integer constraint id_default_value check (id > 4) default 42
);

a constraint for that column will be created with the specified name.
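
And to the original question of retrieving that information afterwards, a
sketch against the system catalogs (assuming the table with the named
CHECK constraint exists):

```sql
-- Named constraints are stored in pg_constraint once there is
-- something to enforce:
SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'foo'::regclass;
-- shows id_default_value and its CHECK definition
```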

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[GENERAL] VACUUM touching file but not updating relation

2011-10-14 Thread Thom Brown
Hi,

I just noticed that the VACUUM process touches a lot of relations
(affects mtime) but for one file I looked at, it didn't change.  This
doesn't always happen, and many relations aren't touched at all.

I had the following relation:

-rw---  1 thom  staff  40960 13 Oct 16:06 11946

Ran MD5 over the file:

MD5 (11946) = d6626f930f1fb6d77c3907d3279fe693

Then VACUUM ANALYSE'd all databases in full.

This relation was supposedly affected:

-rw---  1 thom  staff  40960 14 Oct 11:27 11946

But then I ran MD5 back over it:

MD5 (11946) = d6626f930f1fb6d77c3907d3279fe693

This is the same as before.  What is it doing?  Does this happen
often?  And I can't find out what this particular OID relates to
either.

I'm using 9.2devel btw.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-09 Thread Thom Brown
On 9 October 2011 04:35, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 21:13, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 19:47, Pavel Stehule pavel.steh...@gmail.com wrote:
 I did it. It is strange, so your times are significantly slower than I
 have. Have you enabled asserts?

 The table contains 15 million rows with column values randomly
 selected from the 1-350 range, with 60% within the 1-50 range, and
 asserts are enabled.


 Now I repeated tests on litlle bit wide table with 9 milion rows, but
 without success.

 Try to disable asserts. I am not sure, but maybe there significantlly
 change a speed.

 Okay, here you go.  Results with debug_assertions = false:

 Index-only scan: 173.389 ms (78.442 ms)
 Index scan: 184239.399 ms (previously 164882.666 ms)
 Bitmap scan: 159354.261 ms (previously 154107.415 ms)
 Sequential scan: 134552.263 ms (previously 121296.999 ms)

 So no particularly significant difference, except with the index-only
 scan (which I repeated 3 times and it's about the same each time).

 what is size of table?

 4884MB

 It has a sense - index only scan  it is faster (and significantly
 faster) on wider tables - or tables with strings where TOAST is not
 active. Maybe there is a some issue because on thin tables is slower
 (and I expect a should be faster everywhere).

No, that's my point, I re-tested it on a table with just 2 int
columns, and the results are roughly the same.  I added all the
columns to make it expensive to fetch the column being queried.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-09 Thread Thom Brown
On 9 October 2011 11:51, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/9 Thom Brown t...@linux.com:
 On 9 October 2011 04:35, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 21:13, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 19:47, Pavel Stehule pavel.steh...@gmail.com wrote:
 I did it. It is strange, so your times are significantly slower than I
 have. Have you enabled asserts?

 The table contains 15 million rows with column values randomly
 selected from the 1-350 range, with 60% within the 1-50 range, and
 asserts are enabled.


 Now I repeated tests on litlle bit wide table with 9 milion rows, but
 without success.

 Try to disable asserts. I am not sure, but maybe there significantlly
 change a speed.

 Okay, here you go.  Results with debug_assertions = false:

 Index-only scan: 173.389 ms (78.442 ms)
 Index scan: 184239.399 ms (previously 164882.666 ms)
 Bitmap scan: 159354.261 ms (previously 154107.415 ms)
 Sequential scan: 134552.263 ms (previously 121296.999 ms)

 So no particularly significant difference, except with the index-only
 scan (which I repeated 3 times and it's about the same each time).

 what is size of table?

 4884MB

 It has a sense - index only scan  it is faster (and significantly
 faster) on wider tables - or tables with strings where TOAST is not
 active. Maybe there is a some issue because on thin tables is slower
 (and I expect a should be faster everywhere).

 No, that's my point, I re-tested it on a table with just 2 int
 columns, and the results are roughly the same.  I added all the
 columns to make it expensive to fetch the  column being queried.

 then I don't understand

Well, here are some more reliable results, since each test has been run
3 times.  The row count in this sample is a tenth of the previous ones
(i.e. 1.5 million rows):

-- Narrow table (table with 2 int columns) --

Assertions = on

Index-only scan = 6.088, 5.885, 6.361
Index scan = 65.661, 63.441, 64.105
Bitmap scan = 68.448, 66.476, 64.626
Sequential scan = 244.129, 239.584, 242.680

Assertions = off

Index-only scan = 6.710, 6.709, 6.192
Index scan = 66.838, 67.534, 64.348
Bitmap scan = 75.662, 64.500, 69.080
Sequential scan = 232.065, 231.366, 231.547


-- Wide table (table as described in earlier post) --

Assertions = on

Index-only scan = 7.313, 7.299, 6.401
Index scan = 193.555, 186.564, 198.150
Bitmap scan = 199.082, 204.664, 207.902
Sequential scan = 643.765, 645.426, 621.150

Assertions = off

Index-only scan = 7.569, 6.477, 6.113
Index scan = 197.332, 197.286, 204.257
Bitmap scan = 207.838, 202.235, 203.322
Sequential scan = 653.739, 633.309, 619.081

Bitmap scans tend not to be faster than index scans in this scenario.
These numbers are produced by HEAD as of this morning.



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-09 Thread Thom Brown
On 9 October 2011 18:38, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/9 Pavel Stehule pavel.steh...@gmail.com:
 2011/10/9 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 2011/10/9 Thom Brown t...@linux.com:
 On 9 October 2011 04:35, Pavel Stehule pavel.steh...@gmail.com wrote:
 It has a sense - index only scan  it is faster (and significantly
 faster) on wider tables - or tables with strings where TOAST is not
 active. Maybe there is a some issue because on thin tables is slower
 (and I expect a should be faster everywhere).

 No, that's my point, I re-tested it on a table with just 2 int
 columns, and the results are roughly the same.  I added all the
 columns to make it expensive to fetch the  column being queried.

 then I don't understand

 Are you sure you've remembered to vacuum the test table?  I get results
 like yours (ie, no speed benefit for index-only scan) if the table
 doesn't have its visibility-map bits set.

 it should be - I didn't do VACUUM


 yes, After VACUUM I got a significantly better times - index only scan
 is about 5-6x better

Something that I was expecting the planner to do with this patch,
which it doesn't, is pull in the index for queries like:

SELECT count(*) from my_table;

or

SELECT sum(indexed_column) from my_table;

I don't see why a non-partial index can't fulfill these queries.  I
can only get index-only scans with WHERE conditions.
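For anyone wanting to reproduce this, a minimal sketch might look like
the following (table and column names are made up; whether the planner
actually chooses an index-only scan still depends on costing and on the
visibility map being set):

```sql
CREATE TABLE my_table (indexed_column int);
INSERT INTO my_table SELECT g FROM generate_series(1, 1000000) g;
CREATE INDEX idx_my_table ON my_table (indexed_column);
VACUUM ANALYZE my_table;  -- sets visibility-map bits, a prerequisite for index-only scans

-- No WHERE condition: at the time of this thread the planner would not
-- consider the index here
EXPLAIN SELECT count(*) FROM my_table;

-- With a condition on the indexed column, an index-only scan is possible
EXPLAIN SELECT count(*) FROM my_table WHERE indexed_column < 100;
```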



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-08 Thread Thom Brown
On 8 October 2011 18:53, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 2011/10/8 Tom Lane t...@sss.pgh.pa.us:
 hubert depesz lubaczewski dep...@depesz.com writes:
 it is selecting 20 rows out of 30 million. why is it:
 1. not using index only scan
 2. not using even normal index scan?

 It thinks the bitmap scan is cheaper.  Whether that's true or not is not
 very clear, but nobody is claiming that the costing of index-only scans
 is accurate yet.


 I did a few tests and bitmap scan is faster. Maybe there is a some
 issue. In very simple test (and very syntetic test)

 create table omega(a int);
 insert into omega select (random()*1)::int from generate_series(1,40);

 select count(*) from omega where a = 100;

 and index scan is faster than index only scan. There is lot of
 duplicates. When I used a bigger range, a speed of bitmap index, index
 only scan and index scan is similar - but index scan was faster
 everywhere.

Here, index-only scan is massively faster than any other scan:

test=# explain analyse select count(thing) from stuff where thing = 14;

QUERY PLAN

 Aggregate  (cost=99814.38..99814.39 rows=1 width=4) (actual
time=337.506..337.506 rows=1 loops=1)
   ->  Index Only Scan using idx_stuff_thing on stuff
(cost=0.00..99336.88 rows=191000 width=4) (actual
time=155.955..315.106 rows=196828 loops=1)
 Index Cond: (thing = 14)
 Total runtime: 337.639 ms
(4 rows)

test=# set enable_indexonlyscan to false;
SET
test=# explain analyse select count(thing) from stuff where thing = 14;

QUERY PLAN

 Aggregate  (cost=99814.38..99814.39 rows=1 width=4) (actual
time=164882.528..164882.528 rows=1 loops=1)
   ->  Index Scan using idx_stuff_thing on stuff  (cost=0.00..99336.88
rows=191000 width=4) (actual time=0.184..164494.806 rows=196828
loops=1)
 Index Cond: (thing = 14)
 Total runtime: 164882.666 ms
(4 rows)

test=# set enable_indexonlyscan to false;
SET
test=# set enable_indexscan to false;
SET
test=# explain analyse select count(thing) from stuff where thing = 14;

QUERY PLAN
---
 Aggregate  (cost=170553.91..170553.92 rows=1 width=4) (actual
time=154102.221..154102.222 rows=1 loops=1)
   ->  Bitmap Heap Scan on stuff  (cost=2004.91..170076.41 rows=191000
width=4) (actual time=482.974..153730.892 rows=196828 loops=1)
 Recheck Cond: (thing = 14)
         ->  Bitmap Index Scan on idx_stuff_thing  (cost=0.00..1957.16
rows=191000 width=0) (actual time=421.854..421.854 rows=196828
loops=1)
   Index Cond: (thing = 14)
 Total runtime: 154107.415 ms
(6 rows)

test=# set enable_indexonlyscan to false;
SET
test=# set enable_indexscan to false;
SET
test=# set enable_bitmapscan to false;
SET
test=# explain analyse select count(thing) from stuff where thing = 14;
QUERY PLAN
--
 Aggregate  (cost=812977.50..812977.51 rows=1 width=4) (actual
time=121296.897..121296.897 rows=1 loops=1)
   ->  Seq Scan on stuff  (cost=0.00..812500.00 rows=191000 width=4)
(actual time=67.105..121215.296 rows=196828 loops=1)
 Filter: (thing = 14)
 Rows Removed by Filter: 14803172
 Total runtime: 121296.999 ms
(5 rows)

Note: buffer cache cleared between queries.



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-08 Thread Thom Brown
On 8 October 2011 19:30, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 18:53, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 2011/10/8 Tom Lane t...@sss.pgh.pa.us:
 hubert depesz lubaczewski dep...@depesz.com writes:
 it is selecting 20 rows out of 30 million. why is it:
 1. not using index only scan
 2. not using even normal index scan?

 It thinks the bitmap scan is cheaper.  Whether that's true or not is not
 very clear, but nobody is claiming that the costing of index-only scans
 is accurate yet.


 I did a few tests and bitmap scan is faster. Maybe there is a some
 issue. In very simple test (and very syntetic test)

 create table omega(a int);
 insert into omega select (random()*1)::int from 
 generate_series(1,40);

 select count(*) from omega where a = 100;

 and index scan is faster than index only scan. There is lot of
 duplicates. When I used a bigger range, a speed of bitmap index, index
 only scan and index scan is similar - but index scan was faster
 everywhere.

 Here, index-only scan is massively faster than any other scan:

 test=# explain analyse select count(thing) from stuff where thing = 14;

 QUERY PLAN
 
  Aggregate  (cost=99814.38..99814.39 rows=1 width=4) (actual
 time=337.506..337.506 rows=1 loops=1)
   ->  Index Only Scan using idx_stuff_thing on stuff
 (cost=0.00..99336.88 rows=191000 width=4) (actual
 time=155.955..315.106 rows=196828 loops=1)
         Index Cond: (thing = 14)
  Total runtime: 337.639 ms
 (4 rows)

 test=# set enable_indexonlyscan to false;
 SET
 test=# explain analyse select count(thing) from stuff where thing = 14;

 QUERY PLAN
 
  Aggregate  (cost=99814.38..99814.39 rows=1 width=4) (actual
 time=164882.528..164882.528 rows=1 loops=1)
   ->  Index Scan using idx_stuff_thing on stuff  (cost=0.00..99336.88
 rows=191000 width=4) (actual time=0.184..164494.806 rows=196828
 loops=1)
         Index Cond: (thing = 14)
  Total runtime: 164882.666 ms
 (4 rows)

 test=# set enable_indexonlyscan to false;
 SET
 test=# set enable_indexscan to false;
 SET
 test=# explain analyse select count(thing) from stuff where thing = 14;

 QUERY PLAN
 ---
  Aggregate  (cost=170553.91..170553.92 rows=1 width=4) (actual
 time=154102.221..154102.222 rows=1 loops=1)
   ->  Bitmap Heap Scan on stuff  (cost=2004.91..170076.41 rows=191000
 width=4) (actual time=482.974..153730.892 rows=196828 loops=1)
         Recheck Cond: (thing = 14)
         ->  Bitmap Index Scan on idx_stuff_thing  (cost=0.00..1957.16
 rows=191000 width=0) (actual time=421.854..421.854 rows=196828
 loops=1)
               Index Cond: (thing = 14)
  Total runtime: 154107.415 ms
 (6 rows)

 test=# set enable_indexonlyscan to false;
 SET
 test=# set enable_indexscan to false;
 SET
 test=# set enable_bitmapscan to false;
 SET
 test=# explain analyse select count(thing) from stuff where thing = 14;
                                                        QUERY PLAN
 --
  Aggregate  (cost=812977.50..812977.51 rows=1 width=4) (actual
 time=121296.897..121296.897 rows=1 loops=1)
   ->  Seq Scan on stuff  (cost=0.00..812500.00 rows=191000 width=4)
 (actual time=67.105..121215.296 rows=196828 loops=1)
         Filter: (thing = 14)
         Rows Removed by Filter: 14803172
  Total runtime: 121296.999 ms
 (5 rows)

 Note: buffer cache cleared between queries.

 I did it. It is strange, so your times are significantly slower than I
 have. Have you enabled asserts?

The table contains 15 million rows with column values randomly
selected from the 1-350 range, with 60% within the 1-50 range, and
asserts are enabled.



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-08 Thread Thom Brown
On 8 October 2011 19:47, Pavel Stehule pavel.steh...@gmail.com wrote:
 I did it. It is strange, so your times are significantly slower than I
 have. Have you enabled asserts?

 The table contains 15 million rows with column values randomly
 selected from the 1-350 range, with 60% within the 1-50 range, and
 asserts are enabled.


 Now I repeated tests on litlle bit wide table with 9 milion rows, but
 without success.

 Try to disable asserts. I am not sure, but maybe there significantlly
 change a speed.

Okay, here you go.  Results with debug_assertions = false:

Index-only scan: 173.389 ms (78.442 ms)
Index scan: 184239.399 ms (previously 164882.666 ms)
Bitmap scan: 159354.261 ms (previously 154107.415 ms)
Sequential scan: 134552.263 ms (previously 121296.999 ms)

So no particularly significant difference, except with the index-only
scan (which I repeated 3 times and it's about the same each time).



Re: [GENERAL] [9.2devel] why it doesn't do index scan only?

2011-10-08 Thread Thom Brown
On 8 October 2011 21:13, Pavel Stehule pavel.steh...@gmail.com wrote:
 2011/10/8 Thom Brown t...@linux.com:
 On 8 October 2011 19:47, Pavel Stehule pavel.steh...@gmail.com wrote:
 I did it. It is strange, so your times are significantly slower than I
 have. Have you enabled asserts?

 The table contains 15 million rows with column values randomly
 selected from the 1-350 range, with 60% within the 1-50 range, and
 asserts are enabled.


 Now I repeated tests on litlle bit wide table with 9 milion rows, but
 without success.

 Try to disable asserts. I am not sure, but maybe there significantlly
 change a speed.

 Okay, here you go.  Results with debug_assertions = false:

 Index-only scan: 173.389 ms (78.442 ms)
 Index scan: 184239.399 ms (previously 164882.666 ms)
 Bitmap scan: 159354.261 ms (previously 154107.415 ms)
 Sequential scan: 134552.263 ms (previously 121296.999 ms)

 So no particularly significant difference, except with the index-only
 scan (which I repeated 3 times and it's about the same each time).

 what is size of table?

4884MB

Here's how I set it up:

create table stuff (id serial, a text, b int, c int, d text, e text, f
int, g int, h text, thing int);

insert into stuff (a,b,c,d,e,f,g,h,thing) select 'nfewiufn weoifn
weiou fnwe fnoi weuoifn woeuifniunfeiwodnjnfdoewnfjienfjioe
wneinrienrf',53424524,26575757,'fhdsouhfuisdhfiudshif sdhiuf iu hfius
dhfiu sdiuf dsihfiusdhiufhdsiufhdsiu fhiuds fhiusd fiuds fidsf iuds
fiu dsiuf 
sdhiu','dfoisdhjfojdsoifjdiosjfoidsjiofjds',52944234,5246463,'fdfidsjoifjdsoifjoidsjfodsjfd',ceil(random()*350)
from generate_series(1,600);

insert into stuff (a,b,c,d,e,f,g,h,thing) select 'nfewiufn weoifn
weiou fnwe fnoi weuoifn woeuifniunfeiwodnjnfdoewnfjienfjioe
wneinrienrf',53424524,26575757,'fhdsouhfuisdhfiudshif sdhiuf iu hfius
dhfiu sdiuf dsihfiusdhiufhdsiufhdsiu fhiuds fhiusd fiuds fidsf iuds
fiu dsiuf 
sdhiu','dfoisdhjfojdsoifjdiosjfoidsjiofjds',52944234,5246463,'fdfidsjoifjdsoifjoidsjfodsjfd',ceil(random()*50)
from generate_series(1,900);

create index idx_stuff_thing on stuff (thing);

vacuum analyse stuff;

Testing without all the extra columns in the table yields a similar
result pattern.



Re: [GENERAL] Log Apply Delay

2011-09-16 Thread Thom Brown
On 16 September 2011 16:41, Ian Harding harding@gmail.com wrote:
 On Fri, Sep 16, 2011 at 8:35 AM, hubert depesz lubaczewski
 dep...@depesz.com wrote:
 On Fri, Sep 16, 2011 at 08:02:31AM -0700, Ian Harding wrote:
 Oracle has a configuration option for its version of hot standby
 (DataGuard) that lets you specify a time based delay in applying logs.
  They get transferred right away, but changes in them are only applied
 as they reach a certain age.  The idea is that if something horrible
 happens on the master, you can keep it from propagating to one or more
 of your standby databases (or keep from having to reinstate one in the
 case of a failover)

 Anyway, Is there any plan to add a knob like that to the streaming
 replication in Postgres?

 In streaming - no. But if you want delay, perhaps normal WAL-files based
 approach would be good enough? OmniPITR, for one, has a option to delay
 applying wal segments.


 The file based approach is pretty close, unless the Bad Thing happens
 right before a file gets transferred.  This is not a super important
 feature to me but It's a nice security blanket and almost takes the
 place of a PITR plan including big file transfers of the data
 directory at regular intervals.

You could always ship the log to a waiting directory on the
destination server, then run a command like this every few mins:

find /holding/dir -maxdepth 1 -mtime +1 -exec mv '{}' /actual/dir/ ';'

That particular command would move all files over a day old to the
directory the standby is looking at.

Or use -mmin +60 instead of -mtime +1 to leave a gap of an hour instead
of a day (find's -mtime only counts in whole days).


Re: [GENERAL] Log duration and statement for slow queries + limiting the number of log files generated

2011-09-16 Thread Thom Brown
On 16 September 2011 18:16, Alec Swan alecs...@gmail.com wrote:
 Hello,

 I am trying to get postgres 8.4.4 to log the duration and statement of
 queries that take longer than 200 ms. I played with the log settings
 in postgresql.conf but I still see logs of durations of very fast
 statements. Here is my current configuration:

 log_min_duration_statement = 200
 log_duration = on
 log_line_prefix = '%t '
 log_statement = 'all'

 With this configuration I am still getting these in the log files:

 2011-09-02 14:47:19 EDT LOG:  duration: 0.017 ms
 2011-09-02 14:47:19 EDT LOG:  execute unnamed: SET SESSION
 CHARACTERISTICS AS TRANSACTION READ WRITE
 2011-09-02 14:47:19 EDT LOG:  duration: 0.004 ms
 2011-09-02 14:47:19 EDT LOG:  duration: 0.014 ms
 2011-09-02 14:47:19 EDT LOG:  duration: 0.017 ms
 2011-09-02 14:47:19 EDT LOG:  execute unnamed: SET SESSION
 CHARACTERISTICS AS TRANSACTION READ ONLY


 How do I filter these out?

You have log_statement set to 'all', and since it's mostly independent
of log_min_duration_statement, it's logging everything anyway.  You
may wish to just set log_statement to 'none'.
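In other words, to log only statements taking 200 ms or longer, a
configuration along these lines should do (a sketch; log_duration is
also best left off, since it emits a bare duration line for every
statement):

```
log_min_duration_statement = 200   # logs statement + duration when >= 200 ms
log_statement = 'none'             # don't log every statement unconditionally
log_duration = off                 # don't log a duration line for every statement
```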

 Also, how do I restrict the maximum number of log files generated to 2
 with the assumption that they will roll over when filled?



Re: [GENERAL] Deleting one of 2 identical records

2011-09-06 Thread Thom Brown
On 6 September 2011 18:39, Gauthier, Dave dave.gauth...@intel.com wrote:

 Hi:

 If I have a table that has 2 records which are identical with regard to all
 their column values, is there a way to delete one of them, leaving one
 remaining?  Is there some unique record_id key of some sort I can use for
 something like this?

 Thanks in Advance!


Yes, identify them by their ctid value.

So get the ctids by running:

SELECT ctid, *
FROM my_table
WHERE <clause to identify duplicate rows>;

You will see entries which look like (7296,11).

You can then delete the row by referencing it in the DELETE statement.  For
example:

DELETE FROM my_table
WHERE ctid = '(7296,11)';

It's a shame we don't have a LIMIT on the DELETE clause (looks at hackers).



Re: [GENERAL] Deleting one of 2 identical records

2011-09-06 Thread Thom Brown
On 6 September 2011 19:00, Gauthier, Dave dave.gauth...@intel.com wrote:

 The identification and deleting of the records using ctids seems to have
 worked fine.
 Thanks !


Alternative you could do something like this:

WITH keep AS (
    SELECT
        my_table AS duplicate_row,
        (array_agg(ctid))[1] AS keep_ctid, -- there's no min() aggregate for tid, so take the first ctid seen
        count(*)
    FROM my_table
    GROUP BY my_table
    HAVING count(*) > 1
)
DELETE FROM my_table
USING keep
WHERE
    my_table = keep.duplicate_row
    AND my_table.ctid <> keep.keep_ctid
RETURNING my_table.ctid, my_table.*;

This would delete all duplicate rows from the table and just keep whichever
row appears first in the table before its duplicates.



Re: [GENERAL] Deleting one of 2 identical records

2011-09-06 Thread Thom Brown
On 7 September 2011 00:55, salah jubeh s_ju...@yahoo.com wrote:


 Hello Thom.

 what is the meaning of

 select table_name from table_name

 Also is this a common behavior of all Databases i.e. oracle , Microsoft
 ,...etc  . i.e is this is the standard behavior

 I think this is a good way to find duplicates in general, I will write a
 routine to compare all the columns by excluding the primary key which is
 serial

 Thanks in advance


Well I just put table_name as a placeholder for the actual name of the table
you wish to remove duplicates from.

No, you can't run this query on other databases, particularly because ctids
are specific to PostgreSQL.  Other databases will use different methods to
this one, if they have any at all.  With Oracle you'd probably use ROWNUM
somehow, and SQL Server will likely use some awful multi-query technique
involving creating temporary tables, copying distinct rows from the
duplicate set to another table, deleting it from the original and copying
back.  Can't say for sure though since I haven't used it in quite a while.



Re: [GENERAL] Why schema of table is removed from explain?

2011-09-01 Thread Thom Brown
On 1 September 2011 19:08, hubert depesz lubaczewski dep...@depesz.comwrote:

 example:
 $ create schema x;
 CREATE SCHEMA

 $ create table x.y as select * from pg_class;
 SELECT 294

 $ explain select * from x.y limit 1;
 QUERY PLAN
 
  Limit  (cost=0.00..0.04 rows=1 width=189)
   ->  Seq Scan on y  (cost=0.00..13.70 rows=370 width=189)
 (2 rows)

 Why it doesn't show Seq Scan on x.y ? it makes certain plans virtually
 useless, when you can't know which schema was used?!


You mean like this?

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.y (id serial, things int);
CREATE TABLE b.y (id serial, things int);
INSERT INTO a.y (things) SELECT x FROM generate_series(1,100,3) z(x);
INSERT INTO b.y (things) SELECT x FROM generate_series(1,100,5) z(x);

EXPLAIN SELECT * FROM a.y INNER JOIN b.y ON a.y.things = b.y.things;

  QUERY PLAN
--
 Hash Join  (cost=1.45..3.12 rows=20 width=16)
   Hash Cond: (a.y.things = b.y.things)
   ->  Seq Scan on y  (cost=0.00..1.34 rows=34 width=8)
   ->  Hash  (cost=1.20..1.20 rows=20 width=8)
         ->  Seq Scan on y  (cost=0.00..1.20 rows=20 width=8)
(5 rows)

I agree, it's not helpful.  But EXPLAIN (VERBOSE) prefixes the schema:

EXPLAIN (VERBOSE) SELECT * FROM a.y INNER JOIN b.y ON a.y.things = b.y.things;

   QUERY PLAN

 Hash Join  (cost=1.45..3.12 rows=20 width=16)
   Output: a.y.id, a.y.things, b.y.id, b.y.things
   Hash Cond: (a.y.things = b.y.things)
   ->  Seq Scan on a.y  (cost=0.00..1.34 rows=34 width=8)
         Output: a.y.id, a.y.things
   ->  Hash  (cost=1.20..1.20 rows=20 width=8)
         Output: b.y.id, b.y.things
         ->  Seq Scan on b.y  (cost=0.00..1.20 rows=20 width=8)
   Output: b.y.id, b.y.things
(9 rows)



Re: [GENERAL] ARRAY_AGG(DISTINCT a ORDER BY b) Error

2011-08-31 Thread Thom Brown
On 31 August 2011 23:54, David Johnston pol...@yahoo.com wrote:
 in an aggregate with DISTINCT, ORDER BY expressions must appear in argument
 list

 Why?

 If I add the fields of the ORDER BY expression to the DISTINCT clause I can
 no longer use DISTINCT since the ORDER BY values are not unique. Nor do I
 want the contents of the final ARRAY to contain the ORDER BY column.

 I presuppose this is a technical limitation since my particular use-case
 (and I've come across a few cases where this would be useful) doesn't seem
 that obscure.

 My specific sample query (use-case) is as follows:

 SELECT control, ARRAY_AGG(DISTINCT accountnumber ORDER BY amount DESC)
 FROM (VALUES ('A', '1000', 100), ('A', '1000', -50), ('A', '2000',200))
 accountdetail (control, accountnumber, amount)
 GROUP BY control

 I want to create an array of the unique account numbers associated with a
 control with the ordering of the array matching the order of the amounts.
 In this case I would want the output to be:

 (A, {'2000','1000'})

I'm not sure that makes sense.  If you're aggregating accountnumber as
an array of distinct values, what do you expect your query to output
if, say you had the following:

accountnumber, amount
1000,100
2000,200
1000,300

You've ordered by amount, but accountnumber has 2 identical values,
where the amount is less than the amount corresponding to
accountnumber 2000 in one instance, but greater in another.  Where
does 1000 appear?  Before or after 2000?
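One way around the restriction, sketched against the sample data above,
is to make the ordering unambiguous first: reduce each (control,
accountnumber) pair to a single amount in a subquery (summing here is an
assumption; max() may suit other cases), then aggregate without
DISTINCT:

```sql
SELECT control,
       array_agg(accountnumber ORDER BY total DESC)
FROM (
    SELECT control, accountnumber, sum(amount) AS total
    FROM (VALUES ('A', '1000', 100),
                 ('A', '1000', -50),
                 ('A', '2000', 200)) AS accountdetail (control, accountnumber, amount)
    GROUP BY control, accountnumber
) AS s
GROUP BY control;
-- with these values: (A, {2000,1000}), since '2000' sums to 200 and '1000' to 50
```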



Re: [GENERAL] Not Seeing Syntax Error

2011-08-17 Thread Thom Brown
On 17 August 2011 16:49, Rich Shepard rshep...@appl-ecosys.com wrote:

  For several INSERT INTO rows I get a syntax error when the quant column is
 NULL for one specific parameter. I don't see my error. Here is an example
 row:

 psql:insert.sql:8: ERROR:  syntax error at or near ","
 LINE 1: ...ALUES ('9609-0759','BC-1.5','1996-09-19','Arsenic',,'mg/L');
                                                               ^
  The source line is:

 INSERT INTO chemistry (lab_nbr, loc_name, sample_date, param, quant,
 units) VALUES ('9609-0759','BC-1.5','1996-09-19','Arsenic',,'mg/L');

 and the quant column is defined as type real. There are numerous other rows
 where quant IS NULL.

  What have I missed?


The error message points to the problem.  No value, not even NULL, has been
specified for 5th column.  Either put DEFAULT or NULL in there.  You can't
put nothing.
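So the failing line from the report would become, for example:

```sql
INSERT INTO chemistry (lab_nbr, loc_name, sample_date, param, quant, units)
VALUES ('9609-0759', 'BC-1.5', '1996-09-19', 'Arsenic', NULL, 'mg/L');
```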



Re: [GENERAL] include directives in postgresql.conf

2011-07-02 Thread Thom Brown
On 2 July 2011 10:42, AI Rumman rumman...@gmail.com wrote:
 Can anyone please tell me that how to use 'include directives' in
 Postgresql.conf?
 http://www.postgresql.org/docs/8.4/interactive/config-setting.html

Well it's quite clear from the documentation.  It's just:

include 'filename'

So if you have a config file in the same directory as postgresql.conf,
and you specify that it be included, it will be.  If it's not in the
same directory, give an absolute path.  Note that if a setting in the
include file is later specified in postgresql.conf after the include
directive, it will override the setting in the include.

But one example scenario you might wish to use is to leave
postgresql.conf as it is, put an include directive on the bottom line
to include another config file, and just use that to override settings
in postgresql.conf.  Another is that you may wish to keep contrib
module configuration in their own files, so you can add include
directives for each of those.
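As a sketch of that last scenario (the file names here are hypothetical),
the bottom of postgresql.conf might read:

```
# at the very end of postgresql.conf
include 'overrides.conf'        # local overrides win over anything above this line
include 'contrib_modules.conf'  # e.g. settings for contrib modules
```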



Re: [GENERAL] Adding Foreign Key Constraint To Existing Table

2011-07-01 Thread Thom Brown
On 1 July 2011 22:00, Rich Shepard rshep...@appl-ecosys.com wrote:
  I've examined the 9.0 manual page on alter table without seeing how to add
 a foreign key constraint to a column.

  I needed to make changes on a couple of existing tables which could be
 accomplished only by dropping the foreign key constraint. That, and changing
 the table structure, column names, and column types were successful. But, I
 am not seeing the proper syntax to add a foreign key constraint back to the
 two affected tables.

ALTER TABLE table_a ADD CONSTRAINT fk_name
FOREIGN KEY (column_of_table_a) REFERENCES table_b (column_of_table_b);

If in future you want foreign key checks to be deferred until the
transaction ends, you can add DEFERRABLE INITIALLY DEFERRED to the end.
This will allow you to violate the foreign key temporarily, as long as
you resolve it before the end of the transaction.
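A sketch of the deferred variant, reusing the placeholder names above:

```sql
ALTER TABLE table_a ADD CONSTRAINT fk_name
    FOREIGN KEY (column_of_table_a) REFERENCES table_b (column_of_table_b)
    DEFERRABLE INITIALLY DEFERRED;

BEGIN;
-- this may violate the foreign key for now...
UPDATE table_a SET column_of_table_a = 999;
-- ...as long as the referenced row exists by commit time
INSERT INTO table_b (column_of_table_b) VALUES (999);
COMMIT;  -- the constraint is checked here
```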



Re: [GENERAL] Adding Foreign Key Constraint To Existing Table

2011-07-01 Thread Thom Brown
On 1 July 2011 22:00, Rich Shepard rshep...@appl-ecosys.com wrote:
  I've examined the 9.0 manual page on alter table without seeing how to add
 a foreign key constraint to a column.

  I needed to make changes on a couple of existing tables which could be
 accomplished only by dropping the foreign key constraint. That, and changing
 the table structure, column names, and column types were successful. But, I
 am not seeing the proper syntax to add a foreign key constraint back to the
 two affected tables.

By the way, rather than dropping the foreign key then recreating it,
you could always do this:

ALTER TABLE tablename DISABLE TRIGGER ALL;

Then it would ignore the foreign key trigger and you could put in
mischievous values... but remember to enable it again (replace DISABLE
with ENABLE).  You'll have to be a superuser to do it though.
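The full round trip, as a sketch (the column and value are hypothetical):

```sql
ALTER TABLE tablename DISABLE TRIGGER ALL;  -- requires superuser
-- make the changes that would otherwise trip the foreign key
UPDATE tablename SET some_column = 'mischievous value';
ALTER TABLE tablename ENABLE TRIGGER ALL;   -- don't forget this part
```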



Re: [GENERAL] In a view, how do I cause fields with possible NULLs to be treated as a blank string in a replace operation?

2011-06-27 Thread Thom Brown
On 27 June 2011 17:53, Asfand Qazi (Sanger Institute)
aq2.san...@gmail.com wrote:
 Hello,

 So I am playing with a view to test the feasibility of a
 technique for storing some data.

 It basically goes something like this:

 CREATE VIEW formatted_table AS
       SELECT name,
              replace(some_template, '@', some_type) AS some_field
         FROM some_table;

 some_template is something like 'foo@bar' or 'foobar' (note the
 missing template character).

 some_type is a single letter like 'a' or 'b', or it can be NULL.

 The above view works fine for rows where some_type is a letter, and
 some_field ends up as 'fooabar' or whatever.

 However, when some_type is NULL, some_field ends up as NULL as well.
 I understand that this is expected behaviour, but how do I cause the
 view to treat a some_type of NULL as an empty string, so that
 some_field simply ends up as 'foobar'?

 Hope that was clear.

Try coalesce: 
http://www.postgresql.org/docs/9.0/static/functions-conditional.html#AEN15541

So if foo is a null value, and you used COALESCE(foo, 'bar'), the
output would be 'bar', otherwise it would be whatever the value of foo
is.
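Applied to the view above, that would be something like:

```sql
CREATE VIEW formatted_table AS
    SELECT name,
           replace(some_template, '@', coalesce(some_type, '')) AS some_field
      FROM some_table;
```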


