[GENERAL] pg_dump throws too many command-line arguments in Postgres 10

2017-10-18 Thread Andrus
In Postgres 10 on Windows,

invoking pg_dump.exe with

pg_dump.exe -b -f b.backup -Fc -h  -U admin -p 5432 mydb

causes error

pg_dump: too many command-line arguments (first is "-p")
Try "pg_dump --help" for more information.

How to fix this ?
In earlier versions it worked.

Andrus




[GENERAL] How to get correct local time

2017-03-29 Thread Andrus


   select current_time at time zone 'GMT-2'

returns

"11:54:40.22045+02"

but correct local time in Windows is one hour different:

12:54

How to get correct local time ?


Using

"PostgreSQL 9.6.0, compiled by Visual C++ build 1800, 32-bit"

with a standard postgresql.conf file in Windows 10.
The same issue also occurs in earlier Postgres versions and in earlier Windows versions.

Server time in Windows is correct.

Daylight saving time changed by one hour a week ago.
Maybe Postgres didn't recognize it.
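
For comparison, a named time zone tracks daylight saving time automatically, while a fixed offset such as 'GMT-2' never shifts. A minimal sketch (the zone name 'Europe/Tallinn' is only an assumption about the local zone, not taken from this report):

   select current_timestamp at time zone 'Europe/Tallinn';  -- named IANA zone, DST-aware
   select current_timestamp at time zone 'GMT-2';            -- fixed offset, ignores DST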

Posted also in

http://stackoverflow.com/questions/43090328/how-to-return-correct-local-time-in-postgres

Andrus. 






Re: [GENERAL] How to parse xml containing optional elements

2016-08-12 Thread Andrus

Hi!

Thank you.


How to upgrade in Debian Squeeze ?
A plain "apt-get upgrade postgresql-9.1" does not work?
It might help to enable the postgresql.org APT repository. For 
instructions, see here:

https://www.postgresql.org/download/linux/debian/


apt-get upgrade postgresql-9.1 returns

Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
openssl : Depends: libssl1.0.0 (>= 1.0.1e-2+deb7u5) but it is not 
installable

wkhtmltox : Depends: libssl1.0.0 but it is not installable
E: Unmet dependencies. Try using -f.

so it looks like the repository is found but not usable?



How to add IBAN column to result table? This column has same value for
all rows.



SELECT

endaaa,
(xpath('ns:Amt/text()', x,nsa))[1]::text::numeric AS tasusumma,
(xpath('ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()',
x,nsa))[1] AS orderinr

FROM (
SELECT
(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Acct/ns:Id/ns:IBAN/text()',
x,nsa))[1] as endaaa,
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry',
x,nsa)) as x, nsa
FROM t
) Ntry

Be careful, this will only work when there is exactly one ns:Stmt element 
in the document.
Else you will have to build a third query level, first selecting the 
ns:Stmt entries, second the IBAN and Ntry from them and third amount and 
EndToEndId.
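
For reference, a three-level version might look roughly like this (an untested sketch built from the xpaths above; the subquery aliases are illustrative only):

SELECT
    (xpath('ns:Acct/ns:Id/ns:IBAN/text()', stmt, nsa))[1]::text AS endaaa,
    (xpath('ns:Amt/text()', x, nsa))[1]::text::numeric AS tasusumma,
    (xpath('ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()', x, nsa))[1] AS orderinr
FROM (
    SELECT stmt, unnest(xpath('ns:Ntry', stmt, nsa)) AS x, nsa
    FROM (
        SELECT unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt', x, nsa)) AS stmt, nsa
        FROM t
    ) Stmt
) Ntry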


Hopefully there is only one Stmt element in a single file.

I solved it by moving the xpath that selects the IBAN into the main select:

SELECT
(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Acct/ns:Id/ns:IBAN/text()', 
xo,nsa))[1]::text AS endaaa,

(xpath('ns:Amt/text()', x,nsa))[1]::text::numeric AS tasusumma,
(xpath('ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()',
x,nsa))[1] AS orderinr

FROM (
SELECT
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry',
x,nsa)) as x, nsa, x as xo
FROM t
) Ntry

This references endaaa from a single select only, so changing the code requires
changing only one line.


Is this OK ?

Andrus. 






Re: [GENERAL] How to parse xml containing optional elements

2016-08-12 Thread Andrus

Hi!


I couldn't really believe this so I just installed a VM and a 9.1

postgresql just to test this for you.

It seems you hit a bug in PostgreSQL prior to 9.1.15:
https://www.postgresql.org/docs/9.1/static/release-9-1-15.html
"Fix namespace handling in xpath() (Ali Akbar)
Previously, the xml value resulting from an xpath() call would not have 
namespace declarations if the namespace declarations were attached to an 
ancestor element in the input xml value, rather than to the specific 
element being returned. Propagate the ancestral declaration so that the 
result is correct when considered in isolation.

"
Given your current PG version, the queries will probably work if you remove 
the "ns:" parts of the first two xpaths like this:



SELECT
 (xpath('Amt/text()', x,nsa))[1]::text::numeric AS tasusumma,
 (xpath('NtryDtls/TxDtls/Refs/EndToEndId/text()', x,nsa))[1] AS orderinr
FROM (
 SELECT unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry',
 x,nsa)) as x, nsa
 FROM t
) Ntry
But that is not a good solution:
- when you eventually do upgrade, the query *will* break
- it depends on the exact location of the namespace declaration in the 
source document. Your bank might change that in a way that will be still 
perfectly valid, but break the assumptions made in that workaround.
So I suggest to upgrade to a supported version of the 9.1 branch from your 
more than 5 years old build (which should be easy to do).


Postgres 9.1 runs in Debian Squeeze, which is unsupported.
How to upgrade in Debian Squeeze?

How to add an IBAN column to the result table? This column has the same value for
all rows.


create temp table t(x xml, nsa text[][]) on commit drop;
insert into t values(
'

 
   
 
   
 XX00221059842412
   
 
 
   150.00
   
 
   
 PV04131
   
 
   
 
 
   0.38
   
 
   
 2016080100178214-2
   
 
   
 
   
 

', ARRAY[ARRAY['ns','urn:iso:std:iso:20022:tech:xsd:camt.053.001.02']]);

SELECT
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Acct/ns:Id/ns:IBAN/text()', 
x,nsa))::text AS endaaa,
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:Amt/text()', 
x,nsa))::text::numeric AS tasusumma
, 
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()', 
x,nsa))::text AS orderinr

FROM t;

should produce

endaaa            tasusumma  orderinr
XX00221059842412     150.00  PV04131
XX00221059842412       0.38  null

Andrus.





Re: [GENERAL] How to parse xml containing optional elements

2016-08-11 Thread Andrus

Hi!

Thank you.
In "PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real 
(Debian 4.4.5-8) 4.4.5, 64-bit"


it returns two empty rows. How to make it work in this version ?

In "PostgreSQL 9.5.2, compiled by Visual C++ build 1800, 32-bit" it works.

Andrus.


-----Original Message-----
From: Hannes Erven

Sent: Thursday, August 11, 2016 11:51 AM
To: Andrus
Cc: pgsql-general
Subject: Re: [GENERAL] How to parse xml containing optional elements

Hi Andrus,



SELECT
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:Amt/text()',
x,nsa))::text::numeric AS tasusumma
,
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()',
x,nsa))::text AS orderinr
   FROM t;



You need to extract all ns:Ntry elements first, and then get the amount
and EndToEndId for each of them:

SELECT
(xpath('ns:Amt/text()', x,nsa))[1]::text::numeric AS tasusumma,
(xpath('ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()', x,nsa))[1]
AS orderinr

FROM (
SELECT unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry',
x,nsa)) as x, nsa
FROM t
) Ntry






[GENERAL] How to parse xml containing optional elements

2016-08-11 Thread Andrus
A SEPA ISO XML transactions file needs to be parsed into a flat table in
Postgres 9.1+ in an ASP.NET 4.6 MVC controller.


I tried the code below, but it produces a wrong result:

tasusumma  orderinr
   150.00  PV04131
 0.38  PV04131

Since there is no EndToEndId in the second entry, the second row's orderinr
column should be null. The correct result is:


tasusumma  orderinr
   150.00  PV04131
 0.38  null


How to fix this ?

   create temp table t(x xml, nsa text[][]) on commit drop;
   insert into t values(
   '
   
 
   
 
   150.00
   
 
   
 PV04131
   
 
   
 
 
   0.38
   
 
   
 2016080100178214-2
   
 
   
 
   
 
   
   ', ARRAY[ARRAY['ns','urn:iso:std:iso:20022:tech:xsd:camt.053.001.02']]);

   SELECT
   unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:Amt/text()', 
x,nsa))::text::numeric AS tasusumma
, 
unnest(xpath('/ns:Document/ns:BkToCstmrStmt/ns:Stmt/ns:Ntry/ns:NtryDtls/ns:TxDtls/ns:Refs/ns:EndToEndId/text()', 
x,nsa))::text AS orderinr

   FROM t;

Parsing can be done in other ways, e.g. using an XSLT stylesheet for transformation,
or on the client side in ASP.NET 4.6 MVC if this is more reasonable.


Posted also in 
http://stackoverflow.com/questions/3739/how-to-parse-xml-with-optional-elements


Andrus. 






Re: [GENERAL] How to use row values as function parameters

2016-05-14 Thread Andrus
Hi!
Thank you.
>Use a CTE and move the function call to the select list - then explode the 
>result in the main query.
>Basically:
>WITH func_cte AS (
>SELECT func_call(tbl)
FROM tbl
)​ 
​>SELECT (func_call).*
>FROM func_cte;
​>​The parens are required to make the parser see func_call as a column name 
instead of a table name.​

I tried in 9.5

CREATE or replace FUNCTION crtKAIVE(
_doktyybid text default 'GVY'
)
RETURNS TABLE (
id integer
)
AS $f_crkaive$
select 1
$f_crkaive$ LANGUAGE sql STABLE;

create temp  table ko ( doktyyp text ) on commit drop;
insert into ko values ('G');


WITH func_cte AS (
SELECT crtKAIVE(ko.doktyyp)
FROM ko
)​ 
​SELECT (crtKAIVE).*
FROM func_cte;

but got strange error


ERROR:  syntax error at or near "​"
LINE 18: )​ 

How to fix ?
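
(Possibly relevant: the character quoted in the error message is invisible, so the pasted query may contain zero-width spaces copied from the web page. Retyped by hand, the same statement would be, as a sketch:)

WITH func_cte AS (
    SELECT crtKAIVE(ko.doktyyp)
    FROM ko
)
SELECT (crtKAIVE).*
FROM func_cte;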

Andrus.


[GENERAL] How to use row values as function parameters

2016-05-14 Thread Andrus

Table ko should be used to pass parameters to the crtKAIVE() function.
ko always has a single row.

I tried

CREATE or replace FUNCTION public.crtKAIVE(
_doktyybid text default 'GVY'
)
RETURNS TABLE (
id integer
)
AS $f_crkaive$
select 1
$f_crkaive$ LANGUAGE sql STABLE;

create temp  table ko ( doktyyp text ) on commit drop;
insert into ko values ('G');
select * from ko, crtkaive(ko.doktyyp)

but got error

   ERROR:  function expression in FROM cannot refer to other relations of 
same query level


How to fix this so that ko can be used to pass parameters to crtkaive ?

Posted also in

http://stackoverflow.com/questions/37231624/how-to-use-table-row-values-as-function-parameters

Andrus. 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

The example script works for me.  What PG version are you running?  I have
a vague recollection that we've fixed bugs-of-omission in DROP OWNED in
the past.


I'm using

"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real
(Debian 4.4.5-8) 4.4.5, 64-bit"

In   "PostgreSQL 9.4.4, compiled by Visual C++ build 1800, 32-bit" it works.

It looks like in 9.1 REASSIGN OWNED should be replaced with REVOKE commands.

Andrus. 






[GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus
Hi!

Database idd's owner is the role idd_owner.
The database has 2 data schemas: public and firma1.
A user may have directly or indirectly assigned rights in this database and its
objects.
The user is not the owner of any object. It only has rights granted on objects.

How to drop such  user ?

I tried

revoke all on all tables in schema public,firma1 from "vantaa" cascade;
revoke all on all sequences in schema public,firma1 from "vantaa" cascade;
revoke all on database idd from "vantaa" cascade;
revoke all on all functions in schema public,firma1 from "vantaa" cascade;
revoke all on schema public,firma1 from "vantaa" cascade;
revoke idd_owner from "vantaa" cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES from 
"vantaa";
DROP ROLE if exists "vantaa"

but got error

role "vantaa" cannot be dropped because some objects depend on it
DETAIL:  privileges for schema public

in statement 

DROP ROLE if exists "vantaa"

How to fix this so that the user can be dropped ?

How to create an SQL or plpgsql function which takes a user name as a parameter and
drops this user in all cases without dropping data ?
Or maybe there is some simpler command in Postgres ?


Using Postgres 9.1+
Posted also in 

http://stackoverflow.com/questions/32988702/how-to-drop-user-in-all-cases-in-postgres


Andrus.

Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

Hi!

The objects can't be owned by nothing, so you will need to reassign 
ownership:

REASSIGN OWNED BY old_role TO new_role;
e.g.
REASSIGN OWNED BY vantaa TO postgres;
Then you can drop the role.


The user who deletes other users is not a superuser. It is created using

CREATE ROLE admin LOGIN
 NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION CONNECTION LIMIT 
100;

GRANT idd_owner TO admin;


I tried

REASSIGN OWNED BY vantaa TO postgres;

and

REASSIGN OWNED BY vantaa TO idd_owner;

but got  error

permission denied to reassign objects .

How to fix ?

I can add some rights to user who invokes this command if this helps.

Andrus. 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

Hi!

No. You need to be a superuser to reassign objects unless you own the 
object.

1. first connect as user postgres
2. REASSIGN all the tables owned by the missing user first.
3. Then you can drop the missing user AFTER you have reassigned all the 
objects they own.


Script

reassign owned by farukkugay to postgres;
drop user farukkugay ;

is run by a superuser but it still causes the error.



You must also be a superuser to drop roles.


A non-superuser creates roles, assigns rights and drops users using the scripts
which I provided.

Those scripts work OK in most cases.

For some users, vantaa and farukkugan, the delete script causes the error which I
described. For farukkugan it also occurs when running as a superuser.


So it looks like it should be possible for non-superusers also.

Andrus. 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

Hi!


I tried
So to be clear admin is doing the below, correct?


Yes. I copied the definition of the user which invokes the delete command from the
pgAdmin code window for this user.


permission denied to reassign objects .
Is the above a blanket error or does it mention specific objects?


postgres log file contains two lines:

ERROR:  permission denied to reassign objects
STATEMENT:  reassign owned by vantaa to postgres

No objects are mentioned.
I can change log level if this helps.

What you are trying to do is reverse what you did to get the present setup. 
Do you have a record/script that shows what you did to create the role and 
assign it to the objects?


It was something like:

create script:

CREATE ROLE vantaa;
grant idd_owner to vantaa;

reset roles script:

revoke all on all tables in schema public,firma1 from vantaa cascade;
revoke all on all sequences in schema public,firma1 from vantaa cascade;
revoke all on database idd  from public,firma1 cascade;
revoke all on all functions in schema public,firma1 from vantaa cascade;
revoke all on schema public,firma1 from vantaa cascade;
revoke idd_owner  from vantaa cascade;
ALTER ROLE vantaa inherit NOCREATEROLE NOCREATEDB NOLOGIN;

grant all on all tables in schema public,firma1 to vantaa;
grant all on all sequences in schema public,firma1 to vantaa;
grant all on database idd  to vantaa;
grant all on schema public,firma1 to vantaa;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  GRANT all ON TABLES TO 
vantaa;

-- Restrict some tables:
revoke all on  kasutaja,kaspriv,logifail from vantaa cascade;
grant select on kaspriv,kasutaja to vantaa;
grant update (eesnimi, nimi,email,amet,islocked,telefon,language,vabakuup) 
on kasutaja to vantaa;

grant insert on logifail to vantaa;

Andrus. 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

Hi!


Can you connect as user postgres? IE: psql -U postgres -d 


The application has admin users which should be able to delete other users.
Those users don't have superuser rights.

I can connect as user postgres for testing only.
I'm looking for a way to delete users without superuser rights.

If so, then you should have the ability to execute the commands without any 
problem.


I tried in database ktp :

reassign owned by farukkugay to postgres;
drop user farukkugay ;

This causes error

ERROR:  role "farukkugay" cannot be dropped because some objects depend on 
it

DETAIL:  privileges for schema public

So even a superuser cannot delete it.

Andrus. 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

A little further review shows that DROP OWNED is the way to get rid of
leftover privileges.  So in general you need to do REASSIGN OWNED to move
the ownership of objects, then DROP OWNED to get rid of privileges granted
on non-owned objects, before you can drop a role.
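
In SQL, that sequence would be roughly (a sketch using the role names from this thread; it has to be run in every database where the role owns objects or holds privileges):

   REASSIGN OWNED BY vantaa TO mydb_owner;
   DROP OWNED BY vantaa;   -- also revokes privileges granted to the role
   DROP ROLE vantaa;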


I tried this in database mydb using script below but still got error

ERROR:  role "vantaa" cannot be dropped because some objects depend on it
DETAIL:  privileges for database mydb

How to drop role?

Andrus.

set local role admin; -- admin is not superuser but is member of 
mydb_owner

CREATE ROLE vantaa;
grant mydb_owner to vantaa;

revoke all on all tables in schema public,firma1 from vantaa cascade;
revoke all on all sequences in schema public,firma1 from vantaa cascade;
revoke all on database mydb  from vantaa cascade;
revoke all on all functions in schema public,firma1 from vantaa cascade;
revoke all on schema public,firma1 from vantaa cascade;
revoke mydb_owner  from vantaa cascade;
ALTER ROLE vantaa inherit NOCREATEROLE NOCREATEDB NOLOGIN;

grant all on all tables in schema public,firma1 to vantaa;
grant all on all sequences in schema public,firma1 to vantaa;
grant all on database mydb  to vantaa;
grant all on schema public,firma1 to vantaa;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  GRANT all ON TABLES TO
vantaa;
revoke all on  kasutaja,kaspriv,logifail from vantaa cascade;
grant select on kaspriv,kasutaja to vantaa;
grant update (eesnimi, nimi,email,amet,islocked,telefon,language,vabakuup)
on kasutaja to vantaa;
grant insert on logifail to vantaa;

GRANT vantaa TO admin;
reassign owned by vantaa to mydb_owner;
drop owned by vantaa;
drop user vantaa; 






Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Andrus

ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  GRANT all ON TABLES TO
vantaa;

I am not sure that REASSIGN OWNED will get rid of default-privilege
specifiers --- you might have to reverse this step separately.
In general, REASSIGN OWNED has to be done by a role that has privileges
of (is a member of) both the source and target roles.  Superusers are
considered members of all roles, so that's how come it works for them.


I tried as superuser:

reassign owned by farukkugay to postgres;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma2 revoke all ON TABLES from 
farukkugay;

drop user farukkugay ;

but got error

ERROR: role "farukkugay" cannot be dropped because some objects depend on it
SQL state: 2BP01
Detail: privileges for schema public

How to delete the user ?

Andrus. 






[GENERAL] How to speed up delete where not in

2015-09-26 Thread Andrus

Hi!


I'm looking for a way to delete records which do not have child rows in big
tables where a lot of rows need to be deleted. Both tables have a lot of other
foreign key references.



Document headers are in omdok table:

   create table omdok ( dokumnr serial primary key, ... );

Document rows are in omrid table

   CREATE TABLE omrid
   (
 id serial NOT NULL,
 reanr serial NOT NULL,
 dokumnr integer NOT NULL,
 CONSTRAINT omrid_pkey PRIMARY KEY (id),
 CONSTRAINT omrid_dokumnr_fkey FOREIGN KEY (dokumnr)
 REFERENCES omdok (dokumnr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY 
IMMEDIATE,

 
   );


I tried

   delete from omdok where dokumnr not in  (select dokumnr from omrid)

The query has currently been running for 15 hours and is still running.
postgres.exe is using 50% CPU all the time (this is a 2-core CPU).

   explain delete from omdok where dokumnr not in  (select dokumnr from 
omrid)


returns:

   "Delete  (cost=0.00..21971079433.34 rows=220815 width=6)"
   "  ->  Seq Scan on omdok  (cost=0.00..21971079433.34 rows=220815 
width=6)"

   "Filter: (NOT (SubPlan 1))"
   "SubPlan 1"
   "  ->  Materialize  (cost=0.00..94756.92 rows=1897261 width=4)"
   "->  Seq Scan on omrid  (cost=0.00..77858.61 
rows=1897261 width=4)"


- How to delete parents without child rows fast?
- Will this command finish or is Postgres hanging?
- It has currently been running for 15 hours. How many hours will it take to finish?
- How to speed up this query?
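
Would an anti-join form of the delete behave better? A sketch (note that NOT EXISTS and NOT IN only give the same result when omrid.dokumnr can never be NULL):

   delete from omdok
   where not exists (select 1 from omrid where omrid.dokumnr = omdok.dokumnr);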


Using

   PostgreSQL 9.0.1, compiled by Visual C++ build 1500, 64-bit

   Windows 2003 x64 server with 4 GB RAM.


Posted also in

http://stackoverflow.com/questions/32794828/how-to-speed-up-deleting-documents-without-rows

Andrus. 






Re: [GENERAL] How to use record variable with non-null domain in plpgsql

2015-08-22 Thread Andrus

Hi!


TBH, the problem here is with the not-null constraint on the domain.
Get rid of that and you'll be much happier.  


Is the only reasonable way to change the domain using

ALTER DOMAIN tebool DROP NOT NULL   ?

Columns of type bool are never used in the database. The tebool type is always used
instead.

There are a large number of tebool columns in the database. In different
installations there may be additional tebool columns not known at design time.

Will everything work after such a change ?

How to disallow null values in tebool columns then ?
Should a script be created which loops over all tebool columns in all tables and
adds a NOT NULL constraint to them ?

Or is there some better solution ?
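
If the loop approach is used, a generator query could produce the commands, in the same spirit as the char-to-varchar scripts elsewhere in this digest (an untested sketch, assuming the domain is named tebool everywhere):

   SELECT 'ALTER TABLE ' || quote_ident(table_schema) || '.' || quote_ident(table_name)
          || ' ALTER COLUMN ' || quote_ident(column_name) || ' SET NOT NULL;'
     FROM information_schema.columns
    WHERE domain_name = 'tebool';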


Data types that try to
insist on not being NULL are fundamentally incompatible with SQL
semantics --- to take one example, what do you think will happen
to a column of such a type when it's on the outside of a LEFT JOIN?


I tried:

create temp table test  ( test tebool, test2 numeric ) on commit drop ;
create temp table test1  ( test tebool, test2 numeric ) on commit drop ;
insert into test values (false,1);

select test1.test  from test left join test1 on false;

query returns null value. 


I also tried to cast result to tebool

select test1.test::tebool  from test left join test1 on false;

and 


select null::tebool

This also returns null. So Postgres allows null values in instances of this type.
There are no problems.


Andrus.





[GENERAL] How to use record variable with non-null domain in plpgsql

2015-08-22 Thread Andrus

I'm looking for a way to use records containing tebool columns in plpgsql functions,
starting from Postgres 9.1.
I tried the code below but got the error

domain tebool does not allow null values
The tebool domain's default value is false, so plpgsql should assign false to it and
should not throw an error.

How to fix this so that such a record variable can be created ?

Andrus.

CREATE DOMAIN tebool AS bool DEFAULT false NOT NULL;
create temp table test  ( test tebool ) on commit drop ;
   
CREATE OR REPLACE FUNCTION test()

  RETURNS numeric AS $$
   
   DECLARE

   r_test test;
   begin
   
   return 0;

   end;  $$ language plpgsql;
   
   select test();


Posted also in 


http://stackoverflow.com/questions/32157166/how-to-fix-domain-tebool-does-not-allow-null-values-in-plpgsql-function




Re: [GENERAL] Converting xml to table with optional elements

2014-11-28 Thread Andrus

Hi!

You have to process this in two passes. First pass you create a table of 
documents by unnesting the non-optional Document elements. Second pass you 
explode each individual row/document on that table into its components.


Thank you. I tried the code below. John Smith appears in the result as {John
Smith}.

How to force it to appear as John Smith ?

Can this code be improved, for example by merging the create temp table ... select
and the update into a single statement ?


Andrus.

   create temp table t(x xml) on commit drop;
   insert into t values('<?xml version="1.0" encoding="UTF-8"?>
   <E-Document>
     <Document>
       <DocumentParties>
         <BuyerParty context="partner">
           <ContactData>
             <ActualAddress>
               <PostalCode>99</PostalCode>
             </ActualAddress>
             <ContactFirstName>John Smith</ContactFirstName>
           </ContactData>
         </BuyerParty>
       </DocumentParties>
       <DocumentInfo>
         <DocumentNum>123</DocumentNum>
       </DocumentInfo>
       <DocumentItem>
         <ItemEntry>
           <SellerItemCode>999</SellerItemCode>
           <ItemReserve>
             <LotNum>(1)</LotNum>
             <ItemReserveUnit>
               <AmountActual>3.00</AmountActual>
             </ItemReserveUnit>
           </ItemReserve>
         </ItemEntry>
         <ItemEntry>
           <SellerItemCode>888</SellerItemCode>
           <ItemReserve>
             <LotNum>(2)</LotNum>
             <ItemReserveUnit>
               <AmountActual>3.00</AmountActual>
             </ItemReserveUnit>
           </ItemReserve>
         </ItemEntry>
       </DocumentItem>
     </Document>
   </E-Document>
   '::xml);


create temp table temprid on commit drop as
   SELECT
   unnest(xpath('/E-Document/Document/DocumentInfo/DocumentNum/text()', 
x))::text AS docnumber,

   null::text as ContactFirstName,
   unnest(xpath('/E-Document/Document/DocumentItem/ItemEntry/SellerItemCode/text()', 
x))::text AS itemcode

   FROM t;

update temprid set ContactFirstName =xpath(

'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x)::text
from t ;

select * from temprid
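
(The curly braces presumably come from casting the whole xml[] array returned by xpath() to text; taking the first element instead, e.g. (xpath('/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()', x))[1]::text, should give the bare value - an assumption, not verified against this data.)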





Re: [GENERAL] Converting xml to table with optional elements

2014-11-28 Thread Andrus
Hi!

Thank you.
Subquery the xpath expression to unnest it and apply a LIMIT 1
 UPDATE tbl SET ... = (SELECT xpath( tbl.???[...] ) LIMIT 1)
I used unnest() :

update temprid set 
  ContactFirstName =unnest(xpath(
 
'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))::text

Is this OK ?


Note that I do not believe your example code is going to work.  As I mentioned 
you really want to create a table of documents and NOT try to pair up 
multiple unnested columns.
How to create a table of documents ?
The xml contains multiple products and document info.
The code creates a table containing one row for every product and adds the same
header fields to all rows.
Why will this not work ?
Andrus.

Re: [GENERAL] Converting xml to table with optional elements

2014-11-28 Thread Andrus

Hi!

Thank you.

Instead of defining an xpath for fields define one that captures the xml
pertaining to the data that would belong to a single record.

How to create a single xpath or xsl which assigns values to all columns in a
Postgres table ?
I haven't found such a sample. The samples which I have found create every column
separately using a separate xpath.


Andrus 






[GENERAL] Converting xml to table with optional elements

2014-11-27 Thread Andrus

How to convert xml to a table if some elements are optional in the xml ?

In XML

/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName
element is optional.

If it is present, the code below works OK.
If ContactFirstName is not present, an empty table is returned.

How to extract the product code rows if the ContactFirstName element is missing ?
In the result, the ContactFirstName column should have null or another value.

Using Postgres 9.1

Andrus.

Testcase :

   create temp table t(x xml) on commit drop;
   insert into t values('<?xml version="1.0" encoding="UTF-8"?>
   <E-Document>
     <Document>
       <DocumentParties>
         <BuyerParty>
           <PartyCode>TEST</PartyCode>
           <Name>TEST</Name>
         </BuyerParty>
       </DocumentParties>
       <DocumentInfo>
         <DocumentNum>123</DocumentNum>
       </DocumentInfo>
       <DocumentItem>
         <ItemEntry>
           <SellerItemCode>999</SellerItemCode>
           <ItemReserve>
             <LotNum>(1)</LotNum>
             <ItemReserveUnit>
               <AmountActual>3.00</AmountActual>
             </ItemReserveUnit>
           </ItemReserve>
         </ItemEntry>
         <ItemEntry>
           <SellerItemCode>888</SellerItemCode>
           <ItemReserve>
             <LotNum>(2)</LotNum>
             <ItemReserveUnit>
               <AmountActual>3.00</AmountActual>
             </ItemReserveUnit>
           </ItemReserve>
         </ItemEntry>
       </DocumentItem>
     </Document>
   </E-Document>
   '::xml);

   SELECT
   unnest(xpath('/E-Document/Document/DocumentInfo/DocumentNum/text()',
x))::text AS docnumber,
   unnest( xpath(

'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',
x))::text AS ContactFirstName,
   
unnest(xpath('/E-Document/Document/DocumentItem/ItemEntry/SellerItemCode/text()',
x))::text AS itemcode
   FROM t


Posted it also in

http://stackoverflow.com/questions/27171210/how-to-convert-xml-to-table-if-node-does-not-exist-in-postgres 






Re: [GENERAL] Converting char to varchar automatically

2014-11-01 Thread Andrus

Hi!


That looks sane, though you didn't need the WITH.


I changed the select to the code below.
If the same table name appears in multiple schemas, it generates duplicate alter
column clauses which cause an error.

How to fix it to generate proper SQL ?

I added n.nspname='myschema' as shown in the code below but the problem persists.

Andrus.

SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.'
   || quote_ident(c.relname) || ' ' ||
 string_agg(
 ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE varchar(' || 
i.character_maximum_length ||')',

 ',' ) || ';' as statement
 FROM pg_class c
   JOIN pg_namespace n ON n.oid = c.relnamespace
   JOIN pg_attribute a ON a.attrelid = c.oid
   JOIN pg_type t ON t.oid = a.atttypid
   JOIN information_schema.columns i ON (i.table_name = c.relname AND 
i.column_name = a.attname)

WHERE t.typname = 'bpchar'
  AND c.relkind = 'r'
  AND n.nspname <> 'pg_catalog' and not attisdropped
and n.nspname='myschema'
group by n.nspname, c.relname 
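
One likely cause (an assumption, not verified against this database): the join to information_schema.columns matches on table_name only, so a table present in several schemas picks up columns from all of them. Adding the schema to the join condition should restrict it, e.g.:

   JOIN information_schema.columns i ON (i.table_schema = n.nspname AND
        i.table_name = c.relname AND i.column_name = a.attname)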






Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-29 Thread Andrus

Hi!


A further tweak; add the following to the WHERE clause:
AND EXTRACT(DOW FROM times.period) != 0


I changed it to isodow to work in any locale.
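
(i.e. the condition becomes something like AND EXTRACT(ISODOW FROM times.period) <> 7, since isodow numbers Sunday as 7 - a sketch of the change, not the exact line used.)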

Your solution is more readable but Erwin's answer on SO looks more optimized.
I have 10 workers and 1 month of reservations with 15-minute offsets from 8:00 to
20:00, so performance is hopefully not an issue. Which should I use ?


Using current_date in a check constraint causes a database restore failure.
NOT VALID should be added or this check should be moved to the holidays check
trigger.


Andrus. 






[GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Andrus

Hi!

I'm looking for a way to find the earliest possible start times from a reservations
table.


People work from 10:00 to 21:00 on every week day except Sundays and public
holidays.


Jobs for them are reserved at 15-minute intervals and a whole job must fit into a
single day.

Job duration is from 15 minutes to 4 hours.

The reservat table contains reservations, the yksus2 table contains workers and the
pyha table contains public holidays. Table structures are below. The reservat
structure can be changed if this helps.


How to find the first 30 earliest possible start times considering existing
reservations ?


For example, Mary already has a reservation from 12:30 to 16:00 and
John already has a reservation from 12:00 to 13:00.

In this case query for job with duration of 1.5 hours should return

   John 2014-10-28 10:00
   Mary 2014-10-28 10:00
   John 2014-10-28 10:30
   Mary 2014-10-28 10:30
   Mary 2014-10-28 11:00
   John 2014-10-28 13:00
   Mary 2014-10-28 16:00
   Mary 2014-10-28 16:30
   ... etc and also starting from next days

I tried a query (below) based on the answer in
http://stackoverflow.com/questions/13433863/how-to-return-only-work-time-from-reservations-in-postgresql
but it returns a wrong result:


   MARY  2014-10-28 13:00:00
   MARY  2014-10-29 22:34:40.850255
   JOHN  2014-10-30 22:34:40.850255
   MARY  2014-10-31 22:34:40.850255
   MARY  2014-11-03 22:34:40.850255

Also, the sliding start times 10:00, 10:30 etc. are not returned.

How to get the proper first start times ?

Query which I tried is

   insert into reservat (objekt2, during) values
   ('MARY', '[2014-10-28 11:30:00,2014-10-28 13:00:00)'),
   ('JOHN', '[2014-10-28 10:00:00,2014-10-28 11:30:00)');

   with gaps as (
   select
   yksus,
   upper(during) as start,
   lead(lower(during),1,upper(during)) over (ORDER BY during) - 
upper(during) as gap

   from (
   select
  yksus2.yksus,
  during
 from reservat join yksus2 on reservat.objekt2=yksus2.yksus
 where upper(during) >= current_date
   union all
   select
   yksus2.yksus,
   unnest(case
   when pyha is not null then array[tsrange1(d, d + 
interval '1 day')]
   when date_part('dow', d) in (0, 6) then 
array[tsrange1(d, d + interval '1 day')]

   when d::date =  current_Date then array[
   tsrange1(d, current_timestamp ),
   tsrange1(d + interval '20 hours', d + 
interval '1 day')]

   else array[tsrange1(d, d + interval '8 hours'),
  tsrange1(d + interval '20 hours', d + 
interval '1 day')]

   end)
   from yksus2, generate_series(
   current_timestamp,
   current_timestamp + interval '1 month',
   interval '1 day'
   ) as s(d)
   left join pyha on pyha = d::date
   ) as x
   )

   select yksus, start
 from gaps
   where gap >= interval '1 hour 30 minutes'
   order by start
   limit 30


Schema:

   CREATE EXTENSION btree_gist;
   CREATE TABLE Reservat (
 id serial primary key,
 objekt2 char(10) not null references yksus2 on update cascade 
deferrable,

 during tsrange not null check(
lower(during)::date = upper(during)::date
and lower(during) between current_date and current_date+ 
interval'1 month'


and (lower(during)::time >= '10:00'::time and 
upper(during)::time < '21:00'::time)

AND EXTRACT(MINUTE FROM lower(during)) IN (0, 15, 30,45)
 AND EXTRACT(MINUTE FROM upper(during)) IN (0, 15, 30, 45)
and (date_part('dow', lower(during)) in (1,2,3,4,5,6)
and date_part('dow', upper(during)) in (1,2,3,4,5,6))
 ),

  EXCLUDE USING gist (objekt2 WITH =, during WITH &&)
   );

   create or replace function holiday_check() returns trigger language 
plpgsql stable as $$

   begin
   if exists (select * from pyha  where pyha in 
(lower(NEW.during)::date, upper(NEW.during)::date)) then

   raise exception 'public holiday %', lower(NEW.during) ;
   else
   return NEW;
   end if;
   end;
   $$;

   create trigger holiday_check_i before insert or update on Reservat for 
each row execute procedure holiday_check();


   CREATE OR REPLACE FUNCTION public.tsrange1(start timestamp with time 
zone,

   finish timestamp with time zone ) RETURNS tsrange AS
   $BODY$
   SELECT tsrange(start::timestamp without time zone, finish::timestamp 
without time zone );

   $BODY$ language sql immutable;


   -- Workers
   create table yksus2( yksus char(10) primary key);
   insert into yksus2 values ('JOHN'), ('MARY');

   -- public holidays
   create table pyha( pyha date primary key);


I posted it also in 
http://stackoverflow.com/questions/26608683/how-to-find-first-free-start-times-from-reservations-in-postgres


Andrus

Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Andrus
Hi!

Would you be able to adapt this to your needs?:
Thank you very much. Great solution.
I refactored it as shown below.
The query returns only times for a single day. Changing the limit clause to 300 does
not return the next day.
How to return times for other days also, excluding Sundays and the public holidays in
the pyha table ?
Andrus.
Testcase is:
create table pyha (pyha date primary key);
insert into pyha(pyha) values('2014-10-29');
create table  yksus2(yksus char(10) primary key);
insert into yksus2 values ('JOHN'),('MARY');
CREATE EXTENSION btree_gist;

CREATE TABLE reservat
(
  reservat_id serial primary key,
  objekt2 char(10) not null references yksus2 on update cascade deferrable,
during tstzrange not null,
EXCLUDE USING gist (objekt2 WITH =, during WITH &&),

CONSTRAINT same_date
 CHECK (lower(during)::date = upper(during)::date),

CONSTRAINT max_1month_future 
 CHECK (lower(during) between current_date and current_date+ interval'1 
month' ),

CONSTRAINT time_between_1000_and_2100
 CHECK (lower(during)::time >= '10:00'::time and upper(during)::time < 
'21:00'::time),

CONSTRAINT lower_bound_included
 CHECK (lower_inc(during)),

CONSTRAINT upper_bound_excluded
 CHECK (not upper_inc(during)),

CONSTRAINT start_time_at_15minute_offset
 CHECK (EXTRACT(MINUTE FROM lower(during)) IN (0, 15, 30,45)),
-- or (extract(epoch from lower(during)::time)::int % (60*15) = 0)

CONSTRAINT end_time_at_15minute_offset
 CHECK (EXTRACT(MINUTE FROM upper(during)) IN (0, 15, 30,45)),

CONSTRAINT duration_between_15min_and_4hours
 CHECK (upper(during) - lower(during) between '15 mins'::interval and '4 
hours'::interval),

CONSTRAINT exclude_sundays
 CHECK (date_part('dow', lower(during)) in (1,2,3,4,5,6) )
);

create or replace function holiday_check() returns trigger language plpgsql 
stable as $$
begin
if exists (select * from pyha  where pyha between 
lower(NEW.during)::date and upper(NEW.during)::date) then
raise exception 'public holiday %', lower(NEW.during) ;
else
return NEW;
end if;
end;
$$;

create trigger holiday_check_i before insert or update on Reservat for each row 
execute procedure holiday_check();
INSERT INTO reservat (objekt2, during)
  VALUES ('MARY','[2014-10-28 11:30+2,2014-10-28 13:00+2)'::tstzrange);
INSERT INTO reservat (objekt2, during)
  VALUES ('JOHN','[2014-10-28 10:00+2,2014-10-28 11:30+2)'::tstzrange);

SELECT yksus2.yksus, times.period
FROM generate_series('2014-10-28 10:00+2'::timestamptz, '2014-10-28 21:00+2', 
'15 mins'::interval) times(period)
CROSS JOIN yksus2
LEFT JOIN reservat ON tstzrange(times.period, times.period + '1 hour 30 
mins'::interval, '[)') && reservat.during AND yksus2.yksus = reservat.objekt2
WHERE reservat.during IS NULL
ORDER BY 2, 1
LIMIT 300;

Re: [GENERAL] How to find earlest possible start times for given duration excluding reservations

2014-10-28 Thread Andrus
Hi!
A correction to this.  As it stands, it will show times like the following:
Thank you.
I posted your solution as alternative to Erwin answer in 
http://stackoverflow.com/questions/26608683/how-to-find-first-free-start-times-from-reservations-in-postgres
Andrus.
 

Re: [GENERAL] Converting char to varchar automatically

2014-10-08 Thread Andrus
Hi!

Also, it generates statements which try to change all columns to one-character-length
columns.

Andrus.


From: Andrus 
Sent: Monday, October 06, 2014 8:11 PM
To: Melvin Davidson 
Cc: pgsql-general@postgresql.org 
Subject: Re: [GENERAL] Converting char to varchar automatically

Hi!
SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.' ||  
quote_ident(c.relname)
   || ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE varchar;'
  FROM pg_class c 
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_attribute a ON a.attrelid = c.oid
  JOIN pg_type t ON t.oid = a.atttypid 
 WHERE t.typname = 'char' 
   AND n.nspname <> 'pg_catalog';

It does not return any data.
Andrus.


Re: [GENERAL] Converting char to varchar automatically

2014-10-08 Thread Andrus

Hi!

Using Tom's recommendation I added "not attisdropped" and now got the query

SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.'
|| quote_ident(c.relname)
|| ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE
varchar;'
  FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
WHERE t.typname = 'bpchar'
   AND c.relkind = 'r'
   AND n.nspname <> 'pg_catalog' and not attisdropped;

Will this create commands which replace all user-defined char things in the
database with varchar ?

TYPE varchar creates a single character column so most alter table commands
will fail.
How to change this so that the original char column width is kept ?
I looked into the tables used in this query but haven't found a column which holds
the defined width of a char column.
How to get it, or is it better to re-write this query using information_schema ?

How to change this query so that it creates a single alter table command for every
table (with multiple alter column clauses) to increase conversion speed ?

Andrus. 






Re: [GENERAL] Converting char to varchar automatically

2014-10-08 Thread Andrus
Hi!

Thank you.

This revised query should give you what you need:
SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.'
|| quote_ident(c.relname)
|| ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE varchar(' 
 || i.character_maximum_length || ');'
  FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
JOIN information_schema.columns i ON (i.table_name = c.relname AND 
 i.column_name = a.attname)
WHERE t.typname = 'bpchar'
   AND c.relkind = 'r'
   AND n.nspname <> 'pg_catalog' and not attisdropped;

How to create a single alter table command for every table ?
Can we use a string concatenation aggregate function, window functions, plpgsql, or
something else ?

Andrus.

Re: [GENERAL] Converting char to varchar automatically

2014-10-08 Thread Andrus
Hi!

There really is no easy way to make a single ALTER for each table unless you 
use a programming language. 

Isn't SQL a programming language ?

However, adding a  GROUP BY c.relname,  a.attname
would certainly simplify editing. Then you can combine all the
ALTER COLUMN's for each table.

I wrote

with stem as (
SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.'
|| quote_ident(c.relname) as prefix ,
  string_agg(
  ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE varchar(' || 
i.character_maximum_length ||')',
  ',' ) as body
  FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid
JOIN information_schema.columns i ON (i.table_name = c.relname AND 
i.column_name = a.attname)
WHERE t.typname = 'bpchar'
   AND c.relkind = 'r'
   AND n.nspname <> 'pg_catalog' and not attisdropped
group by 1
)

select prefix || ' '|| body || ';' as statement
from stem

Is this perfect ?

Andrus.

[GENERAL] Converting char to varchar automatically

2014-10-06 Thread Andrus
The database contains about 300 tables.
Most of them contain columns of char(n) type.

How to convert all those columns to varchar automatically ?

Is it possible to run some update commands on the system tables for this ?
Or is it possible to create a plpgsql script which dynamically creates alter table
alter column commands and PERFORMs them ?

Many tables have primary keys with char(n) columns and foreign keys on them.
Foreign keys are deferrable and initially immediate.
Will foreign keys allow such alter table alter column commands to be performed ?
Or is there a better way ?

Andrus.

Re: [GENERAL] Converting char to varchar automatically

2014-10-06 Thread Andrus
Hi!
SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.' ||  
quote_ident(c.relname)
   || ' ALTER COLUMN ' || quote_ident(a.attname) || ' TYPE varchar;'
  FROM pg_class c 
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_attribute a ON a.attrelid = c.oid
  JOIN pg_type t ON t.oid = a.atttypid 
 WHERE t.typname = 'char' 
   AND n.nspname <> 'pg_catalog';

It does not return any data.
Andrus.


Re: [GENERAL] How to find greatest record before known values fast

2014-10-04 Thread Andrus

Hi!

Thank you.

People often look for sales from different periods using different filters.
There are a lot of sales and every sale is an individual record in the sales table.
So increasing sequential scan speed is important.
I tried

create table t1(v char(100), p numeric(12,5));
create table t2(v varchar(100), p numeric(12,5));
insert into t1 select '', generate_series from generate_series(1,100);
insert into t2 select '', generate_series from generate_series(1,100);

and after that measured speed of

select sum(p) from t1

and

select sum(p) from t2

Both of them took approximately 800 ms.

So it looks like there is no difference in sequential scan speed and thus no
need to change the char types.


Andrus 






Re: [GENERAL] How to find greatest record before known values fast

2014-10-04 Thread Andrus

Hi!

Thank you.

In my db people often look for sales from different periods using different
filters and will sum them.
There are a lot of sales and every sale is an individual record in the sales table.
So increasing sequential scan speed is important.

I tried

create table t1(v char(100), p numeric(12,5));
create table t2(v varchar(100), p numeric(12,5));
insert into t1 select '', generate_series from generate_series(1,100);
insert into t2 select '', generate_series from generate_series(1,100);

and after that measured speed of

select sum(p) from t1

and

select sum(p) from t2

Both of them took approximately 800 ms

The speed of select max(length(v)) from t1 and select max(length(v)) from t2 is
also the same.

Also I tested with a production database containing 160 rows with a char(100) type
column by running

select sum(decimalcolumn) from testtable

before and after running

alter table testtable alter char100column type varchar(100)

The select sum command took 1700 ms in both cases.

So there is no difference in sequential scan speed.
Replacing char with varchar requires re-writing some parts of the code.
Disk space is a minor issue compared to the cost of a code rewrite.
It looks like it is not reasonable to replace char with varchar.

Andrus 






Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Andrus

Hi!

So kellaaeg is a time? Your best bet here would be to create an index that 
is an actual timestamp comprised of both kuupaev and kellaaeg. You could 
do this with to_timestamp by concatinating both fields together, or it may 
be easier to replace the space in kellaaeg with a colon and cast it to 
time, then add the two:

  kuupaev + replace( kellaaeg, ' ', ':' )::time
I know you can't alter the table, but can you create a view on top of the 
table? If you did that, you could have a real timestamp field in the view 
that is calculated from kuupaev and kellaaeg and you can create a 
functional index that uses the same calculation. That would be the easiest 
way to use this.


Thank you.
I solved this by creating a composite index on 3 columns and re-writing the query
as Tom recommended.

It looks like Tom's recommendation is simpler for me.
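
For reference, the rewritten form is presumably along these lines (a sketch of the usual pattern, assuming the 3 indexed columns are laonr, kuupaev and kellaaeg; not the exact query used):

   -- intended to be served by an index on (laonr, kuupaev, kellaaeg)
   SELECT kuupaev, kellaaeg
     FROM firma2.algsa
    WHERE laonr = 1
      AND (kuupaev, kellaaeg) <= (current_date, '23 59')
    ORDER BY kuupaev DESC, kellaaeg DESC
    LIMIT 1;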

Andrus. 






Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Andrus

Hi!

Thank you for explanations.


 the char type pads out the fields on disk.


It looks like you wrote that char takes more disk space.

from

http://www.pgcon.org/2013/schedule/attachments/269_tour-of-postgresql-data-types.pdf

page 28:

Unlike many databases, char(n) is NOT stored as a fixed-sized field in Postgres.
It is treated exactly the same as varchar(n) except for being padded.


So char type does not take more space than varchar.

Andrus. 






[GENERAL] How to find greatest record before known values fast

2014-10-02 Thread Andrus
I’m looking for a way to increase select statement speed in Postgres 9.0.

The table has the required index present. The desired result could be obtained using
the index (kuupaev,kellaaeg) immediately.
However Postgres scans all rows:

explain analyze SELECT 
max( kuupaev||kellaaeg ) as res
  from firma2.ALGSA 
  where laonr=1 and kuupaev <= current_date and 
 (kuupaev,kellaaeg) <= ( current_date, '23 59'  )

Aggregate  (cost=6932.65..6932.67 rows=1 width=10) (actual 
time=1608.590..1608.592 rows=1 loops=1)
  ->  Seq Scan on algsa  (cost=0.00..6571.49 rows=144464 width=10) (actual 
time=0.032..922.431 rows=144458 loops=1)
        Filter: ((laonr = 1::numeric) AND (kuupaev <= ('now'::text)::date) 
AND (ROW(kuupaev, kellaaeg) <= ROW(('now'::text)::date, '23 59'::bpchar)))
Total runtime: 1608.846 ms

In real query instead of 1, current_date and '23 59' there are variable 
parameters.

The table has both indexes present but Postgres will not use them.
Indexes can be changed and the query can be re-written if this helps.
The table structure cannot be changed. char columns cannot be replaced with varchar
columns. kuupaev must be of type date and kellaaeg must be of type char(5).

The query contains the redundant condition `kuupaev <= current_date` but the index
is still not used.

I also tried `SELECT max( (kuupaev,kellaaeg) )` but got an error that such a max()
function does not exist.

How to speed this query ?


Table structure is :

CREATE TABLE firma2.algsa
(
  id serial NOT NULL,
  laonr numeric(2,0),
  kuupaev date NOT NULL,
  kellaaeg character(5) NOT NULL DEFAULT ''::bpchar,
  osak character(10) NOT NULL,
  toode character(20) NOT NULL,
  partii character(15),
  kogus numeric(12,4) NOT NULL DEFAULT 0,
  hind numeric(15,5) NOT NULL DEFAULT 0,
  kulum numeric(15,5) NOT NULL DEFAULT 0,
  tegkogus numeric(12,4),
  stkuupaev date,
  klient character(12),
  masin character(5),
  CONSTRAINT algsa_pkey PRIMARY KEY (id)
);


CREATE INDEX algsa_kuupaev_idx
  ON firma2.algsa
  USING btree
  (kuupaev);

CREATE INDEX algsa_kuupaev_kellaaeg_idx
  ON firma2.algsa
  USING btree
  (kuupaev, kellaaeg);

using

PostgreSQL 9.0.3, compiled by Visual C++ build 1500, 32-bit

Posted also in

http://stackoverflow.com/questions/26165745/how-find-greatest-tuple-before-given-2-column-tuple-in-postgres-fast

Andrus.

[GENERAL] Creating index on concatenated char columns fails is Postgres 9 (regression)

2014-10-02 Thread Andrus
Steps to reproduce:

Run commands

create temp table test (kuupaev date, kellaaeg char(5)  ) on commit drop;
create index test on test ((kuupaev||kellaaeg)); 

in 

PostgreSQL 9.3.4, compiled by Visual C++ build 1600, 32-bit

Observed result:

ERROR:  functions in index expression must be marked IMMUTABLE

In 

PostgreSQL 8.4.4, compiled by Visual C++ build 1400, 32-bit

those commands work OK.

I need this index to speed up query

SELECT 
max( kuupaev||kellaaeg ) 
  from ALGSA 
  where laonr=?nlaonr and kuupaev <= ?prmLOPP and kuupaev||kellaaeg <= ?someparam

How to fix this, or is there another way to speed up this query?

Posted also in  
http://stackoverflow.com/questions/26161561/how-to-create-composite-index-in-postgres-9

Andrus.

[GENERAL] How to use recursive clause in one with query

2014-08-08 Thread Andrus
How to use one recursive query if there are many queries in a WITH statement ?

I tried 

with
a as ( select 1 as col1 ),
RECURSIVE t(n) AS (
VALUES (1)
  UNION ALL
SELECT n+1 FROM t WHERE n < 100
),
c as (select * from t)
  select * from c

but got error 

syntax error at or near "t"

at line   

RECURSIVE t(n) AS (

recursive b as ( shown in comment 

The with clause is used to create some non-recursive queries (a).
After them a recursive query is defined (b) and after it there are some other
non-recursive queries (c).

Using Postgres 9.1 and above.
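
(For reference, RECURSIVE is written once, directly after WITH, and then applies to the whole list of queries; the statement above in that form would be, as a sketch:)

   with recursive
   a as ( select 1 as col1 ),
   t(n) as (
       values (1)
     union all
       select n+1 from t where n < 100
   ),
   c as (select * from t)
   select * from c;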

Andrus.

Re: [GENERAL] String concatenation operator which keeps trailing spaces in CHAR(n) columns

2014-07-31 Thread Andrus

Hi,


hm, why do that at all?   how about avoid the char() type and create
views over tables using rpad when you want space padding:
create view v_foo as
 select *, rpad(f, 50, ' ') as f_padded;


I'm creating a converter which converts Visual FoxPro expressions to 
Postgres at runtime.

The FoxPro expression a+b keeps the trailing spaces after a.
To get the same result I need a + or some other operator with this behaviour.

Andrus. 






[GENERAL] String concatenation operator which keeps trailing spaces in CHAR(n) columns

2014-07-30 Thread Andrus

How to create a string concatenation operator which preserves trailing spaces
in CHAR(n) type columns ?

I tried the code below, but it returns AB (without spaces).
How to force it to return A B (keeping the space after A) ?

Andrus.

CREATE OR REPLACE FUNCTION public.stringconcat(left text, right text)
RETURNS text
LANGUAGE sql IMMUTABLE
AS $BODY$
SELECT concat($1, $2) ;
$BODY$;

CREATE OPERATOR public.+ (
   leftarg = text,
   rightarg = text,
   procedure = public.stringconcat
);

create temp table test (col1  char(2)) on commit drop;
insert into test values ('A');
select col1 +  'B'
  from test;


I posted similar question also in

http://stackoverflow.com/questions/24975118/how-to-create-string-concatenation-operator-which-preserves-trailing-spaces-in-c 






Re: [GENERAL] String concatenation operator which keeps trailing spaces in CHAR(n) columns

2014-07-30 Thread Andrus

Hi!


Use bpchar instead of text in the definition of function and operator.
Otherwise col1 gets cast to text and loses its trailing spaces.


Thank you very much.
It worked.
Which notation should be used for this ?

Is it reasonable to use + as such an operator for strings, or should some other
notation be used ?


I tried:

CREATE OR REPLACE FUNCTION public.concatkeepspaces(left bpchar, right 
bpchar)

RETURNS bpchar
LANGUAGE sql IMMUTABLE
AS $BODY$
SELECT concat($1,$2);
$BODY$;

CREATE OPERATOR public.+ (
   leftarg = bpchar,
   rightarg = bpchar,
   procedure = public.concatkeepspaces
);
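
A quick check against the earlier test case (expected to keep the trailing space of the char(2) value):

   create temp table test (col1 char(2)) on commit drop;
   insert into test values ('A');
   select col1 + 'B' from test;   -- should now give 'A B'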

Andrus. 






Re: [GENERAL] How to fix lost synchronization with server

2014-05-09 Thread Andrus

Where to get this fix in binary form for Windows 32-bit ?

Here, but you will need to wait until 9.3.5 is out:
http://www.postgresql.org/download/windows/.


When will 9.3.5 or a Windows x32 nightly or test build be out ?

Andrus.




Re: [GENERAL] How to fix lost synchronization with server

2014-05-08 Thread Andrus

I looked back at the previous thread you mentioned (bug #7914) and was
reminded that we never did understand what was going on in that report.
I'm not sure if you are seeing the same thing though.  That user reported
that he was able to see pg_dump's memory consumption bloating well beyond
what it ought to be (I suppose he was watching the process in whatever
Windows' equivalent of ps or top is).  Do you see that?


This occurs on a customer computer which I didn't observe.

I added the --inserts parameter to pg_dump and set ssl=false in the
postgresql.conf file.


After that the backup works OK.
Is it OK to use the --inserts parameter ?

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to fix lost synchronization with server

2014-05-08 Thread Andrus

But I wouldn't
call that a reliable fix.  You'd be better off applying the patch.


Where to get this fix in binary form for Windows 32-bit ?

pg_dump.exe uses the files below [1]. Is it sufficient to replace the libpq.dll
file ?

Where to get a compiled version of it, or how to compile it on Windows ?

[1]
libeay32.dll
libiconv.dll
libintl.dll
libpq.dll
msvcr100.dll
pg_dump.exe
ssleay32.dll
zlib1.dll 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

After upgrading the server to Postgres 9.3 on Debian the customer cannot create
backups anymore. pg_dump returns the
error "lost synchronization with server":

C:\myapp\..\pg_dump\pg_dump.exe -ib -Z3 -f C:\mybackup.backup -Fc -h
1.2.3.4 -U user -p 5432 mydb

pg_dump: Dumping the contents of table attachme failed: PQgetCopyData()
failed.
pg_dump: Error message from server: lost synchronization with server: got
message type d, length 5858454
pg_dump: The command was: COPY firma1.attachme (id, idmailbox, attachname,
attachbody, attachtype) TO stdout;


The attachme table contains 4487 records.
Its total size is 1016 MB. Most of the data is contained in one bytea column.

I changed

ssl_renegotiation_limit = 512GB

in postgresql.conf but problem persists.

postgres log file does not contain any information about this.

How to fix or diagnose the issue ?

Should I

1. Add the --inserts command-line option to pg_dump. According to (1) it fixes
the issue.

2. Turn ssl off
3. Change something in VMWare. According to (1) the problem occurs under VMWare only.


Server:

PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.7.2-5) 4.7.2, 64-bit
Debian Linux x64 is running under VMWare, 2 cores

Apache and Mono 3.2.8 with mod_mono (an MVC4 application) are also running on this
server.


Client:

A Windows computer running the 9.3 pg_dump.exe over the LAN, but the external IP
address (1.2.3.4) is used.


It worked when the server was a Windows 2003 server running an earlier Postgres 9
without SSL.


Andrus.

(1) https://groups.google.com/forum/#!topic/pgsql.bugs/-bS1Lba3txA 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

Hi!


pg_dump: Error message from server: lost synchronization with server: got
message type d, length 5858454

I think this is probably an out-of-memory situation inside pg_dump, ie
libpq failing to make its input buffer large enough for the incoming row.
It's hard to believe that there's not 6MB available on any modern machine,
so I'm thinking this is an OS-level restriction on how much memory we can
get.  On a Unix machine I'd recommend looking at the ulimit settings
pg_dump is being run under.  Dunno the equivalent for Windows.


The backup computer runs a modern Windows client OS.
It has GBs of memory and swap file capability.

To my knowledge there is no memory setting in Windows which can
restrict a 6MB allocation.
On memory shortage Windows shows a message like "Increasing swap file size".
The customer didn't report such a message.

The dump worked for years without issues when the server was a 32-bit Windows 2003
server and Postgres and pg_dump were an earlier version 9 (but after the upgrade
new rows were added to the attachme table).

How to create backup copies or diagnose the issue ?
I can change the pg_dump execution parameters.
I can install VC++ Express and compile something to add diagnostics if this
can help.
Maybe this message could be improved to include more details about the reason.

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

Hi!


Given this is an upgrade, though the specifics were omitted, is there some
possibility of a pg_dump/server version mis-match being the cause.  I could
see where a 32-bit client connecting to a 64bit server could possible
exhibit apparent memory-related issues.


pg_dump is 32-bit version. pg_dump -V returns

pg_dump (PostgreSQL) 9.3.0


Server is x64 :

PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.7.2-5) 4.7.2, 64-bit

Can this cause the issue ?

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] How to distribute budget value to actual rows in Postgresql

2014-02-09 Thread Andrus

Budget table contains jobs with loads:

   create temp table budget (
 job char(20) primary key,
 load numeric(4,1) not null check (load0 )
 );
   insert into budget values ( 'programmer', 3 );
   insert into budget values ( 'analyst', 1.5 );

Actual table contains actual loads by employees:

   create temp table actual (
 job char(20),
 employee char(20),
 load numeric(4,1) not null check (load0 ),
 contractdate date,
 primary key (job, employee)
 );

   insert into actual values ( 'programmer', 'John',  1, '2014-01-01' );
   -- half time programmer:
   insert into actual values ( 'programmer', 'Bill', 0.5, '2014-01-02' );

   insert into actual values ( 'analyst', 'Aldo', 1, '2014-01-03' );
   insert into actual values ( 'analyst', 'Margaret', 1, '2014-01-04' );

The result table should show the difference between budgeted and actual jobs so that
the budget load is distributed to employees in contract date order.

If the budget load is greater than the sum of the actual job loads, a separate
budget line with an empty employee should appear.

In the data above, 1.5 programmers are missing and there are 0.5 analysts too many.

Result should be

   JobEmployee  Budget  Actual  Difference

   programmer John  1   1   0
   programmer Bill  0.5 0.5 0
   programmer   1.5 0   1.5
   analystAldo  1   1   0
   analystMargaret  0.5 1   -0.5

How to create such table in modern Postgresql ?
Can rank function with full join used or other idea ?

I tried

   select
coalesce(budget.job, actual.job ) as job,
employee,
budget.load as budget,
coalesce(actual.load,0) as actual,
coalesce(budget.load,0)-coalesce( actual.load,0) as difference
   from budget full join actual on (job)
   order by contractdate

but this does not distribute budget load to employee rows.

I posted this also in

http://stackoverflow.com/questions/21664842/how-to-distribute-budget-value-to-actual-rows-in-postgresql
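
A minimal sketch of one possible approach, assuming PostgreSQL 8.4+ window functions
and distinct contractdate values per job (ties would need an extra ordering column):

WITH ranked AS (
    SELECT a.job, a.employee, a.load AS actual, b.load AS budget_total,
           sum(a.load) OVER (PARTITION BY a.job ORDER BY a.contractdate) - a.load AS prev_load
    FROM actual a
    JOIN budget b USING (job)
)
SELECT job, employee,
       greatest(least(budget_total - prev_load, actual), 0) AS budget,
       actual,
       greatest(least(budget_total - prev_load, actual), 0) - actual AS difference
FROM ranked
UNION ALL   -- leftover budget becomes a row without an employee
SELECT b.job, NULL, b.load - t.total, 0, b.load - t.total
FROM budget b
JOIN (SELECT job, sum(load) AS total FROM actual GROUP BY job) t USING (job)
WHERE b.load > t.total;

On the sample data this gives budget 1 / 0.5 / 1.5 for the programmer rows and
1 / 0.5 for the analysts, matching the expected result; output ordering is left out.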

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-11-09 Thread Andrus

Hi,


Just out of curiosity, what earlier version was that that was able to
run this query quickly?


It was installed at the customer site in May 2012 on a Windows 2003 server and the
latest RTM version of Postgres x32 at that time was used.

This year the server was upgraded to Windows 2008 x64, Postgres 9.3 x64 was used and
the database was restored from a backup copy.
After that this query started to run forever, so I assumed that this was a
Postgres version issue.

It is probably possible to reproduce the issue by restoring the database to an
earlier version.


Personally I'd try to get rid of the OR, perhaps with
SELECT * FROM toode
WHERE toode in (SELECT toode FROM tempkaive UNION ALL
   SELECT toode FROM tempalgsemu)
You want to end up with a plan that has no SubPlans in it, and in
a quick check this looked promising.


I fixed the issue by using

create index tempkaivetoode on tempkaive(toode);
create index tempalgemutoode on tempalgsemu(toode);

and using EXISTS.

Will your suggestion run faster ? Is it reasonable to switch to your
suggestion ?

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus

Query

SELECT * FROM toode
WHERE toode in (SELECT toode FROM tempkaive)
OR toode in (SELECT toode FROM tempalgsemu)

stops working after upgrading to 9.3 RTM on Windows from an earlier version.

Task Manager shows that the postgres.exe process has constantly 13% CPU usage
(this is an 8-core computer) and its private working set memory is 16 MB.


PgAdmin shows that this query is running.

The toode field type is char(20) and it is the toode table's primary key.

tempkaive and tempalgsemu are temporary tables created earlier in this
transaction. They do not have indexes.

toode is a real table which has 509873 records.
The tempkaive temp table is probably bigger than the toode table and the
tempalgsemu temp table is smaller.


How to fix this or find the reason ?
How to rewrite the query so that it works ?

The analyze command was executed but the problem persists.
I tested it running Postgres 9.3 RTM on 32-bit Windows 7 and 64-bit Windows
2008 R2 servers.

In both cases the same problem occurs.
Only a single user is using the database and only this query is running.


Locks window shows:

7840toy53749admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840toy53652admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840toy54605admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840toy54608admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840toy49799admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840admin7/133757/13375ExclusiveLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
7840toy53750admin7/13375AccessShareLockYes 
2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode 
FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)


Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus

Hi,

Thank you.


Generally, WHERE IN (SELECT) should be rewritten as WHERE EXISTS
(SELECT):
SELECT * FROM toode o
WHERE EXISTS (SELECT toode FROM tempkaive i WHERE o.toode = i.toode)
   OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)


I re-wrote it. It now hangs on this line:

SELECT * FROM toode o WHERE  exists (SELECT toode FROM tempkaive i where 
o.toode = i.toode ) OR EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode 
= i.toode)


I waited 18 minutes but the query is still running.

Maybe it will take an extremely long time.
How to make it work ?

The toode table structure is below. It contains 509873 records.
tempkaive and tempalgsemu are big temporary tables created earlier in this
transaction. They do not have indexes and have a lot of records.

Andrus.

CREATE TABLE firma1.toode
(
 grupp character(1),
 toode character(20) NOT NULL,
 ribakood character(20),
 ribakood2 character(20),
 ribakood3 character(20),
 nimetus character(50),
 yhik character(6),
 myygikood character(4),
 tykke numeric(9,2),
 liik character(10),
 kontonr character(10),
 engnimetus character(50),
 rusnimetus character(50),
 finnimetus character(50),
 lvlnimetus character(50),
 markused text,
 myygihind numeric(15,5),
 jaehind numeric(15,2),
 katteprots numeric(6,2),
 paritoluri character(2),
 ostuhind numeric(15,5),
 valmyygih numeric(15,5),
 valraha character(3),
 ovalraha character(3),
 aktsiis numeric(10,5),
 kogpak numeric(9,4) NOT NULL DEFAULT 0,
 soodkogus numeric(8,1),
 vaikkogus numeric(12,4),
 hinne numeric(8,2),
 yhikuteise numeric(9,4),
 norm numeric(8,4),
 soetaeg date,
 soetarve character(25),
 algmaksumu numeric(12,2),
 kasutaja character(12),
 kulum character(10),
 kulukonto character(10),
 oper character(3),
 objekt1 character(10),
 objekt2 character(10),
 objekt3 character(10),
 objekt4 character(10),
 objekt5 character(10),
 objekt6 character(10),
 objekt7 character(10),
 objekt8 character(10),
 objekt9 character(10),
 parimenne date,
 asukoht character(25),
 minkogus numeric(12,4),
 masin character(5),
 ryhm character(10),
 klass character(5),
 kaubasumma text,
 tasusumma text,
 pangateen ebool,
 analoog character(20),
 taara character(20),
 taara2 character(20),
 taarakaal numeric(9,5),
 taara2kaal numeric(9,5),
 hankija character(12),
 hinnak character(5),
 eelminekuu ebool,
 distribute ebool,
 plaanhind numeric(15,5),
 timestamp character(14) NOT NULL DEFAULT to_char(now(), 
'MMDDHH24MISS'::text),
 atimestamp character(14) NOT NULL DEFAULT to_char(now(), 
'MMDDHH24MISS'::text),

 username character(10),
 changedby character(10),
 kgasuvi numeric(2,0),
 ktasuvi numeric(2,0),
 kgatalv numeric(2,0),
 ktatalv numeric(2,0),
 kylmik numeric(2,0),
 tkmkoef numeric(3,1),
 paak numeric(4,0),
 kassakeeld ebool,
 kaalukaup ebool,
 saadakaalu ebool,
 sailivusae numeric(2,0),
 kaubakood character(10),
 netomass numeric(12,4),
 seisund character(1),
 tootjakood character(40),
 klassif3 numeric(7,0),
 prots1 numeric(6,2),
 prots2 numeric(6,2),
 prots3 numeric(6,2),
 ale1 numeric(8,2),
 ale2 numeric(8,2),
 ale3 numeric(8,2),
 tootja character(10),
 soomes numeric(12,4),
 originaal character(20),
 eekjaehind numeric(15,2),
 amordipiir numeric(12,2),
 pant character(20),
 hulgihind numeric(12,2),
 transportw ebool,
 tykke2 numeric(9,2),
 tootjaviit character(40),
 CONSTRAINT toode_pkey PRIMARY KEY (toode),
 CONSTRAINT toode_changedby_fkey FOREIGN KEY (changedby)
 REFERENCES kasutaja (kasutaja) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kasutaja_fkey FOREIGN KEY (kasutaja)
 REFERENCES firma1.klient (kood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kaubakood_fkey FOREIGN KEY (kaubakood)
 REFERENCES nomenkla (kood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kontonr_fkey FOREIGN KEY (kontonr)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kulukonto_fkey FOREIGN KEY (kulukonto)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_kulum_fkey FOREIGN KEY (kulum)
 REFERENCES firma1.konto (kontonr) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_liik_fkey FOREIGN KEY (liik)
 REFERENCES firma1.artliik (liik) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
 CONSTRAINT toode_myygikood_fkey FOREIGN KEY (myygikood)
 REFERENCES firma1.myygikoo (myygikood) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT toode_objekt1_fkey FOREIGN KEY (objekt1)
 REFERENCES firma1.yksus1 (yksus) MATCH SIMPLE
 ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
 CONSTRAINT

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,

thank you.
Could you please post EXPLAIN for that query? 
As recommended I changed the query to use exists :

SELECT * FROM toode o WHERE  exists (SELECT toode FROM tempkaive i where 
o.toode = i.toode ) OR
  EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode)

It still hangs in same way. This query explain is:

Seq Scan on toode o  (cost=0.00..172913763.23 rows=382319 width=1681)
  Filter: ((SubPlan 1) OR (alternatives: SubPlan 2 or hashed SubPlan 3))
  SubPlan 1
    ->  Seq Scan on tempkaive i  (cost=0.00..4566.52 rows=14 width=0)
          Filter: (o.toode = toode)
  SubPlan 2
    ->  Seq Scan on tempalgsemu i_1  (cost=0.00..348.98 rows=27 width=0)
          Filter: (o.toode = toode)
  SubPlan 3
    ->  Seq Scan on tempalgsemu i_2  (cost=0.00..335.58 rows=5358 width=84)


 How 'fat' are the temporary tables - just a couple of columns or really wide?

tempalgsemu has 14 columns
tempkaive has 31 columns

The structures are below. The toode structure was posted in a separate letter.

Andrus.

tempalgsemu :

Field  Field Name  TypeWidthDec   Index   Collate Nulls 
   NextStep
1  ID  Integer 4Yes
2  LAONR   Numeric 4Yes
3  KUUPAEV Date8Yes
4  KELLAAEGCharacter   5Yes
5  OSAKCharacter  10Yes
6  TOODE   Character  20Yes
7  PARTII  Character  15Yes
8  KOGUS   Numeric14  4 Yes
9  HINDNumeric17  5 Yes
   10  KULUM   Numeric17  5 Yes
   11  TEGKOGUSNumeric14  4 Yes
   12  STKUUPAEV   Date8Yes
   13  KLIENT  Character  12Yes
   14  MASIN   Character   5Yes
** Total **  156

 
tempkaive



Field  Field Name  TypeWidthDec   Index   Collate Nulls 
   NextStep
1  DOKTYYP Character   1Yes
2  DOKUMNR Integer 4Yes
3  KUUPAEV Date8Yes
4  KELLAAEGCharacter   5Yes
5  RAHACharacter   3Yes
6  EXCHRATENumeric16  8 Yes
7  KLIENT  Character  12Yes
8  ID  Integer 4Yes
9  TOODE   Character  20Yes
   10  PARTII  Character  15Yes
   11  KULUPARTII  Character  15Yes
   12  KOGPAK  Numeric11  4 Yes
   13  KOGUS   Numeric14  4 Yes
   14  HINDNumeric17  5 Yes
   15  MYYGIKOOD   Character   4Yes
   16  YHIKCharacter   6Yes
   17  NIMETUS Character  50Yes
   18  HINNAK  Character   5Yes
   19  TKOGUS  Numeric20  6 Yes
   20  UKOGUS  Numeric20  6 Yes
   21  KUSTPARTII  Character  15Yes
   22  KAUBASUMMA  Numeric17  5 Yes
   23  KULUOBJEKT  Character  10Yes
   24  FIFOEXPENS  Logical 1Yes
   25  KULUM   Numeric17  5 Yes
   26  SKAUBASUMM  Numeric17  5 Yes
   27  ST  Numeric 3Yes
   28  VM  Numeric 3Yes
   29  VKAUBASUMM  Numeric20  6 Yes
   30  YKSUS   Character  10Yes
   31  SIHTYKSUS   Character  10Yes
** Total **  378



On Mon, Sep 23, 2013 at 7:08 PM, Andrus kobrule...@hot.ee wrote

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
I fixed the issue by creating indexes for the temporary tables before running the query:

create index on tempalgsemu(toode);
create index on temphetkes(toode);
SELECT * FROM toode o WHERE  exists (SELECT toode FROM tempkaive i where 
o.toode = i.toode ) OR
  EXISTS (SELECT toode FROM tempalgsemu i WHERE o.toode = i.toode);

Is this the best fix ?

Andrus.



Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,
21  shared_buffers 2400MB configuration file
What are effective_cache_size and work_mem set to? The defaults? 

Yes. 

They are good candidates to be increased. effective_cache_size could be set to 
(for example) 10GB, depending on how much memory gets consumed by the other 
application(s) running on that server.

There are 10 human users and one web service user. The Windows Task Manager "cached"
value shows 10 GB in the evening when nobody is working on the server.

I changed those to

effective_cache_size= 10GB
work_mem = 400MB

Hope that this is OK.

The EXPLAIN ANALYZE plan of your query will show if work_mem needs to be 
increased, as there will be a line saying something like External merge: 
disk sort (or something like that, can't recall the exact message off the top 
of my head).


After adding indexes log contains

LOG:  duration: 11045.000 ms  statement: create index on 
tempkaive(toode);create index on tempalgsemu(toode);SELECT * FROM toode o WHERE 
 exists (SELECT toode FROM tempkaive i where o.toode=i.toode) OR EXISTS (SELECT 
toode FROM tempalgsemu i WHERE o.toode = i.toode)


  On the development computer from which the explain was posted, the problem also
occurs with a copy of the database:

  shared_buffers;128MB;configuration file



 You likely want to bump that up closer to 1GB.

I changed it. Thank you very much.

Andrus.

Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi!
Could you also post the results of the following query?
SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');

On the real server where the problem is:

1  DateStyle  ISO, DMY   session   
2  default_text_search_config pg_catalog.simple  configuration file
3  extra_float_digits 2  session   
4  lc_messagesEstonian_Estonia.1257  configuration file
5  lc_monetaryEstonian_Estonia.1257  configuration file
6  lc_numeric Estonian_Estonia.1257  configuration file
7  lc_timeEstonian_Estonia.1257  configuration file
8  listen_addresses   *  configuration file
9  log_destinationstderr configuration file
10  log_line_prefix%t %u %d   configuration file
11  log_lock_waits on configuration file
12  log_min_duration_statement 10sconfiguration file
13  log_min_error_statementwarningconfiguration file
14  log_temp_files 2000kB configuration file
15  log_timezone   Europe/Helsinkiconfiguration file
16  logging_collector  on configuration file
17  max_connections100configuration file
18  max_stack_depth2MBenvironment variable  
19  port   5432   configuration file
20  search_pathfirma1, public session   
21  shared_buffers 2400MB configuration file
22  TimeZone   Europe/Helsinkiconfiguration file

On the development computer from which the explain was posted, the problem also
occurs with a copy of the database:
 
application_name;pgAdmin III - Query Tool;client
bytea_output;escape;session
client_encoding;UNICODE;session
client_min_messages;notice;session
DateStyle;ISO, DMY;session
default_text_search_config;pg_catalog.simple;configuration file
lc_messages;Estonian_Estonia.1257;configuration file
lc_monetary;Estonian_Estonia.1257;configuration file
lc_numeric;Estonian_Estonia.1257;configuration file
lc_time;Estonian_Estonia.1257;configuration file
listen_addresses;*;configuration file
log_destination;stderr;configuration file
log_line_prefix;%t ;configuration file
log_timezone;Europe/Helsinki;configuration file
logging_collector;on;configuration file
max_connections;100;configuration file
max_stack_depth;2MB;environment variable
port;5432;configuration file
shared_buffers;128MB;configuration file
TimeZone;Europe/Helsinki;configuration file

  Also, what is the total memory in the server?

On the devel computer where the tests are performed, 4 GB.
The real server has 16 GB RAM.
The real server runs Postgres for this database and an ASP.NET MVC3 application
which uses this same database.
Can settings on the real server be changed to increase performance ?


Andrus.


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Andrus
Hi,
For cluster-wide setting you will probably want to drop that significantly -- 
start lower, as in somewhere around 10MB and work up from there as necessary. 
For the queries you are finding slow (the reason for these emails) you can set 
work_mem specifically for the session.

Eg.

set work_mem to '400MB';

run your query

reset work_mem; 

I changed it to 10MB.
The problematic commands now run in less than 10 seconds.

http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

writes that the max reasonable value on Windows for shared_buffers is 512MB.

Is my setting shared_buffers = 2400MB reasonable on Windows ?

Andrus.

[GENERAL] The signature of postgresql-9.3.0-1-windows.exe is corrupt or invalid

2013-09-13 Thread Andrus
9.3 x32 RTM was downloaded from 


http://get.enterprisedb.com/postgresql/postgresql-9.3.0-1-windows.exe

in Windows 7 x64 using IE 10

After downloading, the IE message

The signature of postgresql-9.3.0-1-windows.exe is corrupt or invalid

appears and IE does not allow running it.

How to install 32-bit 9.3 on Windows 7 x64 ?


Andrus.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Fixing or diagnosing Canceled on identification as a pivot, during write

2012-09-13 Thread Andrus
After switching to the PostgreSQL 9.1 serializable transaction level for all
transactions, errors like

40001:ERROR: could not serialize access due to read/write dependencies among 
transactions
Reason code: Canceled on identification as a pivot, during write.;

started to appear in the log file while posting a single document.

The code which causes them is below.

The code involves only a single document (in this example id 95162). The document is
probably not accessed by others.
How to fix or diagnose this error ?

tasutud1 is a temporary table created earlier in the transaction:

CREATE TEMP TABLE tasutud1 (dokumnr INTEGER, tasutud NUMERIC(1)) ON COMMIT DROP

The other tables are permanent tables updated and accessed by 7 users concurrently.


Code where exception occurs is :

CREATE TEMP TABLE ids(dokumnr INT) ON COMMIT DROP;
INSERT INTO ids VALUES(95162);
analyze ids;UPDATE DOK set
  kinnitatud = TRUE,
  doksumma=CASE WHEN TRUE THEN COALESCE(doksumma.doksumma,0) ELSE 0 END,
  tasumata =CASE WHEN TRUE AND dok.krdokumnr IS NULL and
  dok.doktyyp IN ('G','O') THEN
   doksumma.doksumma-COALESCE(doksumma.tasutud,0) ELSE 0 END
 FROM
(SELECT 
ids.dokumnr,
SUM( CASE WHEN rid.toode is NULL OR LENGTH(RTRIM(rid.toode))2 OR 
toode.grupp'S' or
   (STRPOS(toode.klass,'T')!=0 AND STRPOS(toode.klass,'E')=0)
  THEN
ROUND(COALESCE(rid.hind,0)*CASE WHEN COALESCE(rid.kogus,0)=0 THEN 1 
ELSE rid.kogus END*CASE WHEN COALESCE(rid.kogpak,0)=0 THEN 1 ELSE rid.kogpak 
END,2) ELSE 0 END ) AS doksumma,
max(tasutud1.tasutud) as tasutud
  FROM ids 
  JOIN dok USING(dokumnr)
  JOIN rid USING(dokumnr)
  LEFT JOIN toode USING(toode)
  LEFT JOIN tasutud1 ON tasutud1.dokumnr=ids.dokumnr
WHERE not rid.fifoexpens and not rid.calculrow
  and (not dok.inventuur or rid.kogus0 )
GROUP BY 1
) doksumma 
left join bilkaib on bilkaib.dokumnr=doksumma.dokumnr and bilkaib.alusdok='LO'
WHERE dok.dokumnr=doksumma.dokumnr

Should this code be split into multiple commands to find which part causes the
exception, or is there another idea ?

Andrus.

Re: [GENERAL] How to raise error from PostgreSql SQL statement if some condition is met

2012-08-12 Thread Andrus

Thank you very much.
It worked.

I tried to extend it to pass message parameters. I tried the code below but got a
syntax error. How to pass message parameters ?


Andrus.

CREATE OR REPLACE FUNCTION RaiseException(text, variadic )
 RETURNS void LANGUAGE plpgsql AS
$BODY$
BEGIN
  RAISE EXCEPTION  $1, $2;
END;
$BODY$;

SELECT RaiseException('Exception Param1=% Param2=%', 'textvalue', 2 ); 



-
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to raise error from PostgreSql SQL statement if some condition is met

2012-08-12 Thread Andrus

... RaiseException(text, variadic text[])
..



VARIADIC is keyword, not datatype


Thank you.

I tried the code below but got the error shown in the comment.
No idea what I'm doing wrong.

Andrus.


CREATE OR REPLACE FUNCTION RaiseException(text, variadic text[] )
 RETURNS void LANGUAGE plpgsql AS
$BODY$
BEGIN
 -- ERROR:  syntax error at or near $1
 RAISE EXCEPTION  $1, $2;
END;
$BODY$;

SELECT RaiseException('Exception Param1=% Param2=%', 'textvalue', '2' ); 
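
A minimal sketch of one way this could work on 9.1 (the placeholder handling here is
my assumption, not the list's answer): RAISE needs a literal format string, so build
the message first and raise it with a single %:

CREATE OR REPLACE FUNCTION RaiseException(msg text, VARIADIC args text[])
 RETURNS void LANGUAGE plpgsql AS
$BODY$
DECLARE
  full_msg text := msg;
  a text;
BEGIN
  -- naive substitution: replace one bare % per argument, left to right
  -- (assumes the arguments themselves contain no % or backslashes)
  FOREACH a IN ARRAY args LOOP
    full_msg := regexp_replace(full_msg, '%', a);
  END LOOP;
  RAISE EXCEPTION '%', full_msg;
END;
$BODY$;

SELECT RaiseException('Exception Param1=% Param2=%', 'textvalue', '2');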


-
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to raise error from PostgreSql SQL statement if some condition is met

2012-08-11 Thread Andrus
I'm looking for a way to raise an error from an SQL select if some condition is met.
I tried the code below but got the error shown in the comment.
How to fix it ?

Andrus

CREATE OR REPLACE FUNCTION exec(text)
  RETURNS text AS
$BODY$ 
BEGIN 
  EXECUTE $1; 
  RETURN $1; 
END; 
$BODY$
  LANGUAGE plpgsql VOLATILE;
  
-- ERROR:  syntax error at or near raise
-- LINE 1: raise 'test' 

select exec('raise ''test'' ') where true -- in real application true is 
replaced by some condition 
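
A minimal sketch of a workaround, assuming the goal above (raise_error is a
hypothetical helper name): RAISE is a plpgsql statement, not plain SQL, which is why
EXECUTE 'raise ...' fails, but a small plpgsql function can do the raising:

CREATE OR REPLACE FUNCTION raise_error(msg text) RETURNS void
LANGUAGE plpgsql AS
$BODY$
BEGIN
  RAISE EXCEPTION '%', msg;
END;
$BODY$;

-- if the WHERE condition is false, zero rows are selected and nothing is raised
SELECT raise_error('test') WHERE true;  -- replace true with the real condition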

[GENERAL] how to return results from code block

2012-06-30 Thread Andrus
How to return a single row or a result set from a code block executed using the
ADO.NET ExecuteQuery() method ?
I tried

DO $$
declare 
  i integer :=0;

begin
select i+1 as res1, i+2 as res2;
END$$;

but got error:

ERROR:  query has no destination for result data

How to return a single-row result from a pgsql code block ?
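
A minimal sketch of one workaround, assuming a helper function is acceptable (the
name my_block is hypothetical): a DO block cannot return rows, but a plpgsql function
with OUT parameters can, and it is callable from ADO.NET like any other query:

CREATE OR REPLACE FUNCTION my_block(OUT res1 integer, OUT res2 integer)
LANGUAGE plpgsql AS
$$
DECLARE
  i integer := 0;
BEGIN
  res1 := i + 1;
  res2 := i + 2;
END;
$$;

SELECT * FROM my_block();   -- returns one row: (1, 2)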

Andrus.

[GENERAL] How to insert record only if primary key does not exist

2012-06-30 Thread Andrus
Table is defined as

CREATE TABLE firma1.klient (
  kood character(12) NOT NULL DEFAULT 
nextval('firma1.klient_kood_seq'::regclass),

);

How to insert a record into this table only if the primary key does not exist ?
I tried the code below but got the strange error shown in the log file.
Why does this error occur ?

Andrus.

2012-06-30 15:51:04 EEST ERROR:  duplicate key value violates unique constraint 
klient_pkey
2012-06-30 15:51:04 EEST DETAIL:  Key (kood)=(20037   ) already exists.
2012-06-30 15:51:04 EEST STATEMENT:  insert into klient (
kood,
nimi,
tanav,
piirkond,
postiindek
)
select ((E'20037')), ((E'Statoil Fuel  Retail')), ((E'')), ((E'10148 
nnn')),((E''))
from klient 
where not exists (select 1 from klient where kood =((E'20037')))
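
A minimal sketch of what I believe goes wrong and one fix (keeping the literals from
the log above): the FROM klient clause makes the SELECT produce one candidate row per
existing klient row, so the same key is inserted many times; selecting the literals
once avoids that:

insert into klient (kood, nimi, tanav, piirkond, postiindek)
select ((E'20037')), ((E'Statoil Fuel  Retail')), ((E'')), ((E'10148 nnn')), ((E''))
where not exists (select 1 from klient where kood = ((E'20037')));

This is still racy under concurrency: two sessions can both pass the NOT EXISTS
check and one of them then hits the unique constraint anyway, so the error still has
to be handled (or, on 9.5+, INSERT ... ON CONFLICT DO NOTHING can be used).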

Re: [GENERAL] How to perform full text search

2012-03-18 Thread Andrus
Parse the entry string into words (aka tokens) and assemble with the and 
operator. E.g. 'red cat' becomes 'red  cat'. Then add vector; more info 
in articles I provide links to later in this note.
WHERE to_tsvector ( productname || ' ' || productdescription ) @@ 
to_tsquery ( 'red  cat' )


Since there were no responses for a while, I went with another solution.
I split the search string into words like you did, but converted the query to

select
+case when productname ilike '%red%'  then 2 else 0 end
+case when productdescription ilike '%red%'  then 1 else 0 end
+case when productname ilike '%cat%'  then 1.7 else 0 end
+case when productdescription ilike '%cat%'  then 0.7 else 0 end
from products
order by 1 desc
limit 100

This allows relevance to be defined.
Is my solution reasonable ?

Andrus 
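
For comparison, a minimal sketch of the built-in full text search approach (needs
8.3+, or the tsearch2 contrib module on 8.2; the 'simple' configuration and the A/B
weights are my assumptions):

SELECT *
FROM (
  SELECT p.*,
         setweight(to_tsvector('simple', coalesce(productname, '')),        'A') ||
         setweight(to_tsvector('simple', coalesce(productdescription, '')), 'B') AS doc
  FROM products p
) s
WHERE doc @@ plainto_tsquery('simple', 'red cat')
ORDER BY ts_rank(doc, plainto_tsquery('simple', 'red cat')) DESC
LIMIT 100;

plainto_tsquery ANDs the words, so only rows containing every search word match; an
expression (GIN) index on the same tsvector expression makes the filter fast.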


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to perform full text search

2012-03-15 Thread Andrus
User can enter any number of words as search string.
In shopping cart the following query is used to find products,
eq. if red cat is entered:

select * from products
where  productname ilike '%'||'red cat'||'%' 
  or productdescription ilike '%'||'red cat'||'%' 
limit 100

This does not find products like "red or black cat".
How to change this query so that it returns the 100 best matches for a given
search string ?
I read the documentation about full text search but haven't found a step-by-step
solution for this.

Using PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2 
20061115 (prerelease) (Debian 4.1.1-21)

Should I install some contrib modules on 8.2 or is it better to upgrade the server ?

Where to find step-by-step instructions to make this work ?

Andrus.






[GENERAL] Content management system to build web site with PostgreSql, should it be WordPress

2012-03-03 Thread Andrus
I'm looking for a way to build a web site which uses PostgreSQL to store the web
pages and allows users to modify them.

An admin user should be able to create and change pages using an HTML editor from
the browser.
The site runs on a Debian Squeeze x64 VPS using Apache. There are Mono 2.8 and
PostgreSQL 9.1 applications running on this VPS.

There is no PHP installed, but it can probably be installed if the software
requires it.

The CMS should provide a nice dark theme for the web site so that a web designer is
not required.

The WordPress PostgreSQL plugin page
http://wordpress.org/extend/plugins/postgresql-for-wordpress/
states that this plugin is not compatible with the latest WordPress.
Also this plugin has not been updated for a while.
Joomla! does not run on the Postgres DBMS.
Drupal 7 seems to support it, but compared to WordPress, Drupal usage is smaller.

There are Mono ASP.NET applications running on this site, so using some ASP.NET
CMS seems best.
The latest Orchard does not run on Mono and PostgreSQL.

Which software is best for PostgreSQL ?

Andrus.


[GENERAL] How to recover data from cluster

2012-02-16 Thread Andrus
PostgreSQL 8.4 on Windows crashed. After that Windows disk repair was used
to repair the hard drive.
After that the Data/base directory from the crashed server contains a lot of files;
all files are readable.

PostgreSQL 8.4 was reinstalled on a new server and the data directory was set to the
directory from the crashed server.

pgAdmin shows that there is only the public schema without any tables.
Actually there were two schemas with a lot of tables.

How to fix this so the old data is accessible ?

Andrus.

[GENERAL] How to create crosstab with 3 values in every crosstab column

2012-02-06 Thread Andrus
I'm looking for a way to generate a cross tab with 3 columns for every store, where
the number of stores is not hard coded.
Every store's info should contain 3 columns:

turnover
budget
budget percent (=turnover/budget*100)

Result should look like:

Acc   st1turnover   st1budget st1percent  ...   stNturnover  st1budget 
stNpercent
311   100   20050 200  ...300   67
312   400   50080 600  ...700   86  
  
...

I tried crosstab from tablefunc but it allows only a single value in every
crosstabbed column.

How to show 3 values for every store: sales, budget and percent, in this order ?

Tables are:

create table sales (
  account char(10), 
  store char(10),
  sales  numeric(12,2) );
 
insert into sales values 
('311','ST1',100)... ('311','STN',200)
('312','ST1',400)... ('312','STN',600);

create table budget (
  account char(10), 
  store char(10),
  budget numeric(12,2) );
 
insert into budget values 
('311','ST1',200)... ('311','STN',300)
('312','ST1',500)... ('312','STN',700);
 

Some account and store values may be missing from tables.
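
A minimal sketch for a fixed, known set of stores using conditional aggregation (the
store list still has to be written out or generated by the client; the percent
rounding is my assumption):

SELECT account,
       sum(CASE WHEN store = 'ST1' THEN s.sales  END) AS st1turnover,
       sum(CASE WHEN store = 'ST1' THEN b.budget END) AS st1budget,
       round(100 * sum(CASE WHEN store = 'ST1' THEN s.sales END)
                 / nullif(sum(CASE WHEN store = 'ST1' THEN b.budget END), 0), 0) AS st1percent
       -- ...repeat the three expressions for each further store (STN, ...)...
FROM sales s
FULL JOIN budget b USING (account, store)
GROUP BY account
ORDER BY account;

The FULL JOIN keeps accounts or stores that exist in only one of the two tables.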

Andrus.

Re: [GENERAL] How to create crosstab with 3 values in every crosstab column

2012-02-06 Thread Andrus

Thank you.


first calculate how much columns we should return...No of stores in in your
Sales table (multiply 3 in your case)...
(and based on that - build the grid in UI)
then with knowing that - we select full table  with simple 3 functions:
CalcSales(coount, Store), getBudget(account, Store)
and then build dynamic query with those 3 functions many times as we have
stores as columns...


My goal is to open the result in Excel.
Instead of building a dynamic query, isn't it reasonable to create the csv file
directly from code without using crosstab,
but creating it manually ?
It looks like crosstab does not have any advantages over manual
crosstab creation ?

Another possibility is to merge those 3 columns into a single column and
crosstab it.
After that, create the csv file by splitting the single column into 3 columns.

Can some Postgres array function or something similar be used to split 1 column
into 3 columns in the result without
building the query string manually ?

I posted this also in

http://stackoverflow.com/questions/9162497/how-to-generate-crosstab-with-3-columns-for-every-store-in-postgresql

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create crosstab with 3 values in every crosstab column

2012-02-06 Thread Andrus

Thank you.

the point is - it is not possible to get unknown no of columns in 1 SQL 
query...

i.e.
Account, Store, Amount
100, St1, 1000.00
100, St2, 2000.00
to get:
Acount,St1 ,  St2
100, 1000.00  2000.00
to get that in your query... St1 and St2 - must be hardcoded... (is there 1 
column per Store, or 3 columns per store it is less important...)

if it St1 and St2 are hardcoded in query, even if in table is:
Account, Store, Amount
100, St1, 1000.00
100, St2, 2000.00
100, St3, 3000.00
We would get the same result...actually if we want St3 we need to change 
our query and add St3 in it...

that is the reason why we use Dynamic SQL to build the query...
when you build your Dynamic SQL query... you could use COPY (dynamicQuery) 
TO CSV file...

or instead of to build dynamic query, you can export directly to file...


I didn't know this. This seems a very serious limitation which makes crosstab
useless.

I tried

create temp table sales (
 account char(10),
 store char(10),
 sales  numeric(12,2) ) on commit drop;

insert into sales values
('311','ST1',100), ('311','STN',200),
('312','ST1',400), ('312','STN',600);

select * from
crosstab('select * from sales', 'select distinct store from sales' ) x

and got error

ERROR:  a column definition list is required for functions returning 
record


Can we use something like

select * from
 dynamicwrapper( crosstab('select * from sales', 'select distinct store 
from sales' ))  x


Where to find a generic dynamicwrapper stored procedure which fixes this by
building the dynamic query itself, or is there another idea ?


Andrus.
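
For what it's worth, a minimal sketch showing the column definition list that the
error message asks for (the column names and types here match the sample sales table
and are otherwise my assumptions); the columns are still hard-coded, which is exactly
the limitation discussed above:

select * from
crosstab(
    'select account, store, sales from sales order by 1, 2',
    'select distinct store from sales order by 1'
) as ct(account char(10), st1 numeric(12,2), stn numeric(12,2));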







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-23 Thread Andrus

Would seem to be one of two things:
1) The initdb is being done before the locale is changed.
or
2) The installation is overriding the locale, though I find this one less
possible than 1.


Thank you.
How to re-configure Postresql db cluster so that uses Debian default system 
locale?


Andrus.
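
A minimal sketch of one way to do this with the Debian postgresql-common tools,
assuming the cluster is named 9.1/main and that dropping and re-initdb'ing it (after
a dump!) is acceptable, since the cluster locale is fixed at initdb time:

pg_dropcluster --stop 9.1 main
pg_createcluster --locale et_EE.UTF-8 --start 9.1 main

Existing databases then have to be restored from the dump.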


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-23 Thread Andrus

If you don't want to re-initdb, you could just update the datctype and
datcollate columns of pg_database for template0.


Thank you.
Where to find the SQL update statement which does this ?
Is

update pg_database set datctype ='et_EE.UTF-8',   datcollate ='et_EE.UTF-8'

best for this ?

template0 is read-only; how to connect to it and update it using pgAdmin ?
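
A minimal sketch of what I understand the suggestion to be (run as superuser):
pg_database is a shared catalog, so it can be updated while connected to the
postgres database, with no need to connect to template0 itself; note the WHERE
clause, since the statement above would relabel every database:

UPDATE pg_database
   SET datctype   = 'et_EE.UTF-8',
       datcollate = 'et_EE.UTF-8'
 WHERE datname = 'template0';

This only relabels the locale; it does not re-sort anything, so indexes on text
columns in existing databases are not rebuilt by it.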

Andrus.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-22 Thread Andrus

Adrian and Bèrto,

Thank you very much for the quick and excellent replies. Locale names are
different in every Linux distro.
PostgreSQL does not provide any way to retrieve them (ssh access is required
to retrieve them using locale -a).


Thus the suggestion of using hard-coded locale names is not possible.

How to force the server to use et_EE.UTF-8 as the default locale without hard coding
it into the application ?


How to force command

CREATE DATABASE yourdbname TEMPLATE = template0

to use et_EE.UTF-8  locale by default ?

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-22 Thread Andrus

Is it reasonable to use commands

export LC_COLLATE='et_EE.UTF-8'
export LC_CTYPE='et_EE.UTF-8'
apt-get -t squeeze-backports install postgresql-9.1 postgresql-common 
postgresql-contrib


Will this force et_EE.UTF-8 locale ?

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-22 Thread Andrus
http://www.postgresql.org/docs/8.1/interactive/manage-ag-templatedbs.html 
(see last comment), I haven't checked it myself as I usually have a mix of 
locales in my installs (often even in a single db) and never really used 
any default, but it should still work.


using template1 requires exclusive access to cluster.
I cannot force all users to log out while creating new db.
So using template1 is not possible.

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-22 Thread Andrus

What application?


My application.


Well you would use template0 as the TEMPLATE only if you wanted to CREATE a

database with different collation than that in template1(the default
template for

the CREATE DATABASE command). So the question then is, why is the database
cluster being created with a collation of en_US.UTF-8 when the locale is
supposed to have been set to et_EE.UTF-8?
First are you sure that dpkg-reconfigure locales  is actually resetting the

locale?

dpkg-reconfigure locales
Generating locales (this might take a while)...
 en_US.UTF-8... done
 et_EE.UTF-8... done
Generation complete.
*** update-locale: Warning: LANGUAGE (en_US:en) is not compatible with
LANG (et_EE.UTF-8). Disabling it.


Second when you connect to the cluster with psql what does \l show for
encoding
and collation?


psql postgres
psql (9.1.1)
Type help for help.

postgres= \l
  List of databases
  Name|   Owner| Encoding |   Collate   |Ctype|   Access
privileges
---++--+-+-+---

postgres  | postgres   | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres   | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
+
  ||  | | |
postgres=CTc/postgres
template1 | postgres   | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
+
  ||  | | |
postgres=CTc/postgres

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-22 Thread Andrus

Actually the interesting part would be what  locale


locale
LANG=et_EE.UTF-8
LC_CTYPE=et_EE.UTF-8
LC_NUMERIC=et_EE.UTF-8
LC_TIME=et_EE.UTF-8
LC_COLLATE=et_EE.UTF-8
LC_MONETARY=et_EE.UTF-8
LC_MESSAGES=et_EE.UTF-8
LC_PAPER=et_EE.UTF-8
LC_NAME=et_EE.UTF-8
LC_ADDRESS=et_EE.UTF-8
LC_TELEPHONE=et_EE.UTF-8
LC_MEASUREMENT=et_EE.UTF-8
LC_IDENTIFICATION=et_EE.UTF-8
LC_ALL=



and  locale -a show after  the above:)?


locale -a
C
en_US.utf8
et_EE.utf8
POSIX

Andrus.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to create database with default system locale is set to et_EE.UTF-8

2011-12-21 Thread Andrus
In fresh Debian installation default system locale is set to et_EE.UTF-8 using

dpkg-reconfigure locales 

Postgres is installed using

apt-get update
apt-get -t squeeze-backports install postgresql-9.1 postgresql-common 
postgresql-contrib

Trying to create database with et_EE.UTF-8 collation and character type returns 
error

---
pgAdmin III
---
An error has occurred:

ERROR:  new collation (et_EE.UTF-8) is incompatible with the collation of the 
template database (en_US.UTF-8)
HINT:  Use the same collation as in the template database, or use template0 as 
template.

How to create a new database with et_EE.UTF-8 collation and character type ?
How to force the Postgres installation to create template1 with et_EE.UTF-8
collation and character type ?
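
A minimal sketch following the HINT in the error (mydb is a placeholder name; works
on 8.4+ where per-database LC_COLLATE/LC_CTYPE can be given):

CREATE DATABASE mydb
  TEMPLATE template0
  ENCODING 'UTF8'
  LC_COLLATE 'et_EE.UTF-8'
  LC_CTYPE 'et_EE.UTF-8';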

Andrus.


[GENERAL] How to get normalized data from tekst column

2011-11-24 Thread Andrus
The project table contains salesman names and percents as shown
below. A single comment column contains 1-2 salesman names and commissions.
How to select normalized data from this table ?
Andrus.

CREATE TABLE project (
id char(10) primary key,
comment char(254)
);

insert into project values ('2010-12', 'Aavo 19%, Peedu 15%');
insert into project values ('2010-22', 'Lauri-21%,Peedu 15%');
insert into project values ('2011-33', 'Taavi 21%');

How to create a select statement in PostgreSQL 8.1.23 which
selects this data as a normalized table like

CREATE TABLE commission (
projectid char(10),
salesman  char(5),
commission numeric(2) )

result using data above should be


'2010-12', 'Aavo', 19
'2010-12', 'Peedu', 15
'2010-22', 'Lauri', 21
'2010-22', 'Peedu', 15
'2011-33', 'Taavi', 21

Re: [GENERAL] How to get normalized data from tekst column

2011-11-24 Thread Andrus
David,
Regular Expressions are your friend here.  If you do not know them you should 
learn them; though if you ask nicely someone may just provide you the solution 
you need.
Split-to-array and unnest may work as well.

Thank you very much. I don't know regexps.
Can you provide an example for 8.1, please ? Or maybe CASE WHEN and substring
testing can also be used.

Andrus.

Re: [GENERAL] How to get normalized data from tekst column

2011-11-24 Thread Andrus Moor

Harald,

Thank you.


The query



SELECT id, a[1] AS name, a[2] AS percent
FROM ( SELECT id, regexp_split_to_array(regexp_split_to_table(comment, ', 
*'), '\W+') AS a

 FROM project ) AS dummy



should work un every halfway recent PostgreSQL version - dunno about 8.1.



I tried it but got error in 8.1:

ERROR:  function regexp_split_to_table(text, unknown) does not exist
HINT:  No function matches the given name and argument types. You may need 
to add explicit type casts.


how to get data in 8.1 ?

Andrus.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get normalized data from tekst column

2011-11-24 Thread Andrus

David,

Thank you.

Whether you can write a sufficient function with 8.1 features I do not 
know.
You main issue is you need to be able to output multiple records from a 
single input record and doing so before 8.4 seems problematic since 
functions like substring cannot do that.


The comment field contains 0..2 salesmen, no more:

'Aavo 19%, Peedu 15%'
'Lauri-21%,Peedu 15%'
'Taavi 21%'

Maybe in 8.1 it is possible to write 2 select statements: the first will extract the
first item and the second select
will extract the second item if it exists ?
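
A minimal sketch along those lines for 8.1, using split_part() and POSIX-regex
substring() (the regex patterns are my assumptions about the name/percent format):

SELECT id AS projectid,
       substring(split_part(comment, ',', 1) from '[A-Za-z]+') AS salesman,
       substring(split_part(comment, ',', 1) from '[0-9]+')::int AS commission
FROM project
UNION ALL
SELECT id,
       substring(split_part(comment, ',', 2) from '[A-Za-z]+'),
       substring(split_part(comment, ',', 2) from '[0-9]+')::int
FROM project
WHERE split_part(comment, ',', 2) <> '';

The second branch only returns rows whose comment actually contains a second,
comma-separated salesman.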

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-20 Thread Andrus

Scott,

Thank you.


Just add them to the bottom then run
sudo sysctl -p
to make them take effect.


It seems that if this value is greater than RAM, the Linux kernel silently uses the
max possible value.
These Linux machines are used mostly to run PostgreSQL only, but RAM may vary
depending on the virtual machine configuration at runtime.
Is it reasonable to always use a large value, e.g. 8GB, as SHMMAX in the sysctl.conf
file ?
In that case root and SSH access to the server is not required if the RAM amount
changes. This simplifies server administration. Only postgresql.conf needs to be
changed, which can be done over port 5432 using pgAdmin.



btw, the standard way to control rc stuff is update-rc.d Odd name but
it's pretty easy, just look up the man page.


Debian seems to require update-rc.d and CentOS chkconfig.
How to use a single command for every distro ?

/etc/init.d/postgresql start works in all distros. Adding postgresql to
startup requires different commands in different distros ?!


Andrus 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-19 Thread Andrus

To check whether this is the case, look in the appropriate /etc/rc*.d

directory for the runlevel you're using, and see if there is an S
symlink to /etc/init.d/postgresql-9.1 (or something like that) in there.

Thank you.
There are S18postgresql symlinks in my rc?.d directories.
They seem to be version independent, so hopefully the server is started on boot.

Should shared_buffers and other values be changed from the installation defaults
in the postgresql.conf file to increase performance ?
How to run the EnterpriseDB tuner or another utility to change them automatically ?

The virtual machine seems to have 2 GB of RAM.

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-19 Thread Andrus

Adrian,


For tuning tips a good start is:
http://wiki.postgresql.org/wiki/Performance_Optimization
In particular:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
For books take a look at:
http://www.postgresql.org/docs/books/


Thank you very much. Reading them requires a lot of time. I'm looking for a
quick optimization for 2 GB RAM.

postgresql.conf contains

shared_buffers = 24MB   # min 128kB
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0  # zero disables the feature
#work_mem = 1MB # min 64kB
#maintenance_work_mem = 16MB# min 1MB
#max_stack_depth = 2MB  # min 100kB

Probably only a few settings like shared_buffers need adjusting for 2 GB.

On Windows I use the EnterpriseDB tuning wizard, which does this automatically.
Maybe something similar exists on Debian.
Or is there a quick guide on how to change the most important settings ?
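
As a rough, non-authoritative starting point for a dedicated ~2 GB machine (the
exact numbers are assumptions to be validated against the tuning guide above, and
kernel.shmmax must be large enough for shared_buffers), a postgresql.conf sketch:

shared_buffers = 512MB          # ~25% of RAM
effective_cache_size = 1GB      # planner hint, ~50-75% of RAM
work_mem = 8MB                  # per sort/hash, per connection
maintenance_work_mem = 128MB    # vacuum, create index
checkpoint_segments = 16        # fewer, larger checkpoints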

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-19 Thread Andrus

From here(look familiar):
http://wiki.postgresql.org/wiki/Performance_Optimization
http://linuxfinances.info/info/quickstart.html
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm


Thank you.
I tried

# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400

according to 
http://www.postgresql.org/docs/current/static/kernel-resources.html

this value should be written to /etc/sysctl.conf

I opened /etc/sysctl.conf but it does not contain this value.

How to make this setting persistent ?

Andrus. 


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to install latest stable postgresql on Debian

2011-11-18 Thread Andrus
I tried to install latest PostgreSql on plain Debian using instructions from

http://backports-master.debian.org/Instructions/

I added line described there to sources and tried

root@EEPOLDB01:~# apt-get -t squeeze-backports install postgresql-9.1 
Reading package lists... Done Building dependency tree Reading state 
information... Done
E: Unable to locate package postgresql-9.1
E: Couldn't find any package by regex 'postgresql-9.1'

How to install it ?

Link 

http://packages.debian.org/squeeze-backports/postgresql-9.1

shows that this package exists.

There is also postgresql 8.4 installed.
How to uninstall postgresql 8.4 before installing 9.1 ?

Andrus.

Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-18 Thread Andrus

Ray,

thank you.


Did you do apt-get update after adding the line to the source list?


Yes, I tried. I tried it again and it looks like the instructions provided on the
Debian site are invalid.

How to fix it ?


Andrus.

root@EEPOLDB01:~# cat /etc/apt/sources.list
deb http://www.backports.debian.org/debian-backports squeeze-backports main
# deb cdrom:[Debian GNU/Linux 6.0.1a _Squeeze_ - Official i386 NETINST 
Binary-1 20110320-15:03]/ squeeze main
#deb cdrom:[Debian GNU/Linux 6.0.1a _Squeeze_ - Official i386 NETINST 
Binary-1 20110320-15:03]/ squeeze main

deb http://ftp.ee.debian.org/debian/ squeeze main
deb-src http://ftp.ee.debian.org/debian/ squeeze main
deb http://security.debian.org/ squeeze/updates main
deb-src http://security.debian.org/ squeeze/updates main
# squeeze-updates, previously known as 'volatile'
deb http://ftp.ee.debian.org/debian/ squeeze-updates main
deb-src http://ftp.ee.debian.org/debian/ squeeze-updates main

root@EEPOLDB01:~# apt-get update
Hit http://security.debian.org squeeze/updates Release.gpg
Ign http://security.debian.org/ squeeze/updates/main Translation-aa
Ign http://security.debian.org/ squeeze/updates/main Translation-en
Hit http://security.debian.org squeeze/updates Release
Hit http://security.debian.org squeeze/updates/main Sources
Hit http://security.debian.org squeeze/updates/main i386 Packages
Err http://www.backports.debian.org squeeze-backports Release.gpg
 Could not resolve 'www.backports.debian.org'
Err http://www.backports.debian.org/debian-backports/ squeeze-backports/main 
Translation-aa

 Could not resolve 'www.backports.debian.org'
Err http://www.backports.debian.org/debian-backports/ squeeze-backports/main 
Translation-en

 Could not resolve 'www.backports.debian.org'
Hit http://ftp.ee.debian.org squeeze Release.gpg
Ign http://ftp.ee.debian.org/debian/ squeeze/main Translation-aa
Ign http://ftp.ee.debian.org/debian/ squeeze/main Translation-en
Get:1 http://ftp.ee.debian.org squeeze-updates Release.gpg [836 B]
Ign http://ftp.ee.debian.org/debian/ squeeze-updates/main Translation-aa
Ign http://ftp.ee.debian.org/debian/ squeeze-updates/main Translation-en
Hit http://ftp.ee.debian.org squeeze Release
Get:2 http://ftp.ee.debian.org squeeze-updates Release [113 kB]
Hit http://ftp.ee.debian.org squeeze/main Sources
Hit http://ftp.ee.debian.org squeeze/main i386 Packages
Hit http://ftp.ee.debian.org squeeze-updates/main Sources/DiffIndex
Hit http://ftp.ee.debian.org squeeze-updates/main i386 Packages/DiffIndex
Get:3 http://ftp.ee.debian.org squeeze-updates/main i386 Packages [4,108 B]
Fetched 118 kB in 3s (33.5 kB/s)
Reading package lists... Done
W: Failed to fetch 
http://www.backports.debian.org/debian-backports/dists/squeeze-backports/Release.gpg 
Could not resolve 'www.backports.debian.org'


W: Failed to fetch 
http://www.backports.debian.org/debian-backports/dists/squeeze-backports/main/i18n/Translation-aa.gz 
Could not resolve 'www.backports.debian.org'


W: Failed to fetch 
http://www.backports.debian.org/debian-backports/dists/squeeze-backports/main/i18n/Translation-en.gz 
Could not resolve 'www.backports.debian.org'


W: Some index files failed to download, they have been ignored, or old ones 
used instead.
root@EEPOLDB01:~# 
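A hedged guess at the cause: the failures above are plain DNS errors for www.backports.debian.org, so the host in the first sources.list line may simply be wrong. The instructions normally use the host without the www prefix:

deb http://backports.debian.org/debian-backports squeeze-backports main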



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-18 Thread Andrus

Ray,


Did you do apt-get update after adding the line to the source list?


I fixed this but now another issue arises.
Installation fails with the error below.
How to fix this ?

root@EEPOLDB01:~# apt-get install postgresql-9.1
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
postgresql-9.1 : Depends: libpq5 (= 9.1~beta1) but 8.4.9-0squeeze1 is to be installed
                 Depends: postgresql-client-9.1 but it is not going to be installed
                 Depends: postgresql-common (= 115~) but 113 is to be installed

E: Broken packages
root@EEPOLDB01:~#
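A hedged guess: because the 9.1 packages live in squeeze-backports, apt also has to be told to take the dependencies (libpq5, postgresql-common, postgresql-client-9.1) from backports, e.g.

apt-get -t squeeze-backports install postgresql-9.1

which is the -t switch this thread ends up using.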

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-18 Thread Andrus

Adrian,

thank you. I'm sorry for the stupid mistake.
I uninstalled 8.4. Trying to install 9.1 now returns

root@EEPOLDB01:~# apt-get install postgresql-9.1
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
postgresql-9.1 : Depends: libpq5 (= 9.1~beta1) but 8.4.9-0squeeze1 is to be installed
                 Depends: postgresql-client-9.1 but it is not going to be installed
                 Depends: postgresql-common (= 115~) but 113 is to be installed

E: Broken packages

How to fix this ?
This is a fresh OS install; only the PostgreSQL server will run in this virtual 
machine.


Andrus.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-18 Thread Andrus

How did you uninstall 8.4?
From below it would seem it is still around.


Thank you.
After adding the -t switch to apt-get I was able to install 9.1.

To start it I invoked /etc/init.d/postgresql manually.

How to force it to start automatically after the server is rebooted ?
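A minimal sketch, assuming the standard Debian init script installed by the package: register it for the default runlevels with

update-rc.d postgresql defaults

so /etc/init.d/postgresql is started automatically at boot.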

free -g returns

                   total   used   free   shared   buffers   cached
Mem:                   2      0      2        0         0        0
-/+ buffers/cache:             0      2
Swap:                  3      0      3

Which parameters in postgresql.conf need to be changed to make it work as fast 
as possible ? This is a dedicated server.


Andrus.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to force some char type columns to be stored in uppercase

2011-11-06 Thread Andrus

In 8.1+, many char(n) type columns must be stored in uppercase in the database.
What is the best way to force this ?

Should insert and update triggers be created for every table, or is there a better
way, for example by creating a domain, or by adding some attribute to the column and
creating a global trigger?
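A hedged sketch of both options, using the products table from another thread purely as an example. A per-table BEFORE trigger can convert values, while a domain with a CHECK constraint can only reject lowercase input, not convert it:

CREATE OR REPLACE FUNCTION products_upper_trig() RETURNS trigger AS $$
BEGIN
    -- force the columns that must be stored in uppercase
    NEW.suppliercode := upper(NEW.suppliercode);
    NEW.vendorid     := upper(NEW.vendorid);
    RETURN NEW;
END$$ LANGUAGE plpgsql;

CREATE TRIGGER products_upper
    BEFORE INSERT OR UPDATE ON products
    FOR EACH ROW EXECUTE PROCEDURE products_upper_trig();

-- alternative: a domain that rejects (but does not convert) non-uppercase values
CREATE DOMAIN upper_char20 AS char(20) CHECK (VALUE = upper(VALUE));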

Andrus 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to find owning schema in function

2011-11-05 Thread Andrus
An 8.1+ database contains a separate schema for every company, named company1, 
company2, ..., companyN.


Order tables in those schemas contain a trigger function like this one for company1:

CREATE OR REPLACE FUNCTION dok_seq_trig() RETURNS trigger
AS $$BEGIN
IF NEW.tasudok IS NULL AND NEW.doktyyp!='O'  THEN
NEW.tasudok = nextval( 'company1.'|| TG_RELNAME || '_'|| NEW.doktyyp 
||'_seq');

END IF;

IF NEW.arvenumber IS NULL AND NEW.doktyyp='O'  THEN
NEW.arvenumber = nextval( 'company1.'|| TG_RELNAME || '_'|| NEW.doktyyp 
||'_seq');

END IF;

RETURN NEW;
END$$  LANGUAGE plpgsql STRICT;

This code has the hard-coded schema name 'company1'. If a new company schema is 
created from an existing one, the trigger functions need a manual update to 
change the schema name.


How to change this code so that instead of the hard-coded schema name it 
automatically uses the schema where the trigger function is defined ?
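A minimal sketch, assuming 8.2 or later, where plpgsql provides TG_TABLE_SCHEMA (the schema of the table that fired the trigger, which in this per-company layout is the company schema), so the literal 'company1.' prefix can be dropped:

CREATE OR REPLACE FUNCTION dok_seq_trig() RETURNS trigger
AS $$BEGIN
IF NEW.tasudok IS NULL AND NEW.doktyyp!='O'  THEN
-- TG_TABLE_SCHEMA is the schema of the table the trigger fired on
NEW.tasudok = nextval( TG_TABLE_SCHEMA || '.' || TG_RELNAME || '_'|| NEW.doktyyp ||'_seq');
END IF;

IF NEW.arvenumber IS NULL AND NEW.doktyyp='O'  THEN
NEW.arvenumber = nextval( TG_TABLE_SCHEMA || '.' || TG_RELNAME || '_'|| NEW.doktyyp ||'_seq');
END IF;

RETURN NEW;
END$$  LANGUAGE plpgsql STRICT;

The same function body can then be reused in every company schema without editing.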


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to add xml data to table

2011-10-09 Thread Andrus
There is the dblink contrib module, which reads data from another PostgreSQL 
database over a TCP/IP connection.

Maybe there is something similar for HTTP/ASMX data retrieval ?

About the periodic call of a stored procedure: is there a cron contrib for 
PostgreSQL ? Or can we force some code to run on autovacuum or after
every server request, like the poor man's crontab in Drupal/PHP? This code could 
check and call the refresh every hour.


Can we add some trigger code to a frequently used table or view to 
implement a poor man's crontab ?


Can some server-side language be used for those ?
Is server-side C#/.NET (Mono) already implemented ?
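On the periodic call: core PostgreSQL has no built-in scheduler, so the usual approach is an ordinary cron entry on the server that calls psql; the database and function names below are only hypothetical placeholders. In the postgres user's crontab (crontab -e):

0 * * * * psql -d mydb -c "SELECT refresh_products()"

runs the refresh once an hour.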

Andrus.



-Original Message- 
From: Francisco Figueiredo Jr.

Sent: Saturday, October 08, 2011 11:26 PM
To: Andrus
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to add xml data to table

I think this approach is much better as you can solve everything on
the server itself.

About your question on http request I don't know.
Sorry for that. :(
Maybe there is a module for Postgresql which can enable you to make http 
calls?




On Sat, Oct 8, 2011 at 17:15, Andrus kobrule...@hot.ee wrote:

Thank you.
I also got the following code:

First import the XML into a staging table:

CREATE TABLE xml_import
(
 xml_data  xml
)

with product_list as (
 select
unnest(xpath('/soap12:Envelope/soap12:Body/pl:GetProductListResponse/pl:GetProductListResult/pl:ProductList/pl:Product',
xml_data,
ARRAY[ array['xsd', 'http://www.w3.org/2001/XMLSchema-instance'],
   array['soap12', 'http://www.w3.org/2003/05/soapenvelope'],
   array['pl', 'http://xxx.yy.zz/']])) as product
 from xml_import
)
select (xpath('/Product/SupplierCode/text()', product)::varchar[])[1] as
suppliercode,
 (xpath('/Product/SegmentId/text()', product)::varchar[])[1] as
segmentid,
 (xpath('/Product/PartNumber/text()', product)::varchar[])[1] as
partnumber,
 to_number((xpath('/Product/Price/text()', product)::varchar[])[1],
'9.9') as price,
 to_number((xpath('/Product/GrossWeight/text()',
product)::varchar[])[1], '.') as weight
from product_list

Looks simpler than using XmlReader, doesn't it?
How to invoke an asmx web service call (= HTTP POST request) from the
PostgreSQL server, which reads the HTTP response into the xml_import table?

How to call a stored procedure periodically, every hour, on the server?

In this case we can create a stored procedure; client-side code is not
necessary at all.

Andrus.

-Original Message- 
Sent: Saturday, October 08, 2011 9:38 PM
To: Andrus Moor
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to add xml data to table

I think your best bet would be to go with XmlReader as it provides a
fast read only parsing of the document.

From MS doc about linq to xml:
http://msdn.microsoft.com/en-us/library/bb387048.aspx

XmlReader is a fast, forward-only, non-caching parser.

LINQ to XML is implemented on top of XmlReader, and they are tightly
integrated. However, you can also use XmlReader by itself.

For example, suppose you are building a Web service that will parse
hundreds of XML documents per second, and the documents have the same
structure, meaning that you only have to write one implementation of
the code to parse the XML. In this case, you would probably want to
use XmlReader by itself.

In contrast, if you are building a system that parses many smaller XML
documents, and each one is different, you would want to take advantage
of the productivity improvements that LINQ to XML provides.


I think your case fits the first example.

This way you could use xmlreader to extract the values and then fill
NpgsqlParameter values and execute the insert command.

I hope it helps.



2011/10/7 Andrus Moor eetas...@online.ee:


The SOAP response below contains a table of products, approx. 5000 rows.
The products table (below) has nearly the same structure as the XML data.

The products table needs to be updated from the XML data every hour.

How to add this XML data to the products table?

Should I use the xpath() function, or are there any other ideas?
Using npgsql and C# in ASP .NET / Mono.

Andrus.


CREATE TABLE products (
SupplierCode char(20) primary key,
SegmentId char(8),
GroupId char(8),
ClassId char(8),
SeriesId char(8),
VendorId char(2),
PartNumbrt char(27),
Name Text,
Warranty Numeric(6,2),
Price Numeric(10,4),
Quantity Numeric(8,2)
)

Data which is required to add looks like:

<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap12="http://www.w3.org/2003/05/soapenvelope">
<soap12:Body>
<GetProductListResponse xmlns="http://xxx.yy.zz/">
<GetProductListResult>
<ProductList>
<Product>
<SupplierCode>001982</SupplierCode>
<SegmentId>6500</SegmentId>
<GroupId>6501</GroupId>
<ClassId>65010200</ClassId>
<SeriesId>10001125</SeriesId>
<VendorId>AM</VendorId>
<PartNumber>ADA3000BIBOX</PartNumber>
<Name>AMD Athlon64 3000+ (1800MHz/L2 Cache 512KB) Socket 939, BOX</Name>
<Warranty>36</Warranty>
<Price>196.0</Price>

Re: [GENERAL] How to add xml data to table

2011-10-09 Thread Andrus

Thank you.
I also got the following code:

First import the XML into a staging table:

CREATE TABLE xml_import
(
  xml_data  xml
)

with product_list as (
 select
unnest(xpath('/soap12:Envelope/soap12:Body/pl:GetProductListResponse/pl:GetProductListResult/pl:ProductList/pl:Product',
xml_data,
 ARRAY[ array['xsd', 'http://www.w3.org/2001/XMLSchema-instance'],
array['soap12', 'http://www.w3.org/2003/05/soapenvelope'],
array['pl', 'http://xxx.yy.zz/']])) as product
 from xml_import
)
select (xpath('/Product/SupplierCode/text()', product)::varchar[])[1] as
suppliercode,
  (xpath('/Product/SegmentId/text()', product)::varchar[])[1] as
segmentid,
  (xpath('/Product/PartNumber/text()', product)::varchar[])[1] as
partnumber,
  to_number((xpath('/Product/Price/text()', product)::varchar[])[1],
'9.9') as price,
  to_number((xpath('/Product/GrossWeight/text()',
product)::varchar[])[1], '.') as weight
from product_list

Looks simpler than using XmlReader, doesn't it?
How to invoke an asmx web service call (= HTTP POST request) from the
PostgreSQL server, which reads the HTTP response into the xml_import table?

How to call a stored procedure periodically, every hour, on the server?

In this case we can create a stored procedure; client-side code is not 
necessary at all.
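A hedged sketch of the remaining step, reusing the extraction expressions from the query above and the products table from the original post (the to_number format mask is only illustrative); doing the whole refresh in one transaction keeps readers from ever seeing an empty table:

BEGIN;
TRUNCATE products;
INSERT INTO products (suppliercode, segmentid, name, price)
WITH product_list AS (
  SELECT unnest(xpath('/soap12:Envelope/soap12:Body/pl:GetProductListResponse/pl:GetProductListResult/pl:ProductList/pl:Product',
                      xml_data,
                      ARRAY[ array['soap12', 'http://www.w3.org/2003/05/soapenvelope'],
                             array['pl', 'http://xxx.yy.zz/'] ])) AS product
  FROM xml_import
)
SELECT (xpath('/Product/SupplierCode/text()', product)::varchar[])[1],
       (xpath('/Product/SegmentId/text()', product)::varchar[])[1],
       (xpath('/Product/Name/text()', product)::varchar[])[1],
       to_number((xpath('/Product/Price/text()', product)::varchar[])[1], '99999999.9999')
FROM product_list;
COMMIT;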

Andrus.

-Original Message- 
From: Francisco Figueiredo Jr.

Sent: Saturday, October 08, 2011 9:38 PM
To: Andrus Moor
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to add xml data to table

I think your best bet would be to go with XmlReader as it provides a
fast read only parsing of the document.


From MS doc about linq to xml:

http://msdn.microsoft.com/en-us/library/bb387048.aspx

XmlReader is a fast, forward-only, non-caching parser.

LINQ to XML is implemented on top of XmlReader, and they are tightly
integrated. However, you can also use XmlReader by itself.

For example, suppose you are building a Web service that will parse
hundreds of XML documents per second, and the documents have the same
structure, meaning that you only have to write one implementation of
the code to parse the XML. In this case, you would probably want to
use XmlReader by itself.

In contrast, if you are building a system that parses many smaller XML
documents, and each one is different, you would want to take advantage
of the productivity improvements that LINQ to XML provides.


I think your case fits the first example.

This way you could use xmlreader to extract the values and then fill
NpgsqlParameter values and execute the insert command.

I hope it helps.



2011/10/7 Andrus Moor eetas...@online.ee:

The SOAP response below contains a table of products, approx. 5000 rows.
The products table (below) has nearly the same structure as the XML data.

The products table needs to be updated from the XML data every hour.

How to add this XML data to the products table?

Should I use the xpath() function, or are there any other ideas?
Using npgsql and C# in ASP .NET / Mono.

Andrus.


CREATE TABLE products (
SupplierCode char(20) primary key,
SegmentId char(8),
GroupId char(8),
ClassId char(8),
SeriesId char(8),
VendorId char(2),
PartNumbrt char(27),
Name Text,
Warranty Numeric(6,2),
Price Numeric(10,4),
Quantity Numeric(8,2)
)

Data which is required to add looks like:

<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap12="http://www.w3.org/2003/05/soapenvelope">
<soap12:Body>
<GetProductListResponse xmlns="http://xxx.yy.zz/">
<GetProductListResult>
<ProductList>
<Product>
<SupplierCode>001982</SupplierCode>
<SegmentId>6500</SegmentId>
<GroupId>6501</GroupId>
<ClassId>65010200</ClassId>
<SeriesId>10001125</SeriesId>
<VendorId>AM</VendorId>
<PartNumber>ADA3000BIBOX</PartNumber>
<Name>AMD Athlon64 3000+ (1800MHz/L2 Cache 512KB) Socket 939, BOX</Name>
<Warranty>36</Warranty>
<Price>196.0</Price>
<Quantity>0</Quantity>
<DateExpected>1999-01-01T00:00:00</DateExpected>
<IsNewProduct>true</IsNewProduct>
</Product>
<Product>
<SupplierCode>001512</SupplierCode>
<SegmentId>6500</SegmentId>
<GroupId>6501</GroupId>
<ClassId>65010200</ClassId>
<SeriesId>10001125</SeriesId>
<VendorId>AM</VendorId>
<PartNumber>ADA3000AXBOX</PartNumber>
<Name>AMD Athlon64 3000+ (2000MHz/1600MHz/L2 Cache 512KB) Socket 754, BOX</Name>
<Warranty>36</Warranty>
<Price>296.0</Price>
<Quantity>0</Quantity>
<GrossWeight>3.6000</GrossWeight>
<DateExpected>1999-01-01T00:00:00</DateExpected>
<IsNewProduct>false</IsNewProduct>
</Product>
</ProductList>
</GetProductListResult>
</GetProductListResponse>
</soap12:Body>
</soap12:Envelope>




--
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://gplus.to/franciscojunior
http://fxjr.blogspot.com
http://twitter.com/franciscojunior 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to add xml data to table

2011-10-07 Thread Andrus Moor
The SOAP response below contains a table of products, approx. 5000 rows.
The products table (below) has nearly the same structure as the XML data.

The products table needs to be updated from the XML data every hour.

How to add this XML data to the products table?

Should I use the xpath() function, or are there any other ideas?
Using npgsql and C# in ASP .NET / Mono.

Andrus.


CREATE TABLE products (
SupplierCode char(20) primary key,
SegmentId char(8),
GroupId char(8),
ClassId char(8),
SeriesId char(8),
VendorId char(2),
PartNumbrt char(27),
Name Text,
Warranty Numeric(6,2),
Price Numeric(10,4),
Quantity Numeric(8,2)
)

Data which is required to add looks like:

<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap12="http://www.w3.org/2003/05/soapenvelope">
<soap12:Body>
<GetProductListResponse xmlns="http://xxx.yy.zz/">
<GetProductListResult>
<ProductList>
<Product>
<SupplierCode>001982</SupplierCode>
<SegmentId>6500</SegmentId>
<GroupId>6501</GroupId>
<ClassId>65010200</ClassId>
<SeriesId>10001125</SeriesId>
<VendorId>AM</VendorId>
<PartNumber>ADA3000BIBOX</PartNumber>
<Name>AMD Athlon64 3000+ (1800MHz/L2 Cache 512KB) Socket 939, BOX</Name>
<Warranty>36</Warranty>
<Price>196.0</Price>
<Quantity>0</Quantity>
<DateExpected>1999-01-01T00:00:00</DateExpected>
<IsNewProduct>true</IsNewProduct>
</Product>
<Product>
<SupplierCode>001512</SupplierCode>
<SegmentId>6500</SegmentId>
<GroupId>6501</GroupId>
<ClassId>65010200</ClassId>
<SeriesId>10001125</SeriesId>
<VendorId>AM</VendorId>
<PartNumber>ADA3000AXBOX</PartNumber>
<Name>AMD Athlon64 3000+ (2000MHz/1600MHz/L2 Cache 512KB) Socket 754, BOX</Name>
<Warranty>36</Warranty>
<Price>296.0</Price>
<Quantity>0</Quantity>
<GrossWeight>3.6000</GrossWeight>
<DateExpected>1999-01-01T00:00:00</DateExpected>
<IsNewProduct>false</IsNewProduct>
</Product>
</ProductList>
</GetProductListResult>
</GetProductListResponse>
</soap12:Body>
</soap12:Envelope>
