Re: [GENERAL] Corrupted Database

2016-06-27 Thread Sterpu Victor

You probably have an HDD problem.
Try "cat /proc/mounts" and see if the partition is mounted read-only.
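A quick way to check is to filter /proc/mounts for read-only mounts; this is a sketch, and the /var/lib/pgsql path in the comments is an assumption (substitute your cluster's data directory):

```shell
# List mounts the kernel has flagged read-only; field 4 of /proc/mounts
# holds the mount options, field 2 the mount point.
awk '$4 ~ /(^|,)ro(,|$)/ {print $2}' /proc/mounts
# If the filesystem holding your data directory (e.g. /var/lib/pgsql)
# is listed, check dmesg for I/O errors before remounting read-write:
#   mount -o remount,rw /var/lib/pgsql
```

If the kernel remounted the partition read-only after an I/O error, fix the underlying disk problem first; remounting read-write over a failing disk risks further corruption.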

-- Original Message --
From: "Pau Marc Muñoz Torres" 
To: "pgsql general" 
Sent: 27/6/2016 4:28:35 PM
Subject: [GENERAL] Corrupted Database


Hello everybody,

 My database is corrupted; each time I try to list all the tables in 
the database I get the following message:


hla2db=> \d
ERROR:  could not open relation 1663/16385/1255: Read-only file system

I read that my best option is to use my backup. Before using my backup 
I would like to remove all databases in PostgreSQL and create 
everything again from scratch. Unfortunately I can't see the list of 
databases, although I am able to create a database.




 Can you suggest what to do? Is it a good idea to reinstall PostgreSQL?


Pau Marc Muñoz Torres
skype: pau_marc
http://www.linkedin.com/in/paumarc
http://www.researchgate.net/profile/Pau_Marc_Torres3/info/


Re: [GENERAL] Slow query when the select list is big

2016-05-09 Thread Sterpu Victor
It works fine now: on my test server execution time went down from 6.4 
seconds to 1.4 seconds, and on the production server from 3.2 seconds 
to 600 ms.
To optimize the query I changed the order of some joins (the joins that 
were used to limit rows are at the beginning of the query).

I tried some of these parameters; I will try them all tomorrow.

-- Original Message --
From: "Karl Czajkowski" <kar...@isi.edu>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 9/5/2016 8:47:12 PM
Subject: Re: Slow query when the select list is big


On May 09, Sterpu Victor modulated:

 I tested it now: EXPLAIN ANALYZE is not showing the LEFT JOIN at all if
 I don't select from the joined tables.
 Now it is clear why the query is so much more efficient when I select
 less data.

 Thank you



With so many joins, you may want to experiment with postgresql
parameter tuning. These parameters in particular can have a
significant impact on the plan choice and execution time:

   work_mem
   effective_cache_size

   from_collapse_limit
   join_collapse_limit

   geqo_threshold
   geqo_effort

Setting these to appropriately large values can make analytic queries
run much faster. Of course, setting them too high can also produce
very bad plans that cause the DB server to oversubscribe its memory
and start swapping... it takes a bit of reading and a bit of
experimentation to find ideal settings for your environment.
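As a sketch of that experimentation, all of these can be changed per session before touching postgresql.conf; the values below are illustrative assumptions, not recommendations:

```sql
-- Session-level experiment with the settings listed above
-- (placeholder values; compare EXPLAIN ANALYZE runs before and after).
SET work_mem = '64MB';               -- memory per sort/hash operation
SET effective_cache_size = '8GB';    -- planner's estimate of the OS file cache
SET from_collapse_limit = 20;        -- flatten larger FROM lists
SET join_collapse_limit = 20;        -- let the planner reorder up to 20 explicit JOINs
SET geqo_threshold = 20;             -- keep exhaustive join search longer
-- re-run the slow query here with EXPLAIN ANALYZE and compare plans
RESET ALL;                           -- return to server defaults
```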


Karl





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow query when the select list is big

2016-05-09 Thread Sterpu Victor
I tested it now: EXPLAIN ANALYZE is not showing the LEFT JOIN at all if I 
don't select from the joined tables.
Now it is clear why the query is so much more efficient when I select less 
data.


Thank you

-- Original Message --
From: "David Rowley" <david.row...@2ndquadrant.com>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "Rob Imig" <rimi...@gmail.com>; "PostgreSQL General" 
<pgsql-general@postgresql.org>; "David G. Johnston" 
<david.g.johns...@gmail.com>

Sent: 9/5/2016 10:04:54 AM
Subject: Re: [GENERAL] Slow query when the select list is big

On 9 May 2016 at 18:46, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

 On Sunday, May 8, 2016, Sterpu Victor <vic...@caido.ro> wrote:


 Yes but it is very big.
 I don't understand why the select list is influencing the CPU usage.
 I was expecting that only the join and where clauses would influence 
CPU.




 PostgreSQL is smart enough to optimize away stuff that it knows 
doesn't

 impact the final query result.


To be more accurate about what David is saying, PostgreSQL will remove
unused LEFT JOINed relations where the left joined relation can be
proved not to duplicate rows from the right hand side. It would just
be a matter of comparing the EXPLAIN of the query with all the
SELECT items to the one with the single SELECT item to prove that this
is what's happening.

Please also note that this only occurs with LEFT JOINs.
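A minimal illustration of this join-removal behaviour, on a hypothetical two-table schema (the primary key on "b" supplies the uniqueness proof the planner needs):

```sql
CREATE TABLE a (id int PRIMARY KEY, x int);
CREATE TABLE b (id int PRIMARY KEY, y int);

-- Nothing is selected from "b" and b.id is unique, so the planner
-- removes the join entirely: the plan is a plain scan of "a".
EXPLAIN SELECT a.x FROM a LEFT JOIN b ON b.id = a.id;

-- Selecting b.y forces the join back into the plan.
EXPLAIN SELECT a.x, b.y FROM a LEFT JOIN b ON b.id = a.id;
```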

It would also be quite helpful for people if you were to include a
copy of the query. It's impossible to reverse engineer what it is
from this EXPLAIN output. I see that you're using a window function
and performing a LIMIT 1; there may be ways to improve that just by
selecting the single highest j1031101.validfrom row and performing the
joins to the other tables on that single row, but that will depend on
which window function you're using, as the function may require the
other rows in the window frame to calculate the correct result.

--
 David Rowley http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services






Re: [GENERAL] Slow query when the select list is big

2016-05-09 Thread Sterpu Victor
Actually the optimization should be caused by the filters on the joins 
that have been moved to the beginning of the query.

So now Postgres applies a filter before joining a lot of data.
The fact that these joins produce multiple rows is not relevant.
At least this is what I think is happening.

-- Original Message --
From: "Sterpu Victor" <vic...@caido.ro>
To: "Sterpu Victor" <vic...@caido.ro>; "David Rowley" 
<david.row...@2ndquadrant.com>
Cc: "Rob Imig" <rimi...@gmail.com>; "PostgreSQL General" 
<pgsql-general@postgresql.org>; "David G. Johnston" 
<david.g.johns...@gmail.com>

Sent: 9/5/2016 11:01:56 AM
Subject: Re[2]: [GENERAL] Slow query when the select list is big

I got down to 2.4 seconds by first joining the tables that produce many 
rows.


SELECT
row_number() OVER (ORDER BY J1031101.validfrom DESC) AS nrcrt ,
J1033386.name AS sectie_internare ,
J1033387.name AS sectie_externare ,
TO_CHAR(J1031101.validfrom , 'YYYY-MM-DD HH24:MI') AS validfrom ,
TO_CHAR(J1033359.validto , 'YYYY-MM-DD HH24:MI') AS validto ,
CASE WHEN(CHAR_LENGTH(J1031074.signature) > 10) THEN(1) ELSE(0) END AS 
semnat_internare ,
CASE WHEN(CHAR_LENGTH(J1031074.signature_externare) > 10) THEN(1) 
ELSE(0) END AS semnat_externare ,

J1031076.name AS sex ,
J1031074.id AS id ,
J1031074.siui_appid AS siui_appid ,
J1031074.data_adeverinta AS data_adeverinta ,
J1031074.is_paliativ AS text_paliativ ,
J1031074.cardno AS cardno ,
J1031074.cardno_externare AS cardno_externare ,
J1031074.sign_date AS sign_date ,
J1031074.sign_date_externare AS sign_date_externare ,
J1031074.unsigned_string AS unsigned_string ,
J1031074.unsigned_string_externare AS unsigned_string_externare ,
J1031074.signhash AS signhash ,
J1031074.signhash_externare AS signhash_externare ,
J1031074.signature AS signature ,
J1031074.signature_externare AS signature_externare ,
J1031074.send_xml AS send_xml ,
J1031074.send_xml_externare AS send_xml_externare ,
J1031074.received_xml AS received_xml ,
J1031074.received_xml_externare AS received_xml_externare ,
J1031074.error AS error ,
J1031074.error_externare AS error_externare ,
J1031074.validat AS validat ,
J1031074.validat_externare AS validat_externare ,
J1031074.online AS online ,
J1031074.online_externare AS online_externare ,
J1031074.serie_bilet_internare AS text_serie_bilet_internare ,
J1031074.nr_bilet_internare AS text_numar_bilet_internare ,
J1031074.idpatient AS _popup_cnp_pacient ,
J1031075.cnp AS popup_cnp_pacient ,
J1031075.name AS text_nume_pacient ,
J1031075.surname AS text_prenume_pacient ,
J1031074.nrfo AS text_numar_fosz ,
J1031074.greutate_nastere AS text_greutate_nastere ,
J1031078.value AS popup_tip_asigurare ,
J1031074.idensuredstatustype AS _popup_tip_asigurare ,
J1031079.value AS popup_statut_asigurat ,
J1031074.idensuredstatus AS _popup_statut_asigurat ,
J1031080.code AS popup_cas_asigurat ,
J1031074.id_org_unit AS _popup_cas_asigurat ,
J1031081.code AS popup_categorie_asigurare ,
J1031074.id_categorie_asigurat AS _popup_categorie_asigurare ,
J1031082.name AS popup_tip_internare ,
J1031074.id_focg_tip_internare AS _popup_tip_internare ,
J1031083.name AS popup_criteriu_internare ,
J1031074.id_focg_criteriu_internare AS _popup_criteriu_internare ,
J1031084.stencil_no AS popup_medic_curant ,
J1031084.id AS _popup_medic_curant ,
J1031089.value AS popup_nivel_instruire ,
J1031074.id_education_level AS _popup_nivel_instruire ,
J1031074.greutate AS text_greutate_internare ,
J1031090.nume AS popup_situatii_speciale ,
J1031074.id_focg_situatii_speciale AS _popup_situatii_speciale ,
J1031091.nume AS popup_internat_prin ,
J1031091.id AS _popup_internat_prin ,
J1031092.nume AS popup_formulare_europene ,
J1031074.id_formulare_europene AS _popup_formulare_europene ,
J1031074.id_cnp_mama AS _popup_cnp_mama ,
J1031094.cnp AS popup_cnp_mama ,
J1031093.nrfo AS popup_fo_mama ,
J1031074.id_focg AS _popup_fo_mama ,
J1031074.nr_card_euro AS text_nr_card_european ,
J1031074.nr_pasaport AS text_nr_pasaport ,
J1031074.nr_card_national AS text_nr_card_national ,
J1031088.id AS _popup_ocupatia ,
J1031088.name AS popup_ocupatia ,
J1031074.export_drg AS export_drg ,
J1031074.drgcaseid AS drgcaseid ,
J1031074.export_ecosoft AS export_ecosoft ,
J1031074.mesaj_drg AS mesaj_drg ,
J1031074.uid AS uid ,
J1031074.mesaj_ecosoft AS mesaj_ecosoft ,
J1031074.id_address_domiciliu AS text_id_address_domiciliu ,
J1031074.id_address_domiciliu AS _text_id_address_domiciliu ,
J1031074.id_address_resedinta AS _text_id_address_resedinta ,
J1031074.id_address_resedinta AS text_id_address_resedinta ,
'0' AS marcator ,
J1031095.id AS _popup_sursa_internare ,
J1031095.denumire AS popup_sursa_internare ,
J1031096.id AS _popup_diseasecategory ,
J1031096.code AS popup_diseasecategory ,
J1031097.id AS _popup_diagnostic_internare_icd10 ,
J1031097.name AS popup_diagnostic_internare_icd10 ,
J1031098.id AS _popup_mod_contract ,
J1031098.description AS

Re: [GENERAL] Slow query when the select list is big

2016-05-09 Thread Sterpu Victor
 = 
J1031074.id_focg_contract_modes)
   LEFT JOIN focg_criterii_urgenta AS J1031099 ON ( J1031099.id = 
J1031074.id_focg_criterii_urgenta)
   LEFT JOIN exceptie_bilet_internare AS J1031100 ON ( J1031100.id = 
J1031074.id_exceptie_bilet_internare)
   LEFT JOIN focg_tip_externare AS J1031736 ON ( J1031736.id = 
J1031074.id_focg_tip_externare)
   LEFT JOIN focg_stare_externare AS J1031737 ON ( J1031737.id = 
J1031074.id_focg_stare_externare)
   LEFT JOIN physicians AS J1033295 ON ( J1033295.id = 
J1031074.idphysician_surgeon)
   LEFT JOIN hospital_release_statuses AS J1033299 ON ( J1033299.id = 
J1031074.id_hospital_release_statuses)
   LEFT JOIN citizenship AS J1033304 ON ( J1033304.id = 
J1031074.id_citizenship)
   LEFT JOIN drg_tip_cetatenie AS J1033358 ON ( J1033358.id = 
J1031074.id_drg_tip_cetatenie)
   LEFT JOIN stari_spitalizari AS J1033704 ON ( J1033704.id = 
J1031074.id_stari_spitalizari)


 WHERE J1031102.id IS NULL AND J1033360.id IS NULL AND (
       (     DATE(J1031101.validfrom) >= DATE('2016-05-01')
         AND DATE(J1031101.validfrom) <= DATE('2016-05-31 00:00:00.0') )
    OR (     J1033359.validto IS NOT NULL
         AND DATE(J1033359.validto) >= DATE('2016-05-01')
         AND DATE(J1033359.validto) <= DATE('2016-05-31 00:00:00.0') )
    OR (     J1033359.validto IS NULL
         AND DATE(J1031101.validfrom) <= DATE('2016-05-01') ) )


   ORDER BY J1031101.validfrom DESC
 LIMIT  20 OFFSET 0


-- Original Message --
From: "Sterpu Victor" <vic...@caido.ro>
To: "David Rowley" <david.row...@2ndquadrant.com>
Cc: "Rob Imig" <rimi...@gmail.com>; "PostgreSQL General" 
<pgsql-general@postgresql.org>; "David G. Johnston" 
<david.g.johns...@gmail.com>

Sent: 9/5/2016 10:44:45 AM
Subject: Re: [GENERAL] Slow query when the select list is big


I solved the problem partially by switching the order of a join.
I tested on a slower server to see the difference better.
After moving a single join the query runs in 4.1 seconds instead of 6.4 
seconds.
I pasted the optimized query below. When I move the join J1033704 to the 
end (last join) the time increases to 6.4 seconds.


SELECT
row_number() OVER (ORDER BY J1031101.validfrom DESC) AS nrcrt ,
CASE WHEN(CHAR_LENGTH(J1031074.signature) > 10) THEN(1) ELSE(0) END AS 
semnat_internare ,
CASE WHEN(CHAR_LENGTH(J1031074.signature_externare) > 10) THEN(1) 
ELSE(0) END AS semnat_externare ,

J1031076.name AS sex ,
J1031074.id AS id ,
J1031074.siui_appid AS siui_appid ,
J1031074.data_adeverinta AS data_adeverinta ,
J1031074.is_paliativ AS text_paliativ ,
J1031074.cardno AS cardno ,
J1031074.cardno_externare AS cardno_externare ,
J1031074.sign_date AS sign_date ,
J1031074.sign_date_externare AS sign_date_externare ,
J1031074.unsigned_string AS unsigned_string ,
J1031074.unsigned_string_externare AS unsigned_string_externare ,
J1031074.signhash AS signhash ,
J1031074.signhash_externare AS signhash_externare ,
J1031074.signature AS signature ,
J1031074.signature_externare AS signature_externare ,
J1031074.send_xml AS send_xml ,
J1031074.send_xml_externare AS send_xml_externare ,
J1031074.received_xml AS received_xml ,
J1031074.received_xml_externare AS received_xml_externare ,
J1031074.error AS error ,
J1031074.error_externare AS error_externare ,
J1031074.validat AS validat ,
J1031074.validat_externare AS validat_externare ,
J1031074.online AS online ,
J1031074.online_externare AS online_externare ,
J1031074.serie_bilet_internare AS text_serie_bilet_internare ,
J1031074.nr_bilet_internare AS text_numar_bilet_internare ,
J1031074.idpatient AS _popup_cnp_pacient ,
J1031075.cnp AS popup_cnp_pacient ,
J1031075.name AS text_nume_pacient ,
J1031075.surname AS text_prenume_pacient ,
J1031074.nrfo AS text_numar_fosz ,
J1031074.greutate_nastere AS text_greutate_nastere ,
J1031078.value AS popup_tip_asigurare ,
J1031074.idensuredstatustype AS _popup_tip_asigurare ,
J1031079.value AS popup_statut_asigurat ,
J1031074.idensuredstatus AS _popup_statut_asigurat ,
J1031080.code AS popup_cas_asigurat ,
J1031074.id_org_unit AS _popup_cas_asigurat ,
J1031081.code AS popup_categorie_asigurare ,
J1031074.id_categorie_asigurat AS _popup_categorie_asigurare ,
J1031082.name AS popup_tip_internare ,
J1031074.id_focg_tip_internare AS _popup_tip_internare ,
J1031083.name AS popup_criteriu_internare ,
J1031074.id_focg_criteriu_internare AS _popup_criteriu_internare ,
J1031084.stencil_no AS popup_medic_curant ,
J1031084.id AS _popup_medic_curant ,
J1031089.value AS popup_nivel_instruire ,
J1031074.id_education_level AS _popup_nivel_instruire ,
J1031074.greutate AS text_greutate_internare ,
J1031090.nume AS popup_situatii_speciale ,
J1031074.id_focg_situatii_speciale AS _popup_situatii_speciale ,
J1031091.nume AS popup_internat_prin ,
J1031091.id AS _popup_internat_prin ,
J1031092.nume AS popup_formulare_europene ,
J1031074.id_formulare_europene AS _popup_formulare_europene ,
J1031074.id_cnp_mama AS _popu

Re: [GENERAL] Slow query when the select list is big

2016-05-09 Thread Sterpu Victor
   LEFT JOIN focgdepartment AS J1031101 ON ( J1031101.idfocg = 
J1031074.id)
   LEFT JOIN department AS J1033386 ON ( J1033386.id = 
J1031101.iddepartment)
   LEFT JOIN focgdepartment AS J1031102 ON ( J1031102.idfocg = 
J1031074.id AND  J1031102.validfrom < J1031101.validfrom)
   LEFT JOIN focgdepartment AS J1033359 ON ( J1033359.idfocg = 
J1031074.id)
   LEFT JOIN department AS J1033387 ON ( J1033387.id = 
J1033359.iddepartment)
   LEFT JOIN focgdepartment AS J1033360 ON ( J1033360.idfocg = 
J1031074.id AND  J1033360.validfrom > J1033359.validfrom)
JOIN focgdepartment AS J1033407 ON ( J1033407.idfocg = J1031074.id 
AND  J1033407.iddepartment IN (/*Executing QueryID 476590*/SELECT  
DISTINCT(J1034477.id)  AS  ret FROM department AS J1034477  JOIN 
personnel_department AS J1034478 ON ( J1034478.id_department = 
J1034477.id)
  JOIN personnel AS J1034479 ON ( J1034479.id = J1034478.id_personnel 
AND  J1034479.id_connected_user = '1')))


 WHERE J1031102.id IS NULL AND J1033360.id IS NULL AND (
       (     DATE(J1031101.validfrom) >= DATE('2016-05-01')
         AND DATE(J1031101.validfrom) <= DATE('2016-05-31 00:00:00.0') )
    OR (     J1033359.validto IS NOT NULL
         AND DATE(J1033359.validto) >= DATE('2016-05-01')
         AND DATE(J1033359.validto) <= DATE('2016-05-31 00:00:00.0') )
    OR (     J1033359.validto IS NULL
         AND DATE(J1031101.validfrom) <= DATE('2016-05-01') ) )


   ORDER BY J1031101.validfrom DESC
 LIMIT  20 OFFSET 0

-- Original Message --
From: "David Rowley" <david.row...@2ndquadrant.com>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "Rob Imig" <rimi...@gmail.com>; "PostgreSQL General" 
<pgsql-general@postgresql.org>; "David G. Johnston" 
<david.g.johns...@gmail.com>

Sent: 9/5/2016 10:04:54 AM
Subject: Re: [GENERAL] Slow query when the select list is big

On 9 May 2016 at 18:46, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

 On Sunday, May 8, 2016, Sterpu Victor <vic...@caido.ro> wrote:


 Yes but it is very big.
 I don't understand why the select list is influencing the CPU usage.
 I was expecting that only the join and where clauses would influence 
CPU.




 PostgreSQL is smart enough to optimize away stuff that it knows 
doesn't

 impact the final query result.


To be more accurate about what David is saying, PostgreSQL will remove
unused LEFT JOINed relations where the left joined relation can be
proved not to duplicate rows from the right hand side. It would just
be a matter of comparing the EXPLAIN of the query with all the
SELECT items to the one with the single SELECT item to prove that this
is what's happening.

Please also note that this only occurs with LEFT JOINs.

It would also be quite helpful for people if you were to include a
copy of the query. It's impossible to reverse engineer what it is
from this EXPLAIN output. I see that you're using a window function
and performing a LIMIT 1; there may be ways to improve that just by
selecting the single highest j1031101.validfrom row and performing the
joins to the other tables on that single row, but that will depend on
which window function you're using, as the function may require the
other rows in the window frame to calculate the correct result.

--
 David Rowley http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services






[GENERAL] Slow query when the select list is big

2016-05-08 Thread Sterpu Victor
I have a big query that takes about 7 seconds to run (time spent sending 
the data to the client is not counted).
Postgres uses 100% of one CPU while solving this query. I tried to run the 
query on an HDD and on an SSD with no difference; the HDD shows about 10% 
usage while the query runs.


The query has a big select list and no GROUP BY clause. If I delete 
all select items except one, the query runs in under a second (it doesn't 
matter which field remains selected).
It seems that the query is slow because of the long select list; can I do 
something to make the query faster?


Thank you.


Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor

-- Original Message --
From: "Emre Hasegeli" <e...@hasegeli.com>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 28/3/2016 12:06:23 PM
Subject: Re: [GENERAL] Unique values on multiple tables


 I have 2 tables and I must ensure unique values like this.

 table1
 id
 nr - integer

 table2
 id
 id_table1 - FK in Table 1
 valid_from - timestamp

 There must be unique values for:
 - nr - from table1
 and
 - YEAR(MIN(valid_from)) from table 2


In situations like this, I add the required column to the other table
with a foreign key. Assuming that (id) is the primary key of table1,
you would need another unique key on (nr, id). Then you can add the nr
column to table2 by changing the foreign key to (nr, id_table1)
references table1 (nr, id).

Obviously, it's not an efficient solution. It requires an additional
unique key and more storage on the referencing table. Though, I
believe it is a safe one. It doesn't allow the duplicated column to
be inconsistent. There are many things that can go wrong under
concurrency with a trigger like the one you posted.
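A DDL sketch of the approach just described, reusing the thread's table names (the column types are assumptions):

```sql
CREATE TABLE table1 (
    id  serial PRIMARY KEY,
    nr  integer NOT NULL,
    UNIQUE (nr, id)                  -- extra key the composite FK can reference
);

CREATE TABLE table2 (
    id         serial PRIMARY KEY,
    id_table1  integer NOT NULL,
    nr         integer NOT NULL,     -- duplicated from table1, kept consistent by the FK
    valid_from timestamp NOT NULL,
    FOREIGN KEY (nr, id_table1) REFERENCES table1 (nr, id)
);
```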


This wouldn't work in my case because I need unique values only for the 
smallest valid_from.

Example:
- table1 has row id=1, nr=100
- table2 has row1 id=5, id_table1=1, valid_from=2015-12-01
  row2 id=6, id_table1=1, valid_from=2016-01-01
Then unique values must be assured only for (100, 2015-12-01), ignoring 
the second valid_from (2016-01-01).


I changed the trigger timing from BEFORE to AFTER insert/update, and I 
hope this will solve the problem.
I don't know how Postgres works behind this code, but I hope it will.
Could I use table locks to fix this? Does Postgres automatically lock a 
table while running a trigger on that table?
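One possible answer, offered as an assumption rather than something established in the thread: PostgreSQL does not lock the whole table for a trigger, but the competing inserts can be serialized explicitly with a transaction-scoped advisory lock at the top of the trigger function:

```sql
-- Inside the plpgsql trigger body, before the uniqueness check
-- (currenr_nr_fo is the variable the function already declares):
PERFORM pg_advisory_xact_lock(currenr_nr_fo);
-- Concurrent inserts for the same nr now queue on this lock; the second
-- transaction performs its check only after the first commits, and the
-- lock is released automatically at transaction end.
```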








Re: [GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor
I think I fixed the problem by executing the function AFTER insert or 
update, but I'm not sure.

Until now the execution was BEFORE insert or update.

-- Original Message --
From: "Sterpu Victor" <vic...@caido.ro>
To: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 28/3/2016 9:32:17 AM
Subject: [GENERAL] Unique values on multiple tables


Hello

I have 2 tables and I must ensure unique values like this.

table1
id
nr - integer

table2
id
id_table1 - FK in Table 1
valid_from - timestamp

There must be unique values for:
- nr - from table1
and
- YEAR(MIN(valid_from)) from table 2

I already made this with a function, but in rare cases the function 
fails: when two inserts happen very close in time, the check fails and I 
end up with two numbers in the same year.

How should I fix this?

This is the function:

CREATE OR REPLACE FUNCTION table2_check_uni_func() RETURNS trigger AS
$BODY$
DECLARE
    currenr_nr_fo integer;
    current_id integer;
BEGIN
    IF ( (SELECT id FROM table2 WHERE id_table1 = NEW.id_table1
          ORDER BY valid_from ASC LIMIT 1) = NEW.id ) THEN
        /* the first valid_from is being edited */
        SELECT INTO currenr_nr_fo f.nrfo FROM table1 f
            WHERE f.id = NEW.id_table1;

        IF ( (SELECT count(*) AS nr
              FROM table1 f
              JOIN table2 fd1 ON (fd1.id_table1 = f.id
                  AND to_char(fd1.valid_from, 'YYYY') = TO_CHAR(NEW.valid_from, 'YYYY'))
              LEFT JOIN table2 fd2 ON (fd2.id_table1 = f.id
                  AND fd2.valid_from < fd1.valid_from)
              WHERE f.nr = currenr_nr_fo AND f.id != NEW.id_table1
                  AND fd2.id IS NULL) > 0 ) THEN
            RAISE EXCEPTION 'Nr % already used', currenr_nr_fo;
        ELSE
            RETURN NEW;
        END IF;
    ELSE
        RETURN NEW;
    END IF;
END;
$BODY$ LANGUAGE plpgsql;

Thank you.


DISCLAIMER:
This e-mail message and its attachments are confidential. Distributing, 
disclosing, or otherwise using them is forbidden. If you are not the 
intended recipient of this message, you may not act on this information. 
Reading, copying, distributing, disclosing, or otherwise using the 
information contained in this message is a violation of the law. If you 
received this message in error, please destroy it and notify the sender 
of the mistake. Since e-mail cannot be guaranteed to be a secure and 
error-free means of transmitting information, it is your responsibility 
to ensure that the message (including its attachments) is validated and 
authorized for use in your environment.





[GENERAL] Unique values on multiple tables

2016-03-28 Thread Sterpu Victor

Hello

I have 2 tables and I must ensure unique values like this.

table1
id
nr - integer

table2
id
id_table1 - FK in Table 1
valid_from - timestamp

There must be unique values for:
- nr - from table1
and
- YEAR(MIN(valid_from)) from table 2

I already made this with a function, but in rare cases the function 
fails: when two inserts happen very close in time, the check fails and I 
end up with two numbers in the same year.

How should I fix this?

This is the function:

CREATE OR REPLACE FUNCTION table2_check_uni_func() RETURNS trigger AS
$BODY$
DECLARE
    currenr_nr_fo integer;
    current_id integer;
BEGIN
    IF ( (SELECT id FROM table2 WHERE id_table1 = NEW.id_table1
          ORDER BY valid_from ASC LIMIT 1) = NEW.id ) THEN
        /* the first valid_from is being edited */
        SELECT INTO currenr_nr_fo f.nrfo FROM table1 f
            WHERE f.id = NEW.id_table1;

        IF ( (SELECT count(*) AS nr
              FROM table1 f
              JOIN table2 fd1 ON (fd1.id_table1 = f.id
                  AND to_char(fd1.valid_from, 'YYYY') = TO_CHAR(NEW.valid_from, 'YYYY'))
              LEFT JOIN table2 fd2 ON (fd2.id_table1 = f.id
                  AND fd2.valid_from < fd1.valid_from)
              WHERE f.nr = currenr_nr_fo AND f.id != NEW.id_table1
                  AND fd2.id IS NULL) > 0 ) THEN
            RAISE EXCEPTION 'Nr % already used', currenr_nr_fo;
        ELSE
            RETURN NEW;
        END IF;
    ELSE
        RETURN NEW;
    END IF;
END;
$BODY$ LANGUAGE plpgsql;

Thank you.



Re: [GENERAL] CONCAT returns null

2016-02-28 Thread Sterpu Victor
The problem was pgAdmin, which does not display cells containing a large 
amount of data.

In the application the query works well.
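For reference, CONCAT itself cannot be the source of the NULL here, since it ignores NULL arguments; only the || operator propagates NULL:

```sql
SELECT CONCAT('a', NULL, 'b');   -- returns 'ab'
SELECT 'a' || NULL || 'b';       -- returns NULL
```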

-- Original Message --
From: "Sterpu Victor" <vic...@caido.ro>
To: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 28/2/2016 12:02:47 PM
Subject: [GENERAL] CONCAT returns null


Hello

I have this concat:
CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), f2.nrfo, 
TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), 
DATE(fd5.validto)-DATE(fd1.validfrom))

that works fine, but when I change to this (I added a ' with '):
ARRAY_AGG(CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), ' 
with ', f2.nrfo, TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), 
DATE(fd5.validto)-DATE(fd1.validfrom))) AS temp,

then CONCAT returns NULL.
Why? I tried to add ' with '::VARCHAR and ' with '::TEXT but the result 
is still NULL.


Thank you

This is part of a bigger query, which I pasted below.
SELECT d1.id AS id, d1.name AS name, COUNT(DISTINCT(f2.id)) AS 
nr_spitalizari,
ARRAY_AGG(CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), ' 
cu ', f2.nrfo, '/', TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), ' diferenta 
de ', DATE(fd5.validto)-DATE(fd1.validfrom), ' zile ')) AS fise


FROM focg f
JOIN person p ON p.id = f.idpatient
JOIN focgdepartment fd1 ON fd1.idfocg = f.id
JOIN department d1 ON d1.id = fd1.iddepartment
LEFT JOIN focgdepartment fd2 ON fd2.idfocg = f.id AND fd2.validfrom < 
fd1.validfrom

JOIN focgdepartment fd3 ON fd3.idfocg = f.id
JOIN department d2 ON d2.id = fd3.iddepartment
LEFT JOIN focgdepartment fd4 ON fd4.idfocg = f.id AND fd4.validfrom > 
fd3.validfrom


JOIN focg f2 ON f2.idpatient = f.idpatient AND f2.id != f.id
JOIN focgdepartment fd5 ON fd5.idfocg = f2.id
LEFT JOIN focgdepartment fd6 ON fd6.idfocg = f2.id AND fd6.validfrom > 
fd5.validfrom

JOIN focgdepartment fd7 ON fd7.idfocg = f2.id
LEFT JOIN focgdepartment fd8 ON fd8.idfocg = f2.id AND fd8.validfrom < 
fd7.validfrom


WHERE fd2.id IS NULL AND fd4.id IS NULL AND fd6.id IS NULL AND fd8.id 
IS NULL AND fd5.validto IS NOT NULL AND fd1.validfrom >= '2015-02-01' 
AND fd1.validfrom <= '2016-02-29' AND 
DATE(fd5.validto)-DATE(fd1.validfrom)<=30 AND 
DATE(fd5.validto)-DATE(fd1.validfrom)>=0

GROUP BY d1.name, d1.id;






[GENERAL] CONCAT returns null

2016-02-28 Thread Sterpu Victor

Hello

I have this concat:
CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), f2.nrfo, 
TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), 
DATE(fd5.validto)-DATE(fd1.validfrom))

that works fine, but when I change to this (I added a ' with '):
ARRAY_AGG(CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), ' 
with ', f2.nrfo, TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), 
DATE(fd5.validto)-DATE(fd1.validfrom))) AS temp,

then CONCAT returns NULL.
Why? I tried to add ' with '::VARCHAR and ' with '::TEXT but the result is 
still NULL.


Thank you

This is part of a bigger query, which I pasted below.
SELECT d1.id AS id, d1.name AS name, COUNT(DISTINCT(f2.id)) AS 
nr_spitalizari,
ARRAY_AGG(CONCAT(f.nrfo, '/', TO_CHAR(fd1.validfrom, 'YYYY-MM-DD'), ' cu 
', f2.nrfo, '/', TO_CHAR(fd7.validfrom, 'YYYY-MM-DD'), ' diferenta de ', 
DATE(fd5.validto)-DATE(fd1.validfrom), ' zile ')) AS fise


FROM focg f
JOIN person p ON p.id = f.idpatient
JOIN focgdepartment fd1 ON fd1.idfocg = f.id
JOIN department d1 ON d1.id = fd1.iddepartment
LEFT JOIN focgdepartment fd2 ON fd2.idfocg = f.id AND fd2.validfrom < 
fd1.validfrom

JOIN focgdepartment fd3 ON fd3.idfocg = f.id
JOIN department d2 ON d2.id = fd3.iddepartment
LEFT JOIN focgdepartment fd4 ON fd4.idfocg = f.id AND fd4.validfrom > 
fd3.validfrom


JOIN focg f2 ON f2.idpatient = f.idpatient AND f2.id != f.id
JOIN focgdepartment fd5 ON fd5.idfocg = f2.id
LEFT JOIN focgdepartment fd6 ON fd6.idfocg = f2.id AND fd6.validfrom > 
fd5.validfrom

JOIN focgdepartment fd7 ON fd7.idfocg = f2.id
LEFT JOIN focgdepartment fd8 ON fd8.idfocg = f2.id AND fd8.validfrom < 
fd7.validfrom


WHERE fd2.id IS NULL AND fd4.id IS NULL AND fd6.id IS NULL AND fd8.id IS 
NULL AND fd5.validto IS NOT NULL AND fd1.validfrom >= '2015-02-01' AND 
fd1.validfrom <= '2016-02-29' AND 
DATE(fd5.validto)-DATE(fd1.validfrom)<=30 AND 
DATE(fd5.validto)-DATE(fd1.validfrom)>=0

GROUP BY d1.name, d1.id;


[GENERAL] Windows performance

2016-02-12 Thread Sterpu Victor

Hello

Why is Postgres so slow on Windows compared to Linux?
Can I do something to match the performance?
I have 2 servers:
- one is Windows 8, Xeon CPU, 8 cores, 32 GB of RAM - my test query runs 
in 17 seconds
- the other is Linux, Xeon CPU, 4 cores, 4 GB of RAM - my test query runs 
in 2 seconds

I run the query on the same database.

CPU is not maxed out on either server, and RAM is fine.
Is there a problem with Windows? My guess is that Windows is not using 
the hardware resources as it should.

Can I do something to fix this?

Thank you.


Re: [GENERAL] Windows performance

2016-02-12 Thread Sterpu Victor

There are some differences that I haven't mentioned.
Postgres on Linux is PostgreSQL 9.1.4, 64-bit.
Postgres on Windows is PostgreSQL 9.5.0, compiled by Visual C++ build 
1800, 64-bit.


The query is very big, but I pasted it at the end of this mail together 
with the EXPLAIN ANALYZE output.
I ran the queries many times on each machine, so the cache is not a 
factor.


The postgresql.conf files are at their defaults; I haven't compared them 
yet, but I will.
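One quick way to diff the two servers' configurations is to list, on each server, only the settings that differ from their built-in defaults (a generic sketch):

```sql
-- Settings whose value comes from somewhere other than the built-in
-- default (configuration file, command line, etc.); run on both
-- servers and compare the output.
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;
```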


"Limit  (cost=163011.25..163011.63 rows=1 width=3640) (actual 
time=2811.693..2812.109 rows=20 loops=1)"
"  ->  WindowAgg  (cost=163011.25..163011.63 rows=1 width=3640) (actual 
time=2811.691..2812.103 rows=20 loops=1)"
"->  Group  (cost=163011.25..163011.60 rows=1 width=3640) 
(actual time=2711.668..2711.823 rows=20 loops=1)"
"  ->  Sort  (cost=163011.25..163011.26 rows=1 width=3640) 
(actual time=2711.662..2711.685 rows=21 loops=1)"
"Sort Key: j1031101.validfrom, j1033386.name, 
j1033387.name, j1033359.validto, j1031076.name, j1031074.id, 
j1031074.siui_appid, j1031074.data_adeverinta, j1031074.is_paliativ, 
j1031074.cardno, j1031074.cardno_externare, j1031074.sign_date, 
j1031074.sign_date_externare, j1031074.unsigned_string, 
j1031074.unsigned_string_externare, j1031074.signhash, 
j1031074.signhash_externare, j1031074.signature, 
j1031074.signature_externare, j1031074.send_xml, 
j1031074.send_xml_externare, j1031074.received_xml, 
j1031074.received_xml_externare, j1031074.error, 
j1031074.error_externare, j1031074.validat, j1031074.validat_externare, 
j1031074.online, j1031074.online_externare, 
j1031074.serie_bilet_internare, j1031074.nr_bilet_internare, 
j1031074.idpatient, j1031075.cnp, j1031075.name, j1031075.surname, 
j1031074.nrfo, j1031074.greutate_nastere, j1031078.value, 
j1031074.idensuredstatustype, j1031079.value, j1031074.idensuredstatus, 
j1031080.code, j1031074.id_org_unit, j1031081.code, 
j1031074.id_categorie_asigurat, j1031082.name, 
j1031074.id_focg_tip_internare, j1031083.name, 
j1031074.id_focg_criteriu_internare, j1031084.stencil_no, j1031084.id, 
j1031089.value, j1031074.id_education_level, j1031074.greutate, 
j1031090.nume, j1031074.id_focg_situatii_speciale, j1031091.nume, 
j1031091.id, j1031092.nume, j1031074.id_formulare_europene, 
j1031074.id_cnp_mama, j1031094.cnp, j1031093.nrfo, j1031074.id_focg, 
j1031074.nr_card_euro, j1031074.nr_pasaport, j1031074.nr_card_national, 
j1031088.id, j1031088.name, j1031074.export_drg, j1031074.drgcaseid, 
j1031074.export_ecosoft, j1031074.mesaj_drg, j1031074.uid, 
j1031074.mesaj_ecosoft, j1031074.id_address_domiciliu, 
j1031074.id_address_resedinta, j1031095.id, j1031095.denumire, 
j1031096.id, j1031096.code, j1031097.id, j1031097.name, j1031098.id, 
j1031098.description, j1031099.id, j1031099.name, j1031100.id, 
j1031100.code, j1031074.scrisoare_medicala_parafa, 
j1031074.scrisoare_medicala_contract, 
j1031074.scrisoare_medicala_tip_contract, j1031074.export_siui, 
j1031074.mesaj_siui, j1031087.id, j1031087.stencil_no, 
j1031074.diagnostic_trimitere_text, j1031074.greutate_externare, 
j1031074.data_decesului, j1031736.id, j1031736.descriere, j1031737.id, 
j1031737.descriere, j1033295.id, j1033295.stencil_no, j1033299.id, 
j1033299.description, j1031074.text_ore_ventilatie, 
j1031074.drg_cod_grupa, j1031074.drg_relative_value, 
j1031074.data_2500g, j1031074.prematur_gr_i, j1033304.id, 
j1033304.description, j1033358.id, j1033358.name, 
j1031074.reinternat_transfer, j1031074.aviz_comisie, 
j1031074.criteriu_urgenta1, j1031074.criteriu_urgenta2, 
j1031074.criteriu_urgenta3, j1031074.criteriu_urgenta4, 
j1031074.criteriu_urgenta5, j1031074.criteriu_urgenta6, 
j1031074.criteriu_urgenta7, j1031074.criteriu_urgenta8, 
j1031074.criteriu_urgenta9a, j1031074.criteriu_urgenta9b, 
j1031074.criteriu_urgenta10, j1031074.criteriu_urgenta11, 
j1031074.criteriu_urgenta12, j1031074.criteriu_urgenta13"

"Sort Method: external merge  Disk: 3192kB"
"->  Nested Loop  (cost=87.71..163011.24 rows=1 
width=3640) (actual time=204.650..2579.588 rows=3075 loops=1)"
"  ->  Nested Loop Left Join  
(cost=87.71..163002.94 rows=1 width=3640) (actual time=204.646..2573.701 
rows=2674 loops=1)"

"Filter: (j1033360.id IS NULL)"
"->  Nested Loop Left Join  
(cost=87.71..162994.64 rows=1 width=3648) (actual time=204.642..2568.543 
rows=2736 loops=1)"
"  ->  Nested Loop Left Join  
(cost=87.71..162994.36 rows=1 width=3631) (actual time=204.636..2563.826 
rows=2736 loops=1)"
"Filter: 
(((date(j1031101.validfrom) >= '2016-02-01'::date) AND 
(date(j1031101.validfrom) <= '2016-02-29'::date)) OR ((j1033359.validto 
IS NOT NULL) AND (date(j1033359.validto) >= '2016-02-01'::date) AND 
(date(j1033359.validto) <= '2016-02-29'::date)) OR ((j1033359.validto IS 
NULL) AND (date(j1031101.validfrom) <= '2016-02-01'::date)))"
" 

Re: [GENERAL] Unique index problem

2015-12-21 Thread Sterpu Victor

Seems to be working also with a single index, like this:
CREATE UNIQUE INDEX lab_tests_groups_siui_uni_c ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from, coalesce(id_lab_sample_types, 0));


-- Original Message --
From: "Pavel Stehule" <pavel.steh...@gmail.com>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "Marc Mamin" <m.ma...@intershop.de>; "PostgreSQL General" 
<pgsql-general@postgresql.org>; "Andreas Kretschmer" 
<akretsch...@spamfence.net>; "Scott Marlowe" <scott.marl...@gmail.com>

Sent: 12/21/2015 8:44:14 AM
Subject: Re: [GENERAL] Unique index problem




2015-12-21 7:39 GMT+01:00 Sterpu Victor <vic...@caido.ro>:

Thank you.

I used the syntax with 2 indexes, it works for me.
But why does NULL != NULL?



because it was designed

http://www.w3schools.com/sql/sql_null_values.asp

Pavel
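The three-valued logic behind this can be checked directly (standard SQL behaviour, not PostgreSQL-specific):

```sql
SELECT NULL = NULL;                 -- yields NULL, not true
SELECT NULL IS NULL;                -- true
SELECT NULL IS DISTINCT FROM NULL;  -- false: this operator treats NULLs as equal
```

A unique index compares key values with equality, and since NULL = NULL yields NULL rather than true, two rows with NULL in a key column never count as duplicates.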



-- Original Message --
From: "Marc Mamin" <m.ma...@intershop.de>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>; "Andreas 
Kretschmer" <akretsch...@spamfence.net>; "Scott Marlowe" 
<scott.marl...@gmail.com>

Sent: 12/20/2015 11:44:35 PM
Subject: AW: [GENERAL] Unique index problem



________________________________
From: pgsql-general-ow...@postgresql.org 
[pgsql-general-ow...@postgresql.org] on behalf of Scott 
Marlowe [scott.marl...@gmail.com]

Sent: Sunday, 20 December 2015 17:02
To: Sterpu Victor
Cc: PostgreSQL General
Subject: Re: [GENERAL] Unique index problem

On Sun, Dec 20, 2015 at 9:00 AM, Scott Marlowe <scott.marl...@gmail.com> 
wrote:

 On Sun, Dec 20, 2015 at 8:50 AM, Sterpu Victor <vic...@caido.ro> wrote:
 Hello

 I created a unique index that doesn't seem to work when one 
column is NULL.
 Index is created like this: CREATE UNIQUE INDEX 
lab_tests_groups_siui_uni ON
 lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from,

 id_lab_sample_types);
 Now I can run this insert twice and I will have 2 records in the 
database

 that seem to violate this index:
 INSERT INTO lab_tests_groups_siui(id_lab_tests_siui, 
id_lab_tests_groups,

 valid_from) VALUES(463, 9183, '2014-06-01');

 When I create the index like this "CREATE UNIQUE INDEX
 lab_tests_groups_siui_uni ON 
lab_tests_groups_siui(id_lab_tests_siui,

 id_lab_tests_groups, valid_from);" index works fine.

 I tested this on postgres 9.1.4 and 9.1.9.


 This is normal operation, as one NULL is unique from other NULLS, 
as
 far as the db is concerned. If you want it to work some other way, 
you
 need to use a value other than null, or make an index that's 
something

 like un



Hello,

 CREATE UNIQUE INDEX lab_tests_groups_siui_uni ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from, id_lab_sample_types);


assuming that only id_lab_sample_types can be null, you could cover 
this with 2 partial indexes:


CREATE UNIQUE INDEX lab_tests_groups_siui_uni_a ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from) WHERE (id_lab_sample_types IS NULL);

and
CREATE UNIQUE INDEX lab_tests_groups_siui_uni_b ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from, id_lab_sample_types) WHERE (id_lab_sample_types IS NOT 
NULL);


There is a serious caveat though: queries that don't contain an 
"id_lab_sample_types IS [NOT] NULL" condition will ignore the index.


Maybe there is also a way using DISTINCT(id_lab_tests_siui, 
id_lab_tests_groups, valid_from, id_lab_sample_types) in the index 
definition, but I've never tried that and suspect the planner will 
also have trouble to include such an index in the plan.


regards,

Marc Mamin
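An aside for readers on current releases: PostgreSQL 15 added NULLS NOT DISTINCT, which solves this directly without partial indexes or the coalesce() trick (sketched against the thread's table):

```sql
-- PostgreSQL 15 and later only
CREATE UNIQUE INDEX lab_tests_groups_siui_uni ON lab_tests_groups_siui
    (id_lab_tests_siui, id_lab_tests_groups, valid_from, id_lab_sample_types)
    NULLS NOT DISTINCT;
```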













--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Unique index problem

2015-12-20 Thread Sterpu Victor

Thank you.

I used the syntax with 2 indexes, it works for me.
But why does NULL != NULL?


-- Original Message --
From: "Marc Mamin" <m.ma...@intershop.de>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>; "Andreas 
Kretschmer" <akretsch...@spamfence.net>; "Scott Marlowe" 
<scott.marl...@gmail.com>

Sent: 12/20/2015 11:44:35 PM
Subject: AW: [GENERAL] Unique index problem




From: pgsql-general-ow...@postgresql.org 
[pgsql-general-ow...@postgresql.org] on behalf of Scott 
Marlowe [scott.marl...@gmail.com]

Sent: Sunday, 20 December 2015 17:02
To: Sterpu Victor
Cc: PostgreSQL General
Subject: Re: [GENERAL] Unique index problem

On Sun, Dec 20, 2015 at 9:00 AM, Scott Marlowe <scott.marl...@gmail.com> 
wrote:

 On Sun, Dec 20, 2015 at 8:50 AM, Sterpu Victor <vic...@caido.ro> wrote:
 Hello

 I created a unique index that doesn't seem to work when one column 
is NULL.
 Index is created like this: CREATE UNIQUE INDEX 
lab_tests_groups_siui_uni ON
 lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from,

 id_lab_sample_types);
 Now I can run this insert twice and I will have 2 records in the 
database

 that seem to violate this index:
 INSERT INTO lab_tests_groups_siui(id_lab_tests_siui, 
id_lab_tests_groups,

 valid_from) VALUES(463, 9183, '2014-06-01');

 When I create the index like this "CREATE UNIQUE INDEX
 lab_tests_groups_siui_uni ON 
lab_tests_groups_siui(id_lab_tests_siui,

 id_lab_tests_groups, valid_from);" index works fine.

 I tested this on postgres 9.1.4 and 9.1.9.


 This is normal operation, as one NULL is unique from other NULLS, as
 far as the db is concerned. If you want it to work some other way, 
you
 need to use a value other than null, or make an index that's 
something

 like un



Hello,

 CREATE UNIQUE INDEX lab_tests_groups_siui_uni ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from, id_lab_sample_types);


assuming that only id_lab_sample_types can be null, you could cover 
this with 2 partial indexes:


CREATE UNIQUE INDEX lab_tests_groups_siui_uni_a ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from) WHERE (id_lab_sample_types IS NULL);

and
CREATE UNIQUE INDEX lab_tests_groups_siui_uni_b ON 
lab_tests_groups_siui(id_lab_tests_siui, id_lab_tests_groups, 
valid_from, id_lab_sample_types) WHERE (id_lab_sample_types IS NOT 
NULL);


There is a serious caveat though: queries that don't contain an 
"id_lab_sample_types IS [NOT] NULL" condition will ignore the index.


Maybe there is also a way using DISTINCT(id_lab_tests_siui, 
id_lab_tests_groups, valid_from, id_lab_sample_types) in the index 
definition, but I've never tried that and suspect the planner will also 
have trouble to include such an index in the plan.


regards,

Marc Mamin













--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unique index problem

2015-12-20 Thread Sterpu Victor

Hello

I created a unique index that doesn't seem to work when one column is 
NULL.
Index is created like this: CREATE UNIQUE INDEX 
lab_tests_groups_siui_uni ON lab_tests_groups_siui(id_lab_tests_siui, 
id_lab_tests_groups, valid_from, id_lab_sample_types);
Now I can run this insert twice and I will have 2 records in the 
database that seem to violate this index:
INSERT INTO lab_tests_groups_siui(id_lab_tests_siui, 
id_lab_tests_groups, valid_from) VALUES(463, 9183, '2014-06-01');


When I create the index like this "CREATE UNIQUE INDEX 
lab_tests_groups_siui_uni ON lab_tests_groups_siui(id_lab_tests_siui, 
id_lab_tests_groups, valid_from);" index works fine.


I tested this on postgres 9.1.4 and 9.1.9.

Thank you.


Re: [GENERAL] DISTINCT in STRING_AGG

2015-11-30 Thread Sterpu Victor

Thank you.

I think there is no native function that will solve this problem; the 
intarray extension can't order as in the example query.
I could write a new SQL function but I solved the problem with 
subqueries and works fine.


I need order because I use nested trees.
To_left and to_right are the coordinates for each node, and the result 
must be ordered by these to obtain the correct result.





-- Original Message --
From: "Geoff Winkless" <pgsqlad...@geoff.dj>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 11/29/2015 10:51:32 PM
Subject: Re: Re[2]: [GENERAL] DISTINCT in STRING_AGG


On 29 November 2015 at 18:59, Sterpu Victor <vic...@caido.ro> wrote:

I can't skip the ordering.
I'm sure aqjs3 is the one that produces the duplication.
I guess subqueries are the only option, like this:


Well you could look at the intarray extension and a combination of 
array_agg, uniq() and string_to_array:


http://www.postgresql.org/docs/current/static/intarray.html

but that's probably sledgehammer:nut time.

Not sure why you need to order the values you're getting back by 
something other than the values themselves - is there a reason you 
wouldn't want the "children" set to be ordered numerically? You can 
still order the outer query by whatever you like, or you can order the 
aggregate by the values themselves, it's just the DISTINCT inside the 
aggregate query requires that an internal ORDER includes the ordering 
term in the result.


Geoff
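Concretely, using the column names from the thread, these are the two shapes side by side (the error text is taken from recent PostgreSQL releases):

```sql
-- Works: the ORDER BY expression is the aggregated value itself
STRING_AGG(DISTINCT CAST(aqjs1.id AS VARCHAR), ','
           ORDER BY CAST(aqjs1.id AS VARCHAR))

-- Fails: ERROR:  in an aggregate with DISTINCT,
--        ORDER BY expressions must appear in argument list
STRING_AGG(DISTINCT CAST(aqjs1.id AS VARCHAR), ',' ORDER BY aqjs1.to_left)
```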



Re: [GENERAL] DISTINCT in STRING_AGG

2015-11-29 Thread Sterpu Victor

I can't skip the ordering.
I'm sure aqjs3 is the one that produces the duplication.
I guess subqueries are the only option, like this:

SELECT atjs.id,  tmp.children AS children
FROM administration.ad_query_join_select atjs

JOIN (SELECT
 atjs.id, STRING_AGG(CAST(aqjs1.id AS VARCHAR), ',' ORDER BY 
aqjs1.to_left) AS children

 FROM administration.ad_query_join_select atjs
 LEFT JOIN administration.ad_query_join_select aqjs1 ON 
(aqjs1.id_ad_query = atjs.id_ad_query AND aqjs1.to_left>atjs.to_left 
AND aqjs1.to_right<atjs.to_right)
 LEFT JOIN administration.ad_query_join_select aqjs2 ON 
(aqjs2.id_ad_query = atjs.id_ad_query AND aqjs2.to_left>atjs.to_left 
AND aqjs2.to_right<atjs.to_right AND aqjs2.to_left<aqjs1.to_left 
AND aqjs2.to_right>aqjs1.to_right)

 WHERE aqjs2.id IS NULL AND atjs.id_ad_query = 475543
 GROUP BY atjs.id) tmp ON (tmp.id = atjs.id)

JOIN administration.ad_query aq ON (aq.id=atjs.id_ad_query)
LEFT JOIN administration.ad_query_join_select aqjs3 ON 
(aqjs3.id_ad_query = atjs.id_ad_query AND aqjs3.to_left<atjs.to_left 
AND aqjs3.to_right>atjs.to_right)

WHERE atjs.id_ad_query = 475543
GROUP BY aq.id, atjs.id, tmp.children
ORDER BY aq.id ASC, atjs.to_left ASC;



Result is:
id  ; children
1399029;"1399031"
1399031;"1399032,1399033"
Is there a better way? I usually try to avoid subqueries.

-- Original Message --
From: "Geoff Winkless" <pgsqlad...@geoff.dj>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 11/29/2015 6:42:18 PM
Subject: Re: [GENERAL] DISTINCT in STRING_AGG


On 28 November 2015 at 18:35, Sterpu Victor <vic...@caido.ro> wrote:

Can I make a distinct STRING_AGG?
This is my query :
SELECT atjs.id, STRING_AGG(CAST(aqjs1.id AS VARCHAR), ',' ORDER BY 
aqjs1.to_left) AS children

FROM administration.ad_query_join_select atjs
JOIN administration.ad_query aq ON (aq.id=atjs.id_ad_query)
LEFT JOIN administration.ad_query_join_select aqjs1 ON 
(aqjs1.id_ad_query = atjs.id_ad_query AND aqjs1.to_left>atjs.to_left 
AND aqjs1.to_right<atjs.to_right)
LEFT JOIN administration.ad_query_join_select aqjs2 ON 
(aqjs2.id_ad_query = atjs.id_ad_query AND aqjs2.to_left>atjs.to_left 
AND aqjs2.to_right<atjs.to_right AND aqjs2.to_left<aqjs1.to_left 
AND aqjs2.to_right>aqjs1.to_right)
LEFT JOIN administration.ad_query_join_select aqjs3 ON 
(aqjs3.id_ad_query = atjs.id_ad_query AND aqjs3.to_left<atjs.to_left 
AND aqjs3.to_right>atjs.to_right)

WHERE aqjs2.id IS NULL AND atjs.id_ad_query = 475543
GROUP BY aq.id, atjs.id
ORDER BY aq.id ASC, atjs.to_left ASC;

And "children" contains doubles. The result is:
id  ; children
1399029;"1399031,1399031"
1399031;"1399032,1399032,1399032,1399033,1399033,1399033"

There are doubles because of the join aqjs3 which is producing this 
problem.

Can I make it so the children ID's are unique?



Well if you can live with losing the to_left ordering, then you could 
just do


SELECT STRING_AGG(DISTINCT CAST(aqjs1.id AS VARCHAR), '') AS children
 ...

no?

Geoff



[GENERAL] DISTINCT in STRING_AGG

2015-11-28 Thread Sterpu Victor

Hello

Can I make a distinct STRING_AGG?
This is my query :
SELECT atjs.id, STRING_AGG(CAST(aqjs1.id AS VARCHAR), ',' ORDER BY 
aqjs1.to_left) AS children

FROM administration.ad_query_join_select atjs
JOIN administration.ad_query aq ON (aq.id=atjs.id_ad_query)
LEFT JOIN administration.ad_query_join_select aqjs1 ON 
(aqjs1.id_ad_query = atjs.id_ad_query AND aqjs1.to_left>atjs.to_left AND 
aqjs1.to_right<atjs.to_right)
LEFT JOIN administration.ad_query_join_select aqjs2 ON 
(aqjs2.id_ad_query = atjs.id_ad_query AND aqjs2.to_left>atjs.to_left AND 
aqjs2.to_right<atjs.to_right AND aqjs2.to_left<aqjs1.to_left AND 
aqjs2.to_right>aqjs1.to_right)
LEFT JOIN administration.ad_query_join_select aqjs3 ON 
(aqjs3.id_ad_query = atjs.id_ad_query AND aqjs3.to_left<atjs.to_left AND 
aqjs3.to_right>atjs.to_right)

WHERE aqjs2.id IS NULL AND atjs.id_ad_query = 475543
GROUP BY aq.id, atjs.id
ORDER BY aq.id ASC, atjs.to_left ASC;

And "children" contains doubles. The result is:
id  ; children
1399029;"1399031,1399031"
1399031;"1399032,1399032,1399032,1399033,1399033,1399033"

There are doubles because of the join aqjs3 which is producing this 
problem.

Can I make it so the children ID's are unique?

Thank you.


[GENERAL] ARRAY_AGG and ORDER

2015-11-26 Thread Sterpu Victor

Hello

I need to order an array using another column in table 
ad_query_join_select.

I need something like this but this is not a valid SQL:
SELECT array_to_string(array_agg(aqjs.id ORDER BY aqjs.to_left), ',') AS 
str, aq.name

FROM ad_query aq
JOIN ad_query_join_select aqjs ON (aqjs.id_ad_query = aq.id)
GROUP BY aq.id, aq.name;

Is there some equivalent syntax in Postgres?

Thank you.


Re: [GENERAL] ARRAY_AGG and ORDER

2015-11-26 Thread Sterpu Victor

Yes, thank you. :)

-- Original Message --
From: "Sterpu Victor" <vic...@caido.ro>
To: "Geoff Winkless" <pgsqlad...@geoff.dj>
Sent: 11/26/2015 2:51:48 PM
Subject: Re[2]: [GENERAL] ARRAY_AGG and ORDER


Yes, thank you. :)


-- Original Message --
From: "Geoff Winkless" <pgsqlad...@geoff.dj>
To: "Sterpu Victor" <vic...@caido.ro>
Cc: "PostgreSQL General" <pgsql-general@postgresql.org>
Sent: 11/26/2015 2:46:33 PM
Subject: Re: [GENERAL] ARRAY_AGG and ORDER


On 26 November 2015 at 12:43, Sterpu Victor <vic...@caido.ro> wrote:

Hello

I need to order an array using another column in table 
ad_query_join_select.

I need something like this but this is not a valid SQL:
SELECT array_to_string(array_agg(aqjs.id ORDER BY aqjs.to_left), ',') 
AS str, aq.name

FROM ad_query aq
JOIN ad_query_join_select aqjs ON (aqjs.id_ad_query = aq.id)
GROUP BY aq.id, aq.name;

Is there some equivalent syntax in Postgres?


As per the manual?

http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES

SELECT string_agg(a, ',' ORDER BY a) FROM table;
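A self-contained check (inline VALUES, no tables needed) that the aggregate's ORDER BY may use a different column than the one being aggregated:

```sql
SELECT array_to_string(array_agg(id ORDER BY pos), ',') AS str
FROM (VALUES (10, 3), (20, 1), (30, 2)) AS v(id, pos);
-- str = '20,30,10'
```

This has been supported since PostgreSQL 9.0, so the query in the question is in fact valid SQL on current releases.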


Re: [GENERAL] Subselect with no records results in final empty set

2015-01-30 Thread Sterpu Victor

I always have a single row in these selects, so the result will have only
one row.
I must make a single select for a framework that takes a single select
as a parameter to complete an XML template in a particular situation.

-- Original Message --
From: John R Pierce pie...@hogranch.com
To: pgsql-general@postgresql.org
Sent: 1/29/2015 10:52:25 PM
Subject: Re: [GENERAL] Subselect with no records results in final empty
set


On 1/29/2015 12:36 PM, Sterpu Victor wrote:


ON(null) never matched.


NULL is neither true nor false.

ON somefieldinthejoin IS NULL would be valid syntax. Except that's
NOT a join condition; a join condition would be ON left_table.something
= right_table.something


ON (1=1)


equivalent to ON TRUE

but that will cross join everything, so if the left table has N rows
and the right table has M rows, you'll end up with N*M rows in the
result. is that really what you want ??




-- john r pierce 37N 122W
somewhere on the middle of the left coast



-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



---
This email has been checked for viruses by Avast antivirus software.
http://www.avast.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can I unite 2 selects?

2015-01-29 Thread Sterpu Victor

Hello

Can I write a query where I receive a single result set from many
queries?
Something like this: SELECT (SELECT 1 AS t1, 2 AS t2), (SELECT 3 AS t3)
I tried exactly this but the error is: ERROR:  subquery must return
only one column
But I don't see why it must have only one column.

Thank you




[GENERAL] Subselect with no records results in final empty set

2015-01-29 Thread Sterpu Victor

Hello

I have this select where the last subselect will return an empty set, and
because of this the whole select will be empty.
How can I change this syntax so I will have a row result even if the
last select is empty?

SELECT * FROM (SELECT 1 AS t1, 2 AS t2) as t, (SELECT 3 AS t3) as s,
(SELECT * FROM atc WHERE id = '1231222' LIMIT 1 OFFSET 0) AS s3;

Thank you.




Re: [GENERAL] Can I unite 2 selects?

2015-01-29 Thread Sterpu Victor

Thank you.
This is the syntax I was looking for.

-- Original Message --
From: Adrian Klaver adrian.kla...@aklaver.com
To: Sterpu Victor vic...@caido.ro; PostgreSQL General
pgsql-general@postgresql.org
Sent: 1/29/2015 9:09:31 PM
Subject: Re: [GENERAL] Can I unite 2 selects?

On 01/29/2015 10:58 AM, Sterpu Victor wrote:
Hello
Can I write a query where I receive a single result set from many
queries?
Something like this: SELECT (SELECT 1 AS t1, 2 AS t2), (SELECT 3 AS t3)
I tried exactly this but the error is: ERROR: subquery must return
only one column
But I don't see why it must have only one column.

postgres@test=# SELECT * from (SELECT 1 AS t1, 2 AS t2) as t, (SELECT 3
AS t3) as s;
 t1 | t2 | t3
----+----+----
  1 |  2 |  3


Thank you



-- Adrian Klaver
adrian.kla...@aklaver.com





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Subselect with no records results in final empty set

2015-01-29 Thread Sterpu Victor

It works as you suggested, this is the syntax I used:
SELECT * FROM
(SELECT 1 AS t1, 2 AS t2) AS t1
LEFT JOIN (SELECT * FROM atc WHERE id = '1231222') AS t2 ON (null)

Thank you.

-- Original Message --
From: David G Johnston david.g.johns...@gmail.com
To: pgsql-general@postgresql.org
Sent: 1/29/2015 10:03:38 PM
Subject: Re: [GENERAL] Subselect with no records results in final empty
set


Sterpu Victor wrote

 Hello

 I have this select where the last subselect will return an empty set 
and
 because of this the whole select will be empty.
 How can I change this syntax so I will have a row result even if the
 last select is empty?

 SELECT * FROM (SELECT 1 AS t1, 2 AS t2) as t, (SELECT 3 AS t3) as s,
 (SELECT * FROM atc WHERE id = '1231222' LIMIT 1 OFFSET 0) AS s3;


Use explicit join syntax; and then pick the LEFT [OUTER] JOIN
variant.

SELECT
FROM t
LEFT JOIN s3 ON (TRUE)

David J.



--
View this message in context:
http://postgresql.nabble.com/Subselect-with-no-records-results-in-final-empty-set-tp5836011p5836014.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Subselect with no records results in final empty set

2015-01-29 Thread Sterpu Victor

I changed the final query to
SELECT * FROM
(SELECT 1 AS t1, 2 AS t2) AS t1
LEFT JOIN (SELECT * FROM atc WHERE id = '1231222') AS t2 ON (1=1)


ON(null) never matched.
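A self-contained sketch of the difference between the two join conditions (no real table needed):

```sql
-- Empty right side: with a LEFT JOIN the left row survives either way,
-- padded with NULLs for the right side's columns.
SELECT *
FROM (SELECT 1 AS t1, 2 AS t2) AS t
LEFT JOIN (SELECT 3 AS t3 WHERE false) AS s3 ON TRUE;
--  t1 | t2 | t3
--   1 |  2 |       (t3 is NULL)

-- ON (NULL) behaves like ON (FALSE) because NULL is not TRUE, so it
-- never matches; the two conditions only differ once the right side
-- actually has rows to match.
```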

-- Original Message --
From: Sterpu Victor vic...@caido.ro
To: David G Johnston david.g.johns...@gmail.com;
pgsql-general@postgresql.org
Sent: 1/29/2015 10:22:28 PM
Subject: Re: [GENERAL] Subselect with no records results in final empty
set


It works as you suggested, this is the syntax I used:
SELECT * FROM
(SELECT 1 AS t1, 2 AS t2) AS t1
LEFT JOIN (SELECT * FROM atc WHERE id = '1231222') AS t2 ON (null)

Thank you.

-- Original Message --
From: David G Johnston david.g.johns...@gmail.com
To: pgsql-general@postgresql.org
Sent: 1/29/2015 10:03:38 PM
Subject: Re: [GENERAL] Subselect with no records results in final empty
set


Sterpu Victor wrote

 Hello

 I have this select where the last subselect will return an empty set 
and
 because of this the whole select will be empty.
 How can I change this syntax so I will have a row result even if the
 last select is empty?

 SELECT * FROM (SELECT 1 AS t1, 2 AS t2) as t, (SELECT 3 AS t3) as s,
 (SELECT * FROM atc WHERE id = '1231222' LIMIT 1 OFFSET 0) AS s3;


Use explicit join syntax; and then pick the LEFT [OUTER] JOIN
variant.

SELECT
FROM t
LEFT JOIN s3 ON (TRUE)

David J.



--
View this message in context:
http://postgresql.nabble.com/Subselect-with-no-records-results-in-final-empty-set-tp5836011p5836014.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgresql 9.4 session vars

2015-01-24 Thread Sterpu Victor

Hello

In previous versions I was able to define session variables in
postgresql.conf with the option custom_variable_classes, but this
option is no longer available.
Is there a replacement for this in postgres 9.4?
Is there a replacement for this in postgres 9.4?

Thank you
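For the record, this is the behaviour since 9.2, when custom_variable_classes was removed: any dotted parameter name can now be set without declaring a class first. The names below are hypothetical:

```sql
SET myapp.user_id = '42';
SELECT current_setting('myapp.user_id');          -- '42'
SELECT set_config('myapp.user_id', '43', true);   -- true = transaction-local
```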





[GENERAL] FATAL: invalid frontend message type 47 server closed the connection unexpectedly

2007-01-17 Thread Sterpu Victor

I receive this message when I do a specific operation.
How can I debug something like this?

I use a Centos 4.0 distribution, postgresql version 8.1.3 (I have also used 
different distributions and other versions of postgresql).

I connect to postgresql using perl DBD::Pg, using a tcp socket.

The error appears when I do a specific operation, but if I try many 
times, it will work.

It works only if I try many times and fast.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] FATAL: invalid frontend message type 47

2006-01-24 Thread Sterpu Victor

After a commit I receive this message:
FATAL:  invalid frontend message type 47, and rollback.



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] date/time function

2005-12-01 Thread Sterpu Victor
I need a function that will transform from the format '2002-09-08 
05:29:41' to seconds since 1970.


Thank you.
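The standard answer here is EXTRACT(EPOCH FROM ...):

```sql
SELECT EXTRACT(EPOCH FROM TIMESTAMP '2002-09-08 05:29:41');
-- seconds since 1970-01-01; for timestamptz the result is relative to UTC,
-- a plain timestamp is taken at face value
```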

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] sequence problem - many rows

2005-11-29 Thread Sterpu Victor
I should start the sequence with the highest number in the table + 1.

It works.
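The fix described above can be done in one statement with setval(), using the names from the post:

```sql
-- Re-sync the sequence after a bulk load that supplied explicit ids
SELECT setval('cachedgroupmembers_id_seq',
              (SELECT max(id) FROM cachedgroupmembers));
```

The underlying cause: the 700,000 rows were loaded with explicit id values, so the sequence never advanced and nextval() handed out ids that already existed.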

Sterpu Victor wrote:


I have the following table
    Column    |  Type   |                            Modifiers
--------------+---------+-----------------------------------------------------------------
 id           | integer | not null default nextval('cachedgroupmembers_id_seq'::regclass)
 groupid      | integer |
 memberid     | integer |
 via          | integer |
 immediatepar | integer |
 disabled     | integer | not null default 0
Indexes:
    "cachedgroupmembers_pkey" PRIMARY KEY, btree (id)
    "cachedgroupmembers2" btree (memberid)
    "cachedgroupmembers3" btree (groupid)
    "disgroumem" btree (groupid, memberid, disabled)

And the sequence for the id column:
CREATE SEQUENCE cachedgroupmembers_id_seq CACHE 1000;

Now the table is empty and I try to insert a row in this table:
INSERT INTO cachedgroupmembers(groupid) values(55);
And it works.

Next I populate the table with approx. 700,000 records.
And I do again: INSERT INTO cachedgroupmembers(groupid) values(55);.
The error is : ERROR:  duplicate key violates unique constraint 
cachedgroupmembers_pkey


Thank you.






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] sequence problem - many rows

2005-11-29 Thread Sterpu Victor

I have the following table
    Column    |  Type   |                            Modifiers
--------------+---------+-----------------------------------------------------------------
 id           | integer | not null default nextval('cachedgroupmembers_id_seq'::regclass)
 groupid      | integer |
 memberid     | integer |
 via          | integer |
 immediatepar | integer |
 disabled     | integer | not null default 0
Indexes:
    "cachedgroupmembers_pkey" PRIMARY KEY, btree (id)
    "cachedgroupmembers2" btree (memberid)
    "cachedgroupmembers3" btree (groupid)
    "disgroumem" btree (groupid, memberid, disabled)

And the sequence for the id column:
CREATE SEQUENCE cachedgroupmembers_id_seq CACHE 1000;

Now the table is empty and I try to insert a row in this table:
INSERT INTO cachedgroupmembers(groupid) values(55);
And it works.

Next I populate the table with approx. 700,000 records.
And I do again: INSERT INTO cachedgroupmembers(groupid) values(55);.
The error is : ERROR:  duplicate key violates unique constraint 
cachedgroupmembers_pkey


Thank you.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] createdb problem

2005-09-24 Thread Sterpu Victor

I use postgresql 7.4.8.
I need to delete and recreate a database very often, because I do some 
tests in it.

After a few months the createdb command became very slow.

How can I make it go fast again?
Thank you.
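One plausible cause (an assumption, not confirmed in the thread): createdb copies template1, and repeatedly creating and dropping databases bloats the system catalogs, which 7.4-era releases did not clean up aggressively on their own. A sketch of the usual remedy:

```sql
-- run while connected to template1 (e.g. psql template1)
VACUUM FULL;
```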


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] concatenate text

2005-09-04 Thread Sterpu Victor

Can I do something like this?
SELECT sum(name) FROM table;

Where name is a text field.
I know 'sum' doesn't work, but is there another solution?
'||' is not good because it takes just two arguments at a time.

Thank you.
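For the record: this is exactly what string_agg (PostgreSQL 9.0+) does; on 8.4 array_to_string(array_agg(...)) works too. "mytable" below stands in for the question's table, since TABLE itself is a reserved word:

```sql
-- 9.0 and later
SELECT string_agg(name, '') FROM mytable;

-- 8.4
SELECT array_to_string(array_agg(name), '') FROM mytable;
```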

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org