Re: [GENERAL] pg on Debian servers

2017-11-12 Thread rob stone


On Sat, 2017-11-11 at 14:30 +0100, Magnus Hagander wrote:
> 
> 
> 
> The init.d script is not used with systemd.
> 
>  
> 
Hello Magnus,

Many months ago, on a bog-standard Debian set-up, I did a re-boot and
ended up with postmasters running for 9.2, 9.4, 9.5 and 9.6, all started
one after the other. There was a script in init.d which read through
/usr/lib/postgresql and started Postgres for each version it found.
Fortunately, all were listening on different ports.

The fix was to disable that script as well as the systemd service.

Doing the upgrade to 10 in a few weeks. Will let you know how it goes.

I assume you are aware of this DSA:-


Debian Security Advisory DSA-4029-1

---

Package: postgresql-common
CVE ID : CVE-2017-8806

It was discovered that the pg_ctlcluster, pg_createcluster and
pg_upgradecluster commands handled symbolic links insecurely which
could result in local denial of service by overwriting arbitrary files.

For the oldstable distribution (jessie), this problem has been fixed
in version 165+deb8u3.

For the stable distribution (stretch), this problem has been fixed in
version 181+deb9u1.


Cheers,
Rob




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg on Debian servers

2017-11-11 Thread rob stone


On Sat, 2017-11-11 at 13:03 +, Mark Morgan Lloyd wrote:
> Apologies for something which is distro related, but I was bitten by
> a 
> "silly mistake"- one of my own, I hasten to say- earlier.
> 
> Several legacy programs written in Delphi ground to a halt this
> morning, 
> which turned out to be because a Debian system had updated its copy
> of 
> PostgreSQL and restarted the server, which broke any live
> connections.
> 
> At least some versions of Delphi, not to mention other IDE/RAD tools 
> with database-aware components, don't automatically try to
> reestablish a 
> database session that's been interrupted. In any event, an
> unexpected 
> server restart (irrespective of all investment in UPSes etc.) has
> the 
> potential of playing havoc on a clustered system.
> 
> Is there any way that either the package maintainer or a site 
> administrator/programmer such as myself can mark the Postgres server 
> packages as "manual upgrade only" or similar? Or since I'm almost 
> certainly not the first person to be bitten by this, is there a 
> preferred hack in mitigation?
> 
> -- 
> Mark Morgan Lloyd
> markMLl .AT. telemetry.co .DOT. uk
> 
> [Opinions above are the author's, not those of his employers or
> colleagues]
> 
> 

Hello Mark,

Probably caused by systemd. You can disable the postgresql service and
rename the script in init.d; you then have to start Postgres via a
shell script.
You can also mark packages to be on "hold", but I don't know exactly
what happens for major version upgrades. The current version is 9, and
when you run an upgrade via apt it will try to install version 10. That
in itself is no big deal, as the binaries end up in different paths,
however libpq will be updated and that may cause a restart. I run
upgrades without any applications running, so I don't know exactly what
could happen when using unattended upgrades.
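
For example, to stop apt touching the server packages until you are
ready (package names are only illustrative -- check what is actually
installed on your box):

apt-mark hold postgresql-9.6 postgresql-client-9.6

and "apt-mark unhold" the same packages when you are ready to upgrade.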

HTH.
Cheers,
Rob 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding 'serial' to existing column

2017-11-03 Thread rob stone


On Fri, 2017-11-03 at 16:47 -0400, Tom Lane wrote:
> Robert Lakes  writes:
> > I am new to Postgres and I am trying to build this SQL statement in
> > my SQL
> > script:
> > ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
> > nextval('tab_id_seq');
> 
> That looks correct, but this not so much:
> 
> > I am trying to build the above-stated command as a dynamic SQL
> > statement:
> >  EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET
> > DEFAULT
> > nextval('||quote_ident(tab_id_seq)||')';
> > I've received an error when running the script this above-stated
> > command:
> >  ERROR:  column "tab_id_seq" does not exist
> > LINE 1: ...OLUMN table_id SET DEFAULT
> > nextval('||quote_ident(tab_id_seq...
> 
> You want quote_literal, not quote_ident, because you're trying to
> produce
> a single-quoted literal.
> 
>   regards, tom lane
> 
> 

You could also do:-

ALTER SEQUENCE tab_id_seq OWNED BY listings_cdc.table_id;

which would establish the one-to-one relationship between the table and
its sequence for that column.
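
For the dynamic statement itself, a sketch of Tom's quote_literal
suggestion (inside the plpgsql block):

EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT nextval(' || quote_literal('tab_id_seq') || ')';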

My 2 cents worth.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 9.6 fails to start on VMWare

2017-10-22 Thread rob stone


On Sun, 2017-10-22 at 15:13 +0100, Martin Moore wrote:
> 2017-10-22 14:08:28 UTC [2479-1] LOG:  0: database system
> shutdown was interrupted; last known up at 2017-10-22 14:07:20 UTC

There is something missing here. The last shutdown was at 2017-10-22
14:07:20 UTC -- on which server?
Then you attempted to start it at 2017-10-22 14:08:28 UTC, one minute
and eight seconds later.
It might also help if you explained exactly how you moved the database
from Google Compute to this VM machine.

Cheers,
robert


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems with the time in data type timestamp without time zone

2017-10-19 Thread rob stone


On Thu, 2017-10-19 at 10:28 -0400, américo bravo astroña wrote:
> We are using two different programs within the same computer, a
> program saves the data with date and time in the DB and we are doing
> another program to retrieve that information with date and time to
> generate reports, all this within the same computer.
> 
> Best regards.
> 
> 2017-10-18 11:33 GMT-04:00 David G. Johnston  com>:
> > On Wed, Oct 18, 2017 at 8:21 AM, américo bravo astroña  > o...@gmail.com> wrote:
> > > Hi,
> > > 
> > > I have a program that saves information in a DB Postgresql need
> > > to extract data from date and time of that DB but when I retrieve
> > > the date and time information is always ahead 3 hours, the type
> > > of data that has that field is timestamp without time zone,
> > > 
> > > Please forgive my english I'm using translator.
> > > 
> > 
> > A minimal SQL example of your problem would help.
> > 
> > David J.
> 
> 


Please do not top post on this list.

Your times are in UTC. You'll need to use the "AT TIME ZONE" construct
to display them  in local time.
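
For example (column, table and time zone names are only placeholders):

SELECT my_timestamp_column AT TIME ZONE 'UTC' AT TIME ZONE 'America/Lima' AS local_time
FROM   my_table;

The first AT TIME ZONE says "treat this naive timestamp as UTC"; the
second converts it to local wall-clock time.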

HTH,

Robert


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] looking for a globally unique row ID

2017-09-16 Thread rob stone


On Sat, 2017-09-16 at 19:18 +0200, Rafal Pietrak wrote:
> Dear robjsarg...@gmail.com,
> 
> 
> I do have 17 "process tables" ... they are "class-B" tables, they
> DONT
> need any hierarchy. One of them contain payment details and has FK do
> a
> document (in one of the 12 tables of "class-A", which are in 6 levels
> of
> hierachy) which this payment covers. They get multiplicated ONLY
> because
> PK in those 12 "class-A" tables must be accessed separately. And
> those I
> have. It goes like this:
> 
> 
Hello Rafal,


I've been trying to follow this discussion but now I'm totally
confused. (Some people might say that this is my normal state.)

However, what do you mean by the following:-

17 "process tables"?

multiplicated -- does this mean replicated?

any of the 12 leaf tables I'm using  -- what is a "leaf" table?

collapse some of the hierarchy at the expense of some columns getting
NULL for certain rows -- does this mean that if you have two input
fields (field A and field B), the data is inserted into one table when
field A is not null and field B is null, and into an entirely different
table when it's the inverse?



IMHO, you need an UML diagram that not only sets out your workflow but
will also provide the basis for your schema.
Keep in mind that a hierarchy can be 'n' tables deep. The foreign keys
point back upwards until you finally reach the parent.

You mention payments being made. Users make mistakes. They can post a
payment to the wrong account and later it has to be reversed. These
things can be modelled via your UML diagram.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to add new Collation language

2017-09-14 Thread rob stone


On Thu, 2017-09-14 at 11:30 +, Rob Northcott wrote:
> How can I add a collation language to a Postgres server?
> Specifically, I want to create a new database with collation of
> English_United Kingdom.1252 but the only options are C, Posix and
> United States.
> Even if I select United Kingdom as the locale when installing
> Postgres I still only have United States.
>  
> Am I missing something or is there a way to add the United Kingdom
> collation?
>  
> This is Postgres 9.6 on Windows server 2012.  The server is Swedish,
> and my user login is set to UK.


Hello,

initdb --locale=en_GB.UTF-8 . . . plus other init options as required.


HTH,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_ident mapping Kerberos Usernames

2017-09-11 Thread rob stone


> > 
> 
> Hi Rob,
> 
> How would that work? I was under the impression the first column was
> for socket type and limited to 
> local, host, hostssl, and hostnossl?
> 
> Thunderbird's config has been fixed, so here is the line from
> pg_hba.conf line without the 
> formatting issues:
> 
> host all all 192.168.1.0/24 gss include_realm=1 map=testnet
> krb_realm=A.DOMAIN.TLD
> 
> 
> Thanks,
> Ryan


Hello Ryan,

I'm probably incorrect about this as I don't use pg_ident, but my
understanding is that each line in pg_ident.conf consists of three
fields:-

map-name external-credentials database-credentials

The map= option on a pg_hba.conf line names one of those maps, so once
a connection matches that pg_hba.conf line (i.e. the external
credentials were submitted from an allowable IP address), the external
log-on is converted to its Postgres log-on via the map.
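
Tying that to the config you posted, my (untested) reading is:

# pg_ident.conf:  map-name  system-username                      pg-username
testnet           /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$             \1

# pg_hba.conf line that refers to that map by name:
host   all   all   192.168.1.0/24   gss   include_realm=1 map=testnet krb_realm=A.DOMAIN.TLD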

Maybe somebody more knowledgeable than myself could provide a better
example.


Cheers,
Rob




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_ident mapping Kerberos Usernames

2017-09-09 Thread rob stone


On Sat, 2017-09-09 at 20:44 -0500, techmail+pg...@dangertoaster.com
wrote:
> Hi,
> 
> I'm trying to get pg_ident to map "user1" and "us...@a.domain.tld"
> to 
> "user1" in postgres, or vice versa. I'm not picky about which way
> works.
> 
> Kerberos authentication works. I've gotten "user1" to login
> successfully 
> with a Kerberos ticket, but I'm not able to get "us...@a.domain.tld"
> to 
> match.
> 
> Environment:
> * PostgreSQL 9.6 from PostgreSQL repos
> * CentOS 7
> * FreeIPA for Kerberos, LDAP, etc.
> * Realm A.DOMAIN.TLD
> * "user1" database exists
> * "user1" role exists
> * Logging into CentOS usernames are configured to drop the domain,
> so 
> they appear as "user1" rather then "us...@a.domain.tld".
> 
> 
> pg_hba.conf:
> 
> local   all             postgres                                peer
> host    all             all             127.0.0.1/32            md5
> host    all             all             ::1/128                 md5
> host    all             all             192.168.1.0/24          gss
> include_realm=1 map=testnet krb_realm=A.DOMAIN.TLD #This is on one
> line. 
> Thunderbird is truncating lines.
> 
> 
> pg_ident.conf:
> 
> testnet        /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$      \1
> testnet        /^([0-9A-Za-z_-]+)$                     \1
> 
> 
> Regex that works for both in regexr.com:
> 
> /^([0-9A-Za-z-_]+)(@A\.DOMAIN\.TLD)?$/gm
> 
> 
> Command and lines from pg_log:
> 
> $ psql -h db0 # Logged in as user1 with Kerberos ticket
> 
> < 2017-09-09 19:50:49.376 CDT - 192.168.1.201 [unknown] > LOG: 
> connection received: host=192.168.1.201 port=44918
> < 2017-09-09 19:50:49.398 CDT - 192.168.1.201 user1 >
> LOG:  connection 
> authorized: user=user1 database=user1
> < 2017-09-09 19:50:50.912 CDT - 192.168.1.201 user1 > LOG: 
> disconnection: session time: 0:00:01.537 user=user1 database=user1 
> host=192.168.1.201 port=44918
> 
> $ psql -h db0 -U us...@a.domain.tld # Logged in as user1 with
> Kerberos 
> ticket
> 
> < 2017-09-09 19:50:54.959 CDT - 192.168.1.201 [unknown] > LOG: 
> connection received: host=192.168.1.201 port=44920
> < 2017-09-09 19:50:55.023 CDT - 192.168.1.201 us...@a.domain.tld >
> LOG: 
> no match in usermap "testnet" for user "us...@a.domain.tld" 
> authenticated as "us...@a.domain.tld"
> < 2017-09-09 19:50:55.023 CDT - 192.168.1.201 us...@a.domain.tld > 
> FATAL:  GSSAPI authentication failed for user "us...@a.domain.tld"
> < 2017-09-09 19:50:55.023 CDT - 192.168.1.201 us...@a.domain.tld > 
> DETAIL:  Connection matched pg_hba.conf line 87: "host   all
>  all 192.168.1.0/24  gss include_realm=1 
> map=testnet krb_realm=A.DOMAIN.TLD"
> 
> 
> Is this something that is possible, or is it something where I need
> to 
> pick one way to do it?
> 
> Thanks in advance,
> Ryan
> 
> 


Hello,
I think you need a line in your pg_hba.conf file along the lines of:-

testnet all all 192.168.1.0/24 gss

as the error message says it can't find this relationship.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function not inserting rows

2017-08-23 Thread rob stone
Hello,

On Wed, 2017-08-23 at 17:23 +0200, Frank Foerster wrote:
> 
> 
> 
> But the created statement looks syntax-wise identical to the pgadmin-
> statement (except for the forced error of course):
> 
> select * from api_dev.add_texts_to_item( 444, array['PGADM1',
> 'PGADM2'] );
> 
> 
> 

I don't use python but the traditional way to call a function is:-

select api_dev.add_texts_to_item( 444, array['PGADM1','PGADM2'] );


HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sequence used on null value or get the max value for a column whith concurrency

2017-08-11 Thread rob stone


On Thu, 2017-08-10 at 19:05 -0300, marcelo wrote:
> In some table, I have a bigint column which at the app level can be 
> null. Call it "DocumentNumber", and of course is not the PK.
> In most cases, the applications give some value to the column.
> 
> But sometimes, the value remains null, expecting the backend or
> someone 
> assign it a unique value.
> 
> Could I use a sequence only when the field arrives to the backend as 
> null? How? Using a triger?
> 
> Alternatively:
> 
> How could I get the max value for the column and increment it by
> one, 
> but with concurrency warranty? Something as a table lock?
> 
> TIA
> 
> 
> 
> 
> 


Hello Marcelo,

I haven't tested this but if you define the column thus:-

document_number bigint default
nextval('my_document_number_sequence'::regclass)

then on insert, if that column is not in the values list, then the next
available number from the sequence will be used.
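
A minimal, untested sketch (the table name is only a placeholder):

CREATE SEQUENCE my_document_number_sequence;

ALTER TABLE my_table
    ALTER COLUMN document_number
    SET DEFAULT nextval('my_document_number_sequence');

-- an INSERT that omits document_number (or supplies DEFAULT) gets the next value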

Remember, that if a rollback occurs, the sequence number is lost
forever.

HTH,
Robert


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid byte sequence for encoding

2017-08-09 Thread rob stone


On Wed, 2017-08-09 at 14:21 +0200, basti wrote:
> Hello,
> i have a webapp convert from ascii to uft8.
> 
> Now I get in postgres
> 
> ERROR:  invalid byte sequence for encoding "UTF8": 0xfc
> 
> Now I try to log all queries with log_statement = 'all'.
> All queries are longed expected this one.
> 
> Is there a way to debug this.
> 
> Best Regards,
> Basti
> 
> 

Hello Basti,

Some more information would be helpful, such as:-
1) Postgres version.
2) The query pulling the error.
3) Table/view definitions.
4) O/S version.
5) How did you actually convert to UTF-8.

Regards,
Robert


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread rob stone


On Wed, 2017-07-26 at 18:31 -0700, Joshua D. Drake wrote:
> On 07/26/2017 05:58 PM, Tim Uckun wrote:
> > I am waiting till pgadmin4 gets a bit faster. It seems kind of slow
> > on 
> > my mac.
> 
> PostgreSQL Studio
> Navicat
> EMS PostgreSQL Manager
> PgAdmin3 LTS (BigSQL)
> 
> JD
> 
> 


If you are a Java person, go to executequery.org and download the jar
file or the Deb package and install it.
It has the advantage of being able to work on multiple RDBMSs at the
same time. All you need are the appropriate JDBC drivers. For example,
you could be connected to a Postgres database and also an Oracle one.

HTH.
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to get transaction started always in WRITE mode.

2017-07-25 Thread rob stone


On Tue, 2017-07-25 at 11:32 +0200, Adam Šlachta wrote:
> Hello,
>  
> In short: Is there any way how to setup PostgreSql 9.6 to always
> start a transaction in WRITE mode?
>  
> Our related configuration:
> "default_transaction_isolation" --> "read committed"
> "default_transaction_read_only" --> "off"
>  
>  
> Longer description (for those who are interested, since it is not
> only PostgreSQL related):
>  
> We are facing problems with "cannot execute  in
> a read-only transaction" exception
> (org.postgresql.util.PSQLException).
> It is very likely the problem is caused by our code, however at the
> moment the fastest solution before we solve the things properly would
> be to setup WRITE mode for all started transactions on a database-
> setup-level.
>  
> SW we use:
> -> Java 8
> -> Hibernate 5.1.2
> -> spring-data-jpa 1.10.4.RELEASE
> -> spring-beans, spring-core, other spring stuff of version
> 4.2.8.RELEASE
>  
> Related configuration (I don't want to spam here with long list of
> configuration files so I pick-up what I consider important):
> Hibernate -> first & second level cache switched OFF
> SessionFactory ->
> org.springframework.orm.hibernate5.LocalSessionFactoryBean
> transactionManager ->
> org.springframework.orm.jpa.JpaTransactionManager
> Spring @Transactional(read-only) hint -> where we could we set it to
> "false"
> Our typical @Repository extends
> org.springframework.data.jpa.repository.JpaRepository, which uses
> implementation from
> org.springframework.data.jpa.repository.support.SimpleJpaRepository.
>  
> Thank you very much for any hints.
> Adam Slachta
> 
> 

Hello Adam,

There is a Hibernate parameter that overrides the database's default
isolation level:-

hibernate.connection.isolation
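
For example, in hibernate.cfg.xml (the value is the java.sql.Connection
constant; 2 = TRANSACTION_READ_COMMITTED):

<property name="hibernate.connection.isolation">2</property>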

Are you certain that second level caching is off?
You can make a class in a cache immutable, e.g. with Hibernate's
@Immutable annotation or mutable="false" in the class mapping.

Don't know if that helps but your problem seems to be a configuration
issue in Hibernate.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backward compatibility

2017-07-22 Thread rob stone


On Sat, 2017-07-22 at 10:49 -0400, Tom Lane wrote:
> Igor Korot  writes:
> > But it works incorrectly - it should return:
> > 9.5.7 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 6.3.1
> > 20161221
> > (Red Hat 6.3.1-1), 64-bit
> > i.e. without the word "PosgreSQL", since '\s' should match the
> > (first)
> > space in the version().
> 
> position() is not a regex operation, it's just a plain substring
> match.
> 
> regression=# SELECT  position( '\s' in version() ) ;
>  position 
> --
> 0
> (1 row)
> 
> You hardly need any flexibility for this anyway, so I'd just do
> 
> regression=# SELECT  position( ' ' in version() ) ;
>  position 
> --
>    11
> (1 row)
> 
> Although possibly what you really want is split_part().
> 
> regression=# select split_part(version(), ' ', 2);
>  split_part 
> 
>  9.5.7
> (1 row)
> 
>   regards, tom lane
> 
> 


An alternative select:-


SELECT version(), (regexp_split_to_array( version(), E'\\s+'))[2]


Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inconsistent performance with LIKE and bind variable on long-lived connection

2017-06-10 Thread rob stone
Hi

On Sat, 2017-06-10 at 09:17 -0700, Steven Grimm wrote:
> On Sat, Jun 10, 2017 at 1:46 AM, Alban Hertroys 
> wrote:
> > I notice that you're declaring your ResultSet variable inside the
> loop, which means that you create and destroy it frequently. I've
> been told that this is a pattern that the GC has trouble keeping up
> with (although that was around the Java 5 era), so you might be
> seeing the effects of memory churn in your client instead of in the
> database.
> 
> Sadly, no, it doesn't help. I'd be pretty surprised if that was it,
> though, given that it consistently happens with the bind variable and
> never happens without; surely the result set's GC behavior would be
> the same in either case?
> 
> For grins, I tried running with the -verbosegc option to log GC
> activity:
> 
> Two bind variables      48  16  13   8   6   5   6   7   5 115 110 109 132 108 110 113 109 113 108 108
> Equality bind variable   5   6   6   5   6   6   6   6   7   6   8   8   5   6   4   5   5   5   5   5
> [GC (Allocation Failure)  65536K->2293K(251392K), 0.0035075 secs]
> LIKE bind variable       5   5   6   5   6   5   5   6   6 110 107 112 116 118 107 112 115 105 104 104
> No bind variables        5   5   4   5   5   4   5   5   4   6   5   6   5   8   4   4   4   4   5   4
> 
> So there's only one GC run and it takes about 3 milliseconds. That
> result is the same whether the ResultSet is declared inside or
> outside the loop.
> 
> -Steve




I ran your test and here are my results:-


:~/softdev/java_apps$ java BindTest
Two bind variables      402  15  13  14  13  13  15  14   9  39  46  45  41  39  41  38  39  38  40  38
Equality bind variable    9  15   9  10  12  13  10  13  10   9   9   9   7   8   7   8   8  10   8   7
LIKE bind variable        9   9   9   8  10   8   8   8   8  40  40  41  37  38  39  39  39  42  38  38
No bind variables        13   9   9   9   9  11  10   8   9   8   9  10   8   9   9   8   9   9   8  12
:~/softdev/java_apps$ java BindTest
Two bind variables       57  17  17  15  15  14  13  14  13  38  47  42  40  39  39  41  37  39  38  37
Equality bind variable    8   9   9  10  14  15  10  13  10  10  10   8   8   9  13   8   9   8   9   8
LIKE bind variable       10   9  11  10   9   8   9   8   8  39  38  43  43  39  39  37  38  38  43  39
No bind variables         9   9  11   9   9   9   8   9   9   8   9  10   9  15  10   9   9   9   9  13



Using Java version 1.8.0_131

postgresql-9.4.1212.jar

PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-
18) 6.3.0 20170516, 64-bit

You'll notice that the first time it ran the database was "cold" and it
took 402, whereas the second time it dropped to 57.

If I have time today I might alter it to use named variables and see
if that makes a difference.


Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT statement with sub-queries

2017-05-28 Thread rob stone
Hello,

On Sun, 2017-05-28 at 20:03 +0300, Michelle Konzack wrote:
> Hello *,
> 
> I try to get columns from my database with a singel SELECT, but I
> stuck.
> 
> I have 3 tables like:
> 
> 
> 1)  categories (serial,cat)
> 
> 2)  manufacturers (serial,m_name)
> 
> 3)  products (serial,category,manufacturer,p_name)
> 
> 
> where the "category" and "manufacturer"
> are numerical IDs from the two tables above.
> 
> So I like to replace the numerical IDs with the appropriated values,
> mean
> 
> SELECT * FROM products WHERE category IN 
> (SELECT categories.cat FROM categories WHERE 
> categories.serial==products.category);
> 
> But I get:
> 
> ERROR:  operator does not exist: integer==integer
> LINE1: ...gories.cat FROM categories
> WHERE  categories.serial==products.category);
>  ^
> Hmm, "serial" is an "integer" and the "category" and "manufacturer"
> too.
> 
> So whats wrong with it?
> 
> 

You only need a single equals sign in SQL.

SELECT * FROM products WHERE category IN 
    (SELECT categories.cat FROM categories WHERE 
    categories.serial = products.category);

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to connect to Postgresql

2017-04-10 Thread rob stone
Hello John,

On Mon, 2017-04-10 at 16:23 -0400, John Iliffe wrote:
> On Monday 10 April 2017 11:53:35 Daniel Verite wrote:
> >   John Iliffe wrote:
> > > So, the problem is resolved, although I have no idea why it was
> > > necessary.
> > 
> 

Good to hear.

> > 
> 
> Yes, I will do that, but there are several hundred PHP web page
> scripts to 
> be updated.  Presumably if one script opens two different databases
> then 
> both of the pg_connect() instances will need to be updated.
> 

Are you saying that every single PHP script has its own separate
routine to connect to the database?

We have just a single class to do connection stuff and the parameters
are held in pg_service.conf. That file has to be readable by Apache.
You just pass in an argument that represents the database to which you
want to connect, and it grabs host, port, etc. out of the service file.
All you have to do is keep the service file up-to-date.
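
A minimal sketch (names are only placeholders; the service file lives
wherever PGSERVICEFILE or the sysconfdir points):

[myapp]
host=localhost
port=5432
dbname=myapp_db
user=webuser

and in the PHP connection class, pg_connect() just takes the service
name as a libpq connection string:

$dbh = pg_connect('service=myapp');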

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to connect to Postgresql

2017-04-09 Thread rob stone


On Sun, 2017-04-09 at 20:09 -0400, John Iliffe wrote:
> 
> > 
> > You have Apache, PHP, and Postgres all running on your LAN at
> > 192.168.1.6.
> > 
> > 
> > You are on 192.168.1.10.
> > 
> > Your NOT doing "php -f testfcgi.php", so how does Apache "know" to
> run
> > that script?
> 
> testfcgi.php is in the document root of the default named virtual
> server.
> 

O.K.

So in sites-available, your conf file just tells Apache to run
testfcgi.php and nothing else?
There is no resolution required by obtaining the IP address from
/etc/hosts.

Are you able to put some trigger_error messages into testfcgi.php in
order to make sure Apache is running the correct program?

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to connect to Postgresql

2017-04-09 Thread rob stone
Hello John,

Just saw this message.


> 
> Still set to the default:
> 
> #listen_addresses = 'localhost' # what IP address(es) to
> listen on;
>     # comma-separated list of 
> addresses;
>     # defaults to 'localhost';
> use '*' 
> for all
>     # (change requires restart)
> #port = 5432    # (change requires restart)
> 
> I did change the Unix domain socket directories:
> 
> #unix_socket_directories = '/tmp'   # comma-separated list of 
> directories
> unix_socket_directories = '/tmp,/var/pgsql' # *changed from
> default
> # 
> 
> 

Your set-up has Apache, PHP and Postgres all running from the same
machine.
So as far as running from there goes, it is "localhost".
There is no requirement to traverse a network. It is all on the same
physical machine.

Alter your postgresql.conf file and remove the hash so that:-

listen_addresses = 'localhost'

is explicitly defined. Alter pg_hba.conf so that localhost is declared
and let's see what happens.
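
For example, a pg_hba.conf entry along these lines should cover local
TCP connections (md5 is only an assumption -- use whichever auth method
you already rely on):

host    all             all             127.0.0.1/32            md5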

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to connect to Postgresql

2017-04-09 Thread rob stone
Hello John,


> > 
> > > PHP and Postgresql are both running on same box.  It does have
> two
> > > interfaces, 192.168.1.6 and 192.168.1.7, and the test programme
> is
> > > available on either.  The reference to 192.168.1.10 is the client
> > > machine, in this case my workstation, which is 192.168.1.10.
> > 
> > Here is the error message you said came from the box running PHP:
> > >> > --
> > >> > [Sun Apr 09 14:08:16.178126 2017] [php7:warn] [pid 24917:tid
> > >> > 139671464015616] [client 192.168.1.10:59260] PHP Warning:
> > >> > pg_connect(): Unable to connect to PostgreSQL server: could
> not
> > >> > connect to server: Connection refused\n\tIs the server running
> on
> > >> > host 192.168.1.6 and accepting\n\tTCP/IP
> connections on
> > >> > port 5432? in
> > >> > /httpd/iliffe/testfcgi.php on line 132
> > >> > ---
> > 
> 

This is a tad confusing to me.

You have Apache, PHP, and Postgres all running on your LAN at
192.168.1.6.


You are on 192.168.1.10.

You're NOT doing "php -f testfcgi.php", so how does Apache "know" to run
that script?

Cheers,
Rob






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A change in the Debian install

2017-04-07 Thread rob stone
Hello Adrian,



On Thu, 2017-04-06 at 21:24 -0700, Adrian Klaver wrote:
> On 04/06/2017 08:01 PM, rob stone wrote:
> > 
> > 
> > 
> That is the default location and is generally the case in a source 
> install. Package maintainers can and do often put them elsewhere.
> AFAIK 
> the Debian/Ubuntu way using postgresql-common is to put them in 
> /etc/postgresql/version/cluster_name/. So on one of my Ubuntu 16.04 
> installs with Postgres 9.6.2:
> 
> aklaver@arkansas:~$ l /etc/postgresql/9.6/main/
> total 56
> drwxr-xr-x 2 postgres postgres  4096 Feb 11 16:23 ./
> drwxr-xr-x 3 postgres postgres  4096 Feb 11 07:15 ../
> -rw-r--r-- 1 postgres postgres   315 Feb 11 07:15 environment
> -rw-r--r-- 1 postgres postgres   143 Feb 11 07:15 pg_ctl.conf
> -rw-r- 1 postgres postgres  4642 Feb 11 16:23 pg_hba.conf
> -rw-r- 1 postgres postgres  1636 Feb 11 07:15 pg_ident.conf
> -rw-r--r-- 1 postgres postgres 22438 Feb 11 16:11 postgresql.conf
> -rw-r--r-- 1 postgres postgres   317 Feb 11 07:15 start.conf
> 
> 
> and PGDATA:
> 
> aklaver@arkansas:~$ sudo ls -al /var/lib/postgresql/9.6/main/
> total 92
> drwx-- 19 postgres postgres 4096 Apr  6 15:54 .
> drwxr-xr-x  3 postgres postgres 4096 Feb 11 07:15 ..
> drwx--  6 postgres postgres 4096 Feb 11 16:06 base
> drwx--  2 postgres postgres 4096 Mar 22 12:22 global
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_clog
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_commit_ts
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_dynshmem
> drwx--  4 postgres postgres 4096 Feb 11 07:15 pg_logical
> drwx--  4 postgres postgres 4096 Feb 11 07:15 pg_multixact 
> drwx--  2 postgres postgres 4096 Mar 22 12:21 pg_notify 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_replslot 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_serial 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_snapshots 
> drwx--  2 postgres postgres 4096 Mar 22 12:21 pg_stat 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_stat_tmp 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_subtrans 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_tblspc 
> drwx--  2 postgres postgres 4096 Feb 11 07:15 pg_twophase 
> -rw---  1 postgres postgres4 Feb 11 07:15 PG_VERSION 
> drwx--  3 postgres postgres 4096 Feb 11 07:15 pg_xlog 
> -rw---  1 postgres postgres   88 Feb 11 07:15
> postgresql.auto.conf 
> -rw---  1 postgres postgres  133 Mar 22 12:21 postmaster.opts 
> -rw---  1 postgres postgres  100 Mar 22 12:21 postmaster.pid
> 

> This is why I am trying to figure out if you are using the Debian
> packaging:
> 
> A) What is your PGDATA?
> 
> B) How postgresql.conf got there?
> 
> The reason it is important is that the next update will probably
> land 
> you back in the same situation that started this thread, unless we 
> figure out what is going on.
> 
> Another question:
> 
> Have you ever installed Postgres on this machine using something
> other 
> then the Debian packages?
> 


Everything is done via a bog standard install from Debian repos.

The installation process creates a path containing these files:-

/etc/postgresql/9.5/main# ls -al
total 56
drwxr-xr-x 2 postgres postgres  4096 Jul  7  2016 .
drwxr-xr-x 3 postgres postgres  4096 Feb  3  2016 ..
-rw-r--r-- 1 postgres postgres   315 Feb  3  2016 environment
-rw-r--r-- 1 postgres postgres   143 Feb  3  2016 pg_ctl.conf
-rw-r- 1 postgres postgres  4641 Feb  3  2016 pg_hba.conf
-rw-r- 1 postgres postgres  1636 Feb  3  2016 pg_ident.conf
-rw-r--r-- 1 postgres postgres 21869 Feb  3  2016 postgresql.conf
-rw-r--r-- 1 postgres postgres   378 Feb  3  2016 start.conf

I can't show you the 9.6 ones as I removed them but they are on a
backup. However, the number of files is exactly as above.

The binaries are installed in /usr/lib/postgresql/VN/bin

You create a new cluster by running initdb:-

initdb -D /path/to/my/new/cluster

That path has to exist and be empty otherwise initdb throws an error
and exits. It then creates the sub-directories for pg_log, etc. and
drops in the three conf files as defaults.
You need to edit those conf files and set the parameters for your site.
E.g., autovacuum, work memory, etc.

Your PGDATA environment variable has to point to:-
/path/to/my/new/cluster
which means you need to set that up accordingly, depending on the
applications accessing that cluster.

You can create multiple clusters all running from the same set of
binaries.
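
Once initdb has finished, a minimal sketch of starting that cluster
with pg_ctl (paths are only illustrative -- adjust to your version and
layout):

export PGDATA=/path/to/my/new/cluster
/usr/lib/postgresql/9.6/bin/pg_ctl -D "$PGDATA" -l "$PGDATA/pg_log/startup.log" start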

I'm quite happy with the way in which the Debian packages are set up as
it means you can have multiple clusters and each can have its own
postgresql.conf file.
For example you might have a production database streaming to a slave,
a playpen database where user

Re: [GENERAL] A change in the Debian install

2017-04-06 Thread rob stone


On Thu, 2017-04-06 at 15:58 -0700, Adrian Klaver wrote:
> On 04/06/2017 03:16 PM, rob stone wrote:
> > 
> > 
> 
> Which is what has me confused. If you are using the postgresql-
> common 
> system then the *.conf files should be in 
> /etc/postgresql/version/cluster_name/.
> 
> Where exactly is PGDATA and why is the *.conf file there?
> 

It's always been in the PGDATA path and it has always worked.
The doco implies that PGDATA is the place for the *.conf files unless
overridden by the config_file parameter.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A change in the Debian install

2017-04-06 Thread rob stone


On Wed, 2017-04-05 at 21:14 -0700, Adrian Klaver wrote:
> 
> What repos are you using, the Debian or the PGDG one?
> 
> I guess the question I should really ask is, are you using a repo or 
> some other method to upgrade?
> 
> 

Upgraded from standard Debian repos.

Nothing else was changed other than the binaries.
It appears that the prior version obtained postgresql.conf from the
PGDATA path, whilst 9.6.2-2 looked in the Postgres /etc path and tried
to use the conf file there, and as a consequence the autovacuum daemon
was not started.
I removed the Postgres /etc conf file and, as it didn't find one there,
it fell back to the PGDATA path and all is well.


> That is packaging dependent. When using the Debian/Ubuntu 
> postgresql-common system the postgresql.conf will be in 
> /etc/postgresql/version/cluster_name/
> 

I'm quite happy with the standard /etc/postgresql/major_version/main
method of installation. It means you can run different major versions
on the same box.


Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] A change in the Debian install

2017-04-05 Thread rob stone
Hello,

Postgres is started via pg_ctl and NOT systemd.
Below are the log entries when version 9.6.2-1 is started.

2017-04-04 07:15:27 AESTLOG:  database system was shut down at 2017-04-
03 13:08:27 AEST
2017-04-04 07:15:28 AESTLOG:  MultiXact member wraparound protections
are now enabled
2017-04-04 07:15:28 AESTLOG:  database system is ready to accept
connections
2017-04-04 07:15:28 AESTLOG:  autovacuum launcher started


Upgraded to version 9.6.2-2 and these are the log entries on start-up:-

2017-04-05 08:03:29 AESTLOG:  test message did not get through on
socket for statistics collector
2017-04-05 08:03:29 AESTLOG:  disabling statistics collector for lack
of working socket
2017-04-05 08:03:29 AESTWARNING:  autovacuum not started because of
misconfiguration
2017-04-05 08:03:29 AESTHINT:  Enable the "track_counts" option.
2017-04-05 08:03:29 AESTLOG:  database system was shut down at 2017-04-
04 13:05:46 AEST
2017-04-05 08:03:30 AESTLOG:  MultiXact member wraparound protections
are now enabled
2017-04-05 08:03:30 AESTLOG:  database system is ready to accept
connections

It is ignoring the PGDATA path and obtaining postgresql.conf from
/etc/postgresql/9.6/main.

Removed all the Postgres conf files from the /etc path and it is back
working as intended.


Is this just something the Debian package maintainers have done or do
we have to alter the start-up scripts to specify the config_file
setting?

The doco says "By default, all three configuration files are stored in
the database cluster's data directory." which IMHO means where PGDATA
is pointing.

Has anybody else struck this issue?

Cheers,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to define the limit length for numeric type?

2017-03-12 Thread rob stone
Hello,

On Sat, 2017-03-11 at 22:14 -0800, vod vos wrote:
> Hi everyone,
> 
> How to define the exact limit length of numeric type? For example, 
> 
> CREATE TABLE test  (id serial, goose numeric(4,1));
> 
> 300.2 and 30.2 can be inserted into COLUMN goose, but I want 30.2 or
> 3.2 can not be inserted, how to do this?
> 
> Thank you.
> 
> 
> 


Assuming that column goose may only contain values ranging from 100.0
to 999.9, then a check constraint along the lines of:-

CHECK (goose > 99.9 AND goose < 1000)

should do the trick.
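
Putting it together with your table (a sketch; untested):

CREATE TABLE test (
    id    serial,
    goose numeric(4,1) CHECK (goose > 99.9 AND goose < 1000)
);

With that in place, 300.2 is accepted while 30.2 or 3.2 are rejected.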

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, apps, special characters and UTF-8 encoding

2017-03-07 Thread rob stone
Hi Ken,

On Tue, 2017-03-07 at 15:20 -0800, Ken Tanzer wrote:
> Hi.  I've got a recurring problem with character encoding for a
> Postgres-based web PHP app, and am hoping someone can clue me in or
> at least point me in the right direction.  I'll confess upfront my
> understanding of encoding issues is extremely limited.  Here goes.
> 
> The app uses a Postgres database, UTF-8 encoded.  Through their
> browsers, users can add and edit records often including text.  Most
> of the time this works fine.  Though sometimes this will fail with
> Postgres complaining, for example, "Could query with ... , The error
> text was: ERROR: invalid byte sequence for encoding "UTF8": 0xe9 0x20
> 0x67"
> 
> So this generally happens when people copy and paste things out of
> their word documents and such.
> 
> As I understand it, those are likely encoded in something non-UTF-8,
> like WIN-1251 or something.  And that one way or another, the
> encoding needs to be translated before it can be placed into the
> database.  I'm not clear how this is supposed to happen though. 
> Automatically by the browser?  Done in the app?  Some other way?  And
> if in the app, how is one supposed to know what the incoming encoding
> is?
> 
> Thanks in advance for any help or pointers.
> 
> Ken
> 
> 
> 

1) Make sure the text editor you use to create your pages, etc. uses
UTF-8 as its encoding. That way the file's BOM is set correctly.
2) Make sure your headers contain the following:-

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

or

<meta charset="UTF-8"> which is HTML5, however the long version is
still recognised by HTML5.

I understand that some versions of IE have issues with correctly
determining the character set, so, unfortunately, you still have to
verify that user input is UTF-8 compatible.
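
A minimal PHP sketch of that server-side check (assuming the mbstring
extension is loaded, and guessing Windows-1252 as the most likely
source encoding):

if (!mb_check_encoding($input, 'UTF-8')) {
    // not valid UTF-8 -- assume it was pasted from a Windows word processor
    $input = mb_convert_encoding($input, 'UTF-8', 'Windows-1252');
}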

HTH,
Rob 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CentOS 7.3, PostgreSQL 9.6.2, PHP 5.4 deliver array as string

2017-03-02 Thread rob stone
Hi Alex,

On Thu, 2017-03-02 at 21:45 +0100, Alexander Farber wrote:
> Good evening!
> 
> I am calling this stored function - 
> 
> CREATE OR REPLACE FUNCTION words_get_board(
>                 in_gid integer
>         ) RETURNS TABLE (
>                 out_bid integer,
>                 out_letters varchar[15][15],
>                 out_values integer[15][15]
>         ) AS
> $func$
>         SELECT 
>                 bid,
>                 letters,
>                 values
>         FROM    words_games
>         WHERE   gid = in_gid;
> 
> $func$ LANGUAGE sql STABLE;
> 
> by a PHP script -
> 
>         $sth = $dbh->prepare(SQL_GET_BOARD);
>         $sth->execute(array($gid));
>         if ($row = $sth->fetch(PDO::FETCH_ASSOC)) {
>                 $bid     = $row['bid'];
>                 $letters = $row['letters'];
>                 $values  = $row['values'];
>         }
> 
> 

Don't use PDO but assuming you want to loop through a result set,
shouldn't this line:-

if ($row = $sth->fetch(PDO::FETCH_ASSOC)) {

be

while ($row = $sth->fetch(PDO::FETCH_ASSOC)) {

Also you need to test if the result set is empty or not.

I don't have an "easy" way to convert Postgres arrays into PHP style
arrays. What I ended up doing was to use the array_to_string function
in the select statement using the underscore character as the delimiter
in the string, and then converting the string to a PHP array via an
explode. However, I've only had to do this with single level arrays.
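
A sketch of that approach for the single-level case, using the column
names from your words_get_board function (a 15x15 array would still
need re-chunking on the PHP side):

SELECT bid, array_to_string(letters, '_') AS letters_str
FROM   words_games
WHERE  gid = ?;

and in PHP:

$letters = explode('_', $row['letters_str']);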

Don't know if this helps as you haven't supplied the table definition
of words_games.

Cheers,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread rob stone
Hello Roberto,
On Fri, 2017-02-10 at 16:43 -0500, Roberto Balarezo wrote:
> Hi Rob,
> 
> Thanks for your answer. The query is just an example I made to
> illustrate the problem. In the database I'm working with, duedate is
> a timestamp without timezone column, which can contain null values.
> The parameter is supposed to be of type DATE. From Java, I'm sending
> a Date object (which contains no timezone information, so the driver
> should not have problem with this). So if the field duedate has a
> null value, a default date with one day added is returned.
> I read that the driver has problems with timestamp columns, because
> it cannot tell the server if it is a timestamp with or without
> timezone, but dates should not present this problem. The server
> should know it is of DATE type.
> 
> PS: I have changed the code of the application to send the value
> (defaultDate + 1 day) calculated in the application and sent this as
> a parameter to make it work, but there are many queries like this and
> I would like to know why it happens and if I can make it work
> changing the query and not the code.
> 

If the column duedate is defined as a timestamp, then setDate is not
the answer. java.sql.Date is just a "date".
java.util.Date is a timestamp object but from 1.8 onwards it is pretty
well deprecated in favour of the Calendar methods. You can of course
have 0:0:0 as the time part.
All I can suggest is a spot of reading the docs about Calendar and
formatting your (defaultDate + 1) as a timestamp.
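
A minimal sketch (assuming defaultDueDate is a java.sql.Date and query
is a PreparedStatement):

java.time.LocalDateTime d = defaultDueDate.toLocalDate().plusDays(1).atStartOfDay();
query.setTimestamp(1, java.sql.Timestamp.valueOf(d));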


HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: Query parameter types not recognized

2017-02-10 Thread rob stone
Hello Roberto,
On Fri, 2017-02-10 at 10:17 -0500, Roberto Balarezo wrote:
> Hi, I would like to know why this is happening and some advice if
> there is a way to solve this problem:
> 
> I have a query like this:
> 
> select COALESCE(duedate, ? + 1) from invoices order by duedate desc
> limit 10;
> where ? is a query parameter. I’m using JDBC to connect to the
> database, and sending parameters like this:
> 
> query.setDate(1, defaultDueDate);
> Where defaultDueDate is a java.sql.Date object. However, when I try
> to execute the query, I get this error:
> 
> org.postgresql.util.PSQLException: ERROR: COALESCE types timestamp
> without time zone and integer cannot be matched
> Why is it inferring that the type is integer, when I send it as
> Date??
> 
> 

A prepared statement's setDate applies the current server timezone to
the value. So, if the database column duedate is of type DATE, it can't
interpret what you are trying to do.

If duedate can be null, then I really don't understand your query. 

HTH.

Rob





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recovery Assistance

2017-01-28 Thread rob stone
Hello Brian,
On Sun, 2017-01-29 at 07:16 +1100, Brian Mills wrote:
> Hi,
> 
> No, it hasn't changed since the first time I looked at it. 
> 
> root@atlassian:/home/tbadmin# ps ax | grep post
>  1364 ?        Ss     0:00 /usr/lib/postfix/master
>  5198 pts/3    S      0:00 su postgres
>  5221 pts/3    S      0:00 /usr/lib/postgresql/9.3/bin/postgres -D
> /etc/postgresql/9.3/main
>  5222 ?        Ss     0:10 postgres: startup process   recovering
> 0001000500A3
> 11161 pts/4    S+     0:00 grep --color=auto post
> 


Does this WAL file exist "0001000500A3"?

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Ajuda com definição (Help with a definition)

2017-01-24 Thread rob stone

On Tue, 2017-01-24 at 17:35 -0200, Márcio A. Sepp wrote:
> Good afternoon,
> 
> 
> 
> I have a case where the table's key column will receive two types of
> information: integers of length 5 and integers of length 7.
> The problem is that if I create the column as an integer, sooner or
> later I run the risk of a PK violation.
> 
> The possible solutions would be to create the column as varchar(7), or
> to add a second column to the key to identify the information.
> 
> Worth considering:
> 99.9% of the rows in this table are of length 7.
> 
> Dutra and the rest of the list, what is the most correct way to model
> this?
> 
> 
> --
> Regards,
> Márcio A. Sepp
> 
> 
> 

Hello Márcio,

Will this PK be used as an FK in other tables?

The solution depends on that.

Regards,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY: row is too big

2017-01-04 Thread rob stone
Hello,
On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote:
> On 01/04/2017 06:54 AM, Pavel Stehule wrote:
> > Hi
> > 
> > 2017-01-04 14:00 GMT+01:00 vod vos  > >:
> > 
> > __
> > Now I am confused about I can create 1100 columns in a table in
> > postgresql, but I can't copy 1100 values into the table. And I
> > really dont want to split the csv file to pieces to avoid
> > mistakes
> > after this action.
> > 
> > 
> > The PostgreSQL limit is "Maximum Columns per Table250 - 1600
> > depending
> > on column types" - this limit is related to placing values or
> > pointers
> > to values to one page (8KB).
> > 
> > You can hit this limit not in CREATE TABLE time, but in INSERT
> > time.
> > 
> > 
> > 
> > I create a table with 1100 columns with data type of varchar,
> > and
> > hope the COPY command will auto transfer the csv data that
> > contains
> > some character and date, most of which are numeric.
> > 
> > 
> > Numeric is expensive type - try to use float instead, maybe double.
> 
> If I am following the OP correctly the table itself has all the
> columns 
> declared as varchar. The data in the CSV file is a mix of text, date
> and 
> numeric, presumably cast to text on entry into the table.
> 
> > 
> > Regards
> > 
> > Pavel
> > 
> > 
> > I use the command: COPY rius FROM "/var/www/test/test.csv" WITH
> > DELIMITER ';' ;
> > 
> > Then it shows:
> > 
> > ERROR:  row is too big: size 11808, maximum size 8160
> > 
> > 


Assuming this is a brand new database instance and not an existing
application, could the OP not compile from source and specify the
 --with-blocksize=16384 so as to overcome the 8k default page size
limit?

My 2 cents.
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [OT] Postgresql and PHP

2016-12-23 Thread rob stone
Hello,
On Fri, 2016-12-23 at 16:44 +0100, Alessandro Baggi wrote:
> Hi list,
> sorry for this OT.
> 
> I have a table on postgresql like this:
> 
> id serial not null,
> srcaddr varchar(16) not null
> 
> I use this table to store ip address. I've used also inet type but 
> changed to see if this solves my problem.
> 
>  From psql, I run:
> 
> select count(*) from bs_ipsource where srcaddr = '192.168.1.1';
> 
> 

I store and retrieve IP Addresses as follows:-

id serial not null,
srcaddr inet not null

select count(*) from bs_ipsource where srcaddr = '192.168.1.1'::inet;

Cast the textual representation to inet.

Never had a problem.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Plpgsql - Custom fields Postgres 9.5

2016-12-14 Thread rob stone

On Wed, 2016-12-14 at 17:00 -0700, David G. Johnston wrote:
> On Wed, Dec 14, 2016 at 4:49 PM, Patrick B 
> wrote:
> > ERROR:  function logextract(integer, integer) does not exist
> > LINE 1: select logextract(20160901,20161001);
> > 
> 
> So change the constants you are passing into your function to text
> (i.e., surrounding them with single quotes) so it matches the new
> function signature.
> 
> There exists an element of understanding the options you are being
> given and adapting if something basic like this is overlooked.
> 
> David J.


1) Have you run a \df+ and made sure the function has been created
correctly?

2) In your first post there is a single apostrophe after the execute
instruction. Can't see the closing apostrophe but then my eyesight is
not the best.

3) I've always found it easier to TO_CHAR a date column when using it
for comparison purposes.
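
For example (table and column names are only placeholders):

SELECT *
FROM   my_log
WHERE  to_char(log_date, 'YYYYMMDD') BETWEEN '20160901' AND '20161001';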

HTH.
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No select privileges when not connecting from login postgres

2016-12-03 Thread rob stone

On Wed, 2016-11-30 at 20:48 -0800, Joseph Brenner wrote:
> I'm trying to get a new build of 9.6.1 working on a machine
> running Debian stable (jessie) and I'm seeing some odd
> behavior where things work correctly if I run psql when
> logged in as postgres, but if I run it as user 'doom' (my
> usual login), I don't seem to have any select privileges.
> Even this fails silenlty:
> 
>   select 'world' as hello;
> 
> But if run logged in as postgres, all is well:
> 
>   sudo su - postgres
>   /usr/local/pgsql/bin/psql --dbname=doom --username=doom
>   doom=#   select 'world' as hello;
> select 'world' as hello;
> hello
>    ---
> world
>    (1 row)
> 
> 

I run Debian testing (stretch/sid).

For years now, the Postgres binaries are stored in:-

/usr/lib/postgresql/<version>/bin

and are root:root owned.

E.g.:-

/usr/lib/postgresql/9.6/bin/psql appears as:-

-rwxr-xr-x 1 root root  580864 Nov  2 21:04 psql

So, it is executable by anybody, which is not an issue with me.

The reason behind this path convention is to enable you to run
different major versions of Postgres on the same computer.

I don't know how you installed Postgres and all I can suggest for
starters is to physically remove it and re-install from the Debian
packages.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using UPDATE ... RETURNING in a custom SQL function, which RETURNS integer

2016-12-02 Thread rob stone


> And finally here is the failing usage of the function :
> 
> words=> SELECT uid FROM words_unban_user(1);
> ERROR:  column "uid" does not exist
> LINE 1: SELECT uid FROM words_unban_user(1);
>    ^
> 
> 
Shouldn't you be doing:-

SELECT words_unban_user(1);




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] About the MONEY type

2016-12-01 Thread rob stone
My two cents . . .
On Wed, 2016-11-30 at 13:35 -0800, John R Pierce wrote:
> On 11/30/2016 12:16 PM, John McKown wrote:
> > Speaking generically, I guess maybe MONEY needs to be somewhat
> > like a TIMESTAMP. At least in PostgreSQL, a TIMESTAMP can contain
> > a TIMEZONE. I guess a MONEY type should contain a modifier
> > identifying the issuer of the currency (E.g. U.S. Dollar vs
> > Canadian Dollar vs. Yen vs. Yuan vs. "precious metal").
> > 
> > 
>  
> and then it would need to be able to convert between all those
> units?    great fun.   it probably needs a time too, as those
> conversion units vary with time.   worse, they vary with where you
> convert the money and which way, and how much the converter
> skims  In the real world,  US$ -> € -> US$  will not give you
> back the same amount.
> 
> 


I don't believe the OP is talking about currency conversions using
exchange rates.

It sounds like he would like a printf style string held in the same
column so that a select of that column would return a string formatted
by the printf style arguments, and presumably any arithmetic operations
would return the correct result. Complicated.

Currently, working in multi-currency environments you need to have
three columns -- one defined as NUMERIC(15,3), another to hold the ISO
currency code, and a date. The date is necessary due to countries
shifting the decimal place leftwards because of inflation. E.g.,
inflation in Venezuela is around 1,500% p.a. at the moment. Ergo, the
paper money becomes worthless and if you are still using computers with
32 bit integers you end up doing addition by hand.
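
A sketch of those three columns (names are only illustrative):

amount        numeric(15,3) NOT NULL,
currency_code character(3)  NOT NULL,  -- ISO 4217 code, e.g. 'USD'
value_date    date          NOT NULL   -- when the amount was recorded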



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Fuzzy" Matches on Nicknames

2016-11-29 Thread rob stone
Hello Michael,
On Tue, 2016-11-29 at 19:10 -0500, Michael Sheaver wrote:
> Greetings,
> 
> I have two tables that are populated using large datasets from
> disparate external systems, and I am trying to match records by
> customer name between these two tables. I do not have any
> authoritative key, such as customerID or nationalID, by which I can
> match them up, and I have found many cases where the same customer
> has different first names in the two datasets. A sampling of the
> differences is as follows:
> 
> Michael <=> Mike
> Tom <=> Thomas
> Liz <=> Elizabeth
> Margaret <=> Maggie
> 
> How can I build a query in PostgreSQL (v. 9.6) that will find
> possible matches like these on nicknames? My initial guess is that I
> would have to either find or build some sort of intermediary table
> that contains associated names like those above. Sometimes though,
> there will be more than matching pairs, like:
> 
> Jim <=> James <=> Jimmy <=> Jimmie
> Bill <=> Will <=> Willie <=> William
> 
> and so forth.
> 
> Has anyone used or developed PostgreSQL queries that will find
> matches like these? I am running all my database queries. on my local
> laptops (Win7 and macOS), so performance or uptime is no issue here.
> I am curious to see how others in this community have creatively
> solved this common problem.
> 
> One of the PostgreSQL dictionaries (synonym, thesaurus etc.) might
> work here, but honestly I am clueless as to how to set this up or use
> it in queries successfully.
> 
> Thanks,
> Michael (aka Mike, aka Mikey)
> 

Check out chapter F.15 (fuzzystrmatch) in the docs.
Try the double metaphone functions.
I worked on something similar many years ago, cleaning up input created
by data entry clerks from hand-written speeding tickets so as to match it
with "trusted" data held in a database.
As the volume of input was small in comparison with the number of
licensed drivers, we could iterate over it again and again trying to match
it up.
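
For example, a rough sketch using the fuzzystrmatch extension (dataset_a,
dataset_b and the column names are invented for illustration):

CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

SELECT a.customer_name,
       b.customer_name
FROM   dataset_a AS a
JOIN   dataset_b AS b
       ON dmetaphone(a.first_name) = dmetaphone(b.first_name)
      AND a.surname = b.surname;

Double metaphone catches variants that sound alike; pairs like
Liz/Elizabeth do not sound alike, so for those you would still need the
nickname lookup table you mentioned.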

HTH.
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query locks up when run concurrently

2016-11-24 Thread rob stone

On Thu, 2016-11-24 at 22:23 +0100, azhwkd wrote:
> It should not be possible because a group does not return to the
> update pool before the update hasn't finished.
> I watched the queries in a postgres client and there was no overlap I
> could see.
> I don't really know what to make from this behavior, sometimes when I
> start the application a few updates go through and eventually it will
> lock up completely and sometimes it locks up immediately - always
> with
> heap_hot_search_buffer using ~20 of all CPU time on the system.
> 
> 
Hello Sebastian,
You stated that the application is written using go (www.golang.org).
Are you able to run the application with the -race flag?

HTH,
Rob

From Wikipedia:-

Lack of race condition safety

There are no restrictions on how goroutines access shared data, making
race conditions possible. Specifically, unless a program explicitly
synchronizes via channels or other means, writes from one goroutine
might be partly, entirely, or not at all visible to another, often with
no guarantees about ordering of writes. Furthermore, Go's internal data
structures like interface values, slice headers, hash tables, and
string headers are not immune to race conditions, so type and memory
safety can be violated in multithreaded programs that modify shared
instances of those types without synchronization.

Instead of language support, safe concurrent programming thus relies on
conventions; for example, Chisnall recommends an idiom called "aliases
xor mutable", meaning that passing a mutable value (or pointer) over a
channel signals a transfer of ownership over the value to its receiver.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help with moving tablespace

2016-11-17 Thread rob stone

> Bonus question: I found an ER diagram of some of the pg_* tables at
> http://www.slideshare.net/oddbjorn/Get-to-know-PostgreSQL. Is there an
> ERD of all of them so a person can better understand how to use them
> when one must? I suppose the same question applies to
> information_schema since I probably should be using that over the
> pg_* tables when possible (and as the above example shows, sometimes
> you have to go look at the pg_* tables).
> 
> Thanks!
> Kevin
> 
> 

Hello,

ExecuteQuery has an ER diagram tool. You can download the jar file from
www.executequery.org and obtain the JDBC driver from the Postgres site.
You set up separate connections to all databases that you wish to
access.
It generates the ER diagram but prior to printing it you need to drag
and drop the "boxes" around to make it readable. I have not tried it
(yet) over information_schema.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Import data from MS SQL Server 2014 to Postgresql 9.6 using dbi-link and fdw (error: utf-8/uft-16)

2016-11-10 Thread rob stone

On Thu, 2016-11-10 at 05:34 -0500, Juliano wrote:
> Hi,
>  
> I'm trying to import some data from a MS SQL Server 2014 sequential
> database to Postgresql using dbi-link.
>  
> Postgresql 9.6 encoding is utf-8 and does not support utf-16 but, I
> need to transfer this data to postgres.
>  
> I also tried to use tds_fdw version 1.0.8 and the same problem
> occurs.
>  
> ERROR: DB-Library error: DB #: 100, DB Msg: TDS version downgraded to
> 7.1!, OS #: 0, OS Msg: Success, Level: 1
>  
> ** Error **
>  
> ERROR: DB-Library error: DB #: 100, DB Msg: TDS version downgraded to
> 7.1!, OS #: 0, OS Msg: Success, Level: 1
> SQL state: HV00L
>  
> Please help me.
> 


The error code HV00L means "unable to create execution".

This appears to have been fixed. See:-

https://github.com/tds-fdw/tds_fdw/issues/83


However, I don't follow "how" the UTF-16 data is being transformed to
UTF-8. I know that I'm thick (at times). If the SQL Server data is in
Simplified Chinese (say), how do you convert it?

Postgres handles some multi-byte encodings:-

WIN932 --> SJIS
WIN936 --> GBK
WIN949 --> UHC
WIN950 --> BIG5

Those are client-side encodings only, so you run createdb specifying a
suitable server encoding (e.g. UTF8 or EUC_CN), lc_collate, etc., and set
client_encoding to one of the above to match the incoming data.


HTH,

Rob 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Locking question

2016-10-26 Thread rob stone

On Wed, 2016-10-26 at 12:18 +0200, Frank Millman wrote:
>  
> From: hubert depesz lubaczewski
> Sent: Wednesday, October 26, 2016 10:46 AM
> To: Frank Millman
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Locking question
>  
> On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> > Hi all
> > > 
> > > I am designing an inventory application, and I want to ensure
> that the stock level of any item cannot go negative.
> > > 
> [...]
>  
> 1. I am writing my application to support 3 databases – PostgreSQL,
> sqlite3, and MS SQL Server. Because they are all so different when it
> comes to triggers and procedures, I am trying to avoid using them,
> and do as much within the application as possible.
>  
> 2. I think you are suggesting maintaining a ‘balance’ column on
> inv_rec. This raises the question of whether or when you should
> create and maintain a column if the same information could be derived
> from other sources. I realise that this is a judgement call, and
> sometimes I struggle to get the balance right. Is this a situation
> where people would agree that it is warranted?
>  
> I would still appreciate some feedback as to whether my proposed
> solution would work.
>  
> Thanks
>  
> Frank
>  

Hello Frank,


Stock on-hand quantity = goods inwards +
                         goods returned from customers -
                         goods invoiced -
                         goods returned to suppliers due to defects.

Available stock quantity = stock on-hand -
                           goods ordered.

The hassle you have in a multi-user environment is when User "A" takes
an order from Customer "A" for Product "A" and at the same time User
"B" takes an order from Customer "B" also for Product "A".
Both users will quote the same value for quantity available (or
quantity on-hand if you don't keep track of orders).
Most companies rank their customers according to internal policy. For
example, customers who pay their bills late could be given a lower rank
than those that pay on time. Some customers won't accept partial
deliveries. Etc.
So, if you just take orders you can run a batch process to convert
orders into invoices and read your orders according to customer ranking
criteria.
By running a batch process, with a suitable "locking" mechanism to
avoid the process being run twice at the same time, the maintenance of
stock on-hand quantities happens in a single place, so you won't over-ship
and the quantity won't become negative.

I think this method will work on all three databases cited.
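
If you also want the database itself to refuse to go negative, a single
conditional UPDATE is portable across all three engines. A sketch with
invented names (stock, product_id, on_hand):

UPDATE stock
   SET on_hand = on_hand - 5        -- quantity being shipped
 WHERE product_id = 1234
   AND on_hand >= 5;                -- will not drive the quantity negative

If the reported row count is 0, there was not enough stock and the order
line stays unfilled.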

HTH,
Rob 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange? BETWEEN behaviour.

2016-10-20 Thread rob stone

On Thu, 2016-10-20 at 13:51 +0200, Bjørn T Johansen wrote:
> I have the following SQL:
> 
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM.YYYY HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.YYYY HH24:MI:SS')
> 
> date is of type timestamp.
> 
> I was expecting to get all the records that had datepart = 20.10.2016
> but I am not getting that..
> 
> What am I missing?
> 
> 
> Regards,
> 
> BTJ
> 
> 


I assume that you can use full stops as date separators.

Anyway, this will work:-


SELECT * from table WHERE TO_CHAR(date,'YYYYMMDD') = '20161020';
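
If the column is indexed, a half-open range avoids wrapping the column in
a function and lets the planner use that index (my_table stands in for the
real table name):

SELECT * FROM my_table
WHERE date >= DATE '2016-10-20'
  AND date <  DATE '2016-10-21';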


HTH,
Rob 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question on replace function

2016-09-25 Thread rob stone

On Sun, 2016-09-25 at 10:29 +0200, Charles Clavadetscher wrote:
> Hello
> 
> I am using PostgreSQL 9.4.7 on x86_64-unknown-linux-gnu, compiled by
> gcc 
> (Debian 4.7.2-5) 4.7.2, 64-bit
> 
> I imported data from a MariaDB table into PostgreSQL and noticed
> that 
> the content of a field was not correct, but I was not able to change
> it. 
> The field is called vcard and is of datatye text.
> 
> The structure of the table:
> 
> roundcubemail=# \d contacts
>                           Table "public.contacts"
>    Column   |           Type           |                     Modifiers
> ------------+--------------------------+-------------------------------------------------------------
>  contact_id | integer                  | not null default nextval(('contacts_seq'::text)::regclass)
>  changed    | timestamp with time zone | not null default now()
>  del        | smallint                 | not null default 0
>  name       | character varying(128)   | not null default ''::character varying
>  email      | text                     | not null default ''::text
>  firstname  | character varying(128)   | not null default ''::character varying
>  surname    | character varying(128)   | not null default ''::character varying
>  vcard      | text                     |
>  words      | text                     |
>  user_id    | integer                  | not null
> 
> The content of vcard looks as follows (replaced real names with 
> placeholders):
> 
> BEGIN:VCARD\r\\rVERSION:3.0\r\\rN:;Firstname Lastname 
> ;;;\r\\rFN:Firstname 
> Lastname\r\\rEMAIL;TYPE=INTERNET;TYPE=OTHER:email\r\\rEND:VCARD
> 
> My target is to replace all occurences of '\r\\r' with E'\r\n' to
> comply 
> with RFC 6350.
> 
> I tried using the function replace and I am faced with a strange 
> behaviour. If I use the function with a string as shown above I get
> the 
> expected result:
> 
> select replace('BEGIN:VCARD\r\\rVERSION:3.0\r\\rN:;Firstname Lastname 
> ;;;\r\\rFN:Firstname 
> Lastname\r\\rEMAIL;TYPE=INTERNET;TYPE=OTHER:email\r\\rEND:VCARD','\r\
> \r',E'\r\n') 
> ;
> 
>  replace
> 
>   BEGIN:VCARD\r +
>   VERSION:3.0\r +
>   N:;Firstname Lastname ;;;\r   +
>   FN:Firstname Lastname\r   +
>   EMAIL;TYPE=INTERNET;TYPE=OTHER:email\r+
>   END:VCARD
> (1 row)
> 
> However, if I use the function on the vcard field nothing is
> replaced:
> 
> select replace(vcard,'\r\\r',E'\r\n') from contacts;
> 
>  
> replace
> ---
> ---
> 
>   BEGIN:VCARD\r\\rVERSION:3.0\r\\rN:;Firstname Lastname 
> ;;;\r\\rFN:Firstname 
> Lastname\r\\rEMAIL;TYPE=INTERNET;TYPE=OTHER:email\r\\rEND:VCARD
> (1 row)
> 
> Does anybody have an idea what I am doing wrong?
> Thank you for your help.
> 
> Charles
> 
> -- 
> Swiss PostgreSQL Users Group
> c/o Charles Clavadetscher
> Treasurer
> Motorenstrasse 18
> CH – 8005 Zürich
> 
> http://www.swisspug.org
> 
> +---+
> > 
> >     __  ___   |
> >  /)/  \/   \  |
> > ( / ___\) |
> >  \(/ o)  ( o)   ) |
> >   \_  (_  )   \ ) _/  |
> > \  /\_/\)/|
> >  \/ |
> >   _|  |   |
> >   \|_/|
> >   |
> > PostgreSQL 1996-2016  |
> >  20 Years of Success  |
> >   |
> +---+
> 
> 



Tested this on 9.6beta3 on a test database and it appears to work fine.

Inserted one row.

dinkumerp=> select * from contacts;
LOG:  duration: 0.571 ms  statement: select * from contacts;
 contact_id | 1
 changed    | 2016-09-25 21:30:54.788442+10
 del        | 0
 name       |
 email      |
 firstname  |
 surname    |
 vcard      | BEGIN:VCARD\r\\rVERSION:3.0\r\\rN:;Firstname Lastname ;;;\r\\rFN:Firstname Lastname\r\\rEMAIL;TYPE=INTERNET;TYPE=OTHER:email\r\\rEND:VCARD
 words      |
 user_id    |
(1 row)

Replace select.

dinkumerp=> select replace(vcard,'\r\\r',E'\r\n') from contacts;
LOG:  duration: 0.400 ms  statement: select
replace(vcard,'\r\\r',E'\r\n') from contacts;
replace 

 BEGIN:VCARD\r +
 VERSION:3.0\r +
 N:;Firstname Lastname 

Re: [GENERAL] Passing varchar parameter to INTERVAL

2016-09-07 Thread rob stone
Hello,
On Wed, 2016-09-07 at 15:05 +0200, Alexander Farber wrote:
> Good afternoon,
> 
> when trying to create a custom function to temporary ban a user:
> 
> CREATE OR REPLACE FUNCTION words_ban_user(
>     IN in_uid integer,
>     IN in_until varchar,    -- '1 week' OR '1 month' OR '1 year'
>     IN in_reason varchar)
>     RETURNS void AS
> $func$
> BEGIN
> 
>     UPDATE words_users SET
>     banned_until  = CURRENT_TIMESTAMP + INTERVAL
> in_until,
>     banned_reason = in_reason,
>     vip_until = vip_until + INTERVAL in_until,    --
> for paying user
>     grand_until   = grand_until + INTERVAL in_until
>     WHERE uid = in_uid;
> 
> END
> $func$ LANGUAGE plpgsql;
> 
> in 9.5.4 I unfortunately get the error:
> 
> ERROR:  syntax error at or near "in_until"
> LINE 69: ...   banned_until = CURRENT_TIMESTAMP + INTERVAL
> in_until,
>    ^
> Is there please a better way here?
> 
> Thank you
> Alex
> 


The interval value needs to be a quoted literal; you can't put the
INTERVAL keyword in front of a parameter name like that.
In any event I always use the ISO 8601 P/T syntax: 'P1D' means one day,
'PT2H' two hours, etc.
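
One sketch of a fix, assuming the same function signature, is to cast the
varchar parameter to interval instead:

UPDATE words_users SET
    banned_until  = CURRENT_TIMESTAMP + in_until::interval,
    banned_reason = in_reason,
    vip_until     = vip_until   + in_until::interval,
    grand_until   = grand_until + in_until::interval
WHERE uid = in_uid;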

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgadmin4 rc1 query tool performance

2016-09-07 Thread rob stone
Hello,
On Tue, 2016-09-06 at 18:40 +0200, Attila Soki wrote:
> > 
> > On 06.09.2016 at 15:23, Adrian Klaver wrote:
> > 
> > On 09/06/2016 02:35 AM, Attila Soki wrote:
> > > 
> > > Hi,
> > > 
> > > i testing the latest release of pgadmin4 (rc1) and noticed that
> > > the query tool is significantly slower than the query tool in
> > > pgadmin3.
> > > i am not sure if this occurs only on my computer or only under os
> > > x (10.10.5) or is this a known behavior.
> > 
> > I don't know. You might get an answer quicker at the link below
> > though:
> > 
> > https://www.postgresql.org/list/pgadmin-hackers/
> 
> sorry for the noise, i posted this accidentally here instead of
> pgadmin-support. i got already an answer there.
> 
> Attila
> 

IMO the answer is nonsensical.

I'd do the following:-

1) Alter your postgresql.conf to log every statement (see the snippet
after this list).
2) Restart the server.
3) Run your tests again.
4) Examine the postgresql log file for the queries generated and compare
them. Also, pgadmin might be querying the catalogues in an inefficient
manner, thus leading to the increase in elapsed times.
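
For step 1, the relevant postgresql.conf settings are roughly:

# log every statement together with its duration
log_statement = 'all'
log_min_duration_statement = 0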

My 2 cents worth.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restricted access on DataBases

2016-09-05 Thread rob stone
Hello,
On Mon, 2016-09-05 at 14:45 +0200, Durumdara wrote:
> Dear PG-masters!
> 
> We want to put more databases to one server, to "public" schema:
> DB_A, DB_B, DB_C.
> And users:
> US_A, US_B, US_C, and Main_Admin.
> We want to setup the environment.
> Every simple user can access his database:
> DB_A - US_A
> DB_B - US_B
> DB_C - US_C
> 
> They can't access other databases only theirs.
> 
> Main_Admin can access all databases.
> 
> I'm not sure how to do it perfectly.
> We tried to remove "public" role, and add US_A to DB_A.
> But the subobjects (table named "teszt") aren't accessable.
> 
> I can reown DB_A to US_A, but this revoke all rights from Main_Admin.
> 
> What is the simple way to we can avoid the access from another users,
> but give needed rights to DB_[n] and Main_Admin? (Tables, Sequences,
> etc).
> 

I'm not a pg_master.

Do you mean multiple databases or multiple schemas?
If you have multiple databases then main_admin would have to connect
and disconnect over and over in order to look at each database.

If you are setting this up for students to learn RDBMS skills, then
wouldn't you be better off with a single database instance containing
multiple schemas?

Create all the roles with the necessary privileges, then:-

CREATE SCHEMA db_a AUTHORIZATION us_a;

GRANT ALL ON ALL TABLES IN SCHEMA db_a TO main_admin;

Repeat as necessary for each schema/role combination.

Then main_admin would have to prefix all tables, sequences, etc. with
the schema name in order to run queries, etc.
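
If you do stay with separate databases, the database-level equivalent is
to take away the default PUBLIC connect privilege. A sketch for one
database/user pair:

REVOKE CONNECT ON DATABASE db_a FROM PUBLIC;
GRANT CONNECT ON DATABASE db_a TO us_a, main_admin;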


HTH,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread rob stone

On Thu, 2016-09-01 at 12:28 +0300, Michelle Konzack wrote:
> Hello to all,
> 
> after a period of silence from Debian, Courier, PHP and PostgreSQL
> I  am
> half back and running into a problem...  :-/
> 
> I need a table with an UNIQUE CustomerID which is working fine...
> 
> ...BUT I need also a second column with a count,
> which  must  be  UNIQUE
> inside the CustomerID.
> 
> In clear this:
> 
> CustID   Count
>  1   1
>  1   2
>  1   3
> 
>  2   1
>  2   2
> 
>  3   1
>  3   2
>  3   3
>  ...
> 
> How to do this?
> 
> Thanks in avance
> 
> -- 
> Michelle KonzackITSystems
> GNU/Linux Developer 0033-6-61925193
> 
> 


Hi,

(I wouldn't name a column "count").


SELECT MAX(count_er) FROM my_customer_table WHERE cust_id = $1;

Add one to the result. Do the INSERT.

Note that MAX is an aggregate, so it returns NULL (not zero) if the
cust_id doesn't exist yet; wrap it in COALESCE(..., 0) before adding one.
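
A sketch of it as a single statement (same table and column names as
above, with $1 as the customer id parameter); a unique constraint on
(cust_id, count_er) is still advisable because two sessions doing this at
the same time can compute the same value:

INSERT INTO my_customer_table (cust_id, count_er)
SELECT $1, COALESCE(MAX(count_er), 0) + 1
FROM my_customer_table
WHERE cust_id = $1;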

HTH,

Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] endash not a graphic character?

2016-08-20 Thread rob stone
Hello Bruno,
On Sat, 2016-08-20 at 14:04 -0500, Bruno Wolff III wrote:
> I was surprised to find endash and emdash were not graphic characters
> in 
> en_US. I'm not sure if this is correct behavior, a bug in postgres or
> a 
> bug in my OS' collation definitions?
> 
> For example:
> 
> Dash:
> area=> select '-' ~ '[[:graph:]]' collate "en_US";
>  ?column? 
> --
>  t
> (1 row)
> 
> Endash:
> area=> select '–' ~ '[[:graph:]]' collate "en_US";
>  ?column? 
> --
>  f
> (1 row)
> 
> 
> Emdash:
> area=> select '—' ~ '[[:graph:]]' collate "en_US";
>  ?column? 
> --
>  f
> (1 row)
> 
> 



You can't use &mdash; (emdash) or &ndash; (endash)?
Or their hex equivalents. See the Unicode chart.

HTH,
rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.2 to 9.5 pg_upgrade losing data

2016-08-15 Thread rob stone
Hello,
On Mon, 2016-08-15 at 08:45 -0700, Adrian Klaver wrote:
> On 08/15/2016 08:04 AM, Pete Fuller wrote:
> > 
> > We have not found any obvious differences, other than the size of
> > the
> > databases (~ 100 gig on the testing site vs 400 Gig in
> > production).  In
> > fact, the upgrade path seems to work with our production db when
> > testing
> > on our offsite replica that lives in our backup datacenter, near
> > identical hardware and setup with same install scripts.
> > 
> 
> Hmm, tried another cup of coffee, that did not work, I got nothing
> at 
> the moment:(
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 


Assuming you have enough spare disk capacity, have you not thought of
going from 9.2 to 9.3 and from there to 9.4, etc., instead of doing it
in one giant leap?

If the upgrade to 9.3 does not cause any hassles, you could at least
put that into production and then test out the 9.3 to 9.4 upgrade, and
so on.

Just my two cents worth.

Cheers,
rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] permissions PostgreSQL 9.5

2016-08-09 Thread rob stone

On Wed, 2016-08-10 at 13:33 +1200, Patrick B wrote:
> hi guys,
> 
> 
> just setting up a new DB using PostgreSQL 9.5.
> 
> 
> I've created a new username for the code, called codeuser.
> 
> To give the username access to all the tables, views, etc I ran:
> 
> > GRANT INSERT, SELECT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public
> > TO codeuser;
> 
> Is that ok? Is that enough? 
> 
> Thanks 

Patrick,

You also need to REVOKE ALL ON SCHEMA public FROM PUBLIC; (the
privileges that exist at schema level are CREATE and USAGE).

See section 5.8.4 in the docs.

You are much better off doing:-

CREATE ROLE codeuser . .. etc.

CREATE SCHEMA mynewapp AUTHORIZATION codeuser;

Then only codeuser can create tables etc. in that schema.
Set the search path in postgresql.conf to include the new schema.
You need to connect as codeuser.
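
An alternative to editing postgresql.conf is to attach the search path to
the role itself, for example:

ALTER ROLE codeuser SET search_path = mynewapp, public;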

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.6beta3

2016-07-31 Thread rob stone
On Fri, 2016-07-29 at 12:15 -0400, Tom Lane wrote:
> rob stone <floripa...@gmail.com> writes:
> > So, could somebody tell me if the 9.6beta1 version of initdb
> > contained
> > an incorrect version constant?
> 
> It did, see
> 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=99dd8b05aa5647a59f30ca67e67e2e3377f50094
> 
> You would have had to do an initdb or pg_upgrade for the new version
> anyway because of other catalog changes.
> 
>   regards, tom lane


Thanks for that. Back to the drawing board. Only testing the beta
versions so no real harm done.

Cheers,
rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 9.6beta3

2016-07-29 Thread rob stone
Hi,
Some weeks ago I downloaded 9.6beta1 source, compiled it and used
initdb to create a cluster, then psql to create roles, schema, etc.,
etc. I loaded some data into tables using COPY FROM.

Today I downloaded the 9.6beta3 source, compiled it and tried to start
postgres using pg_ctl but pulled the following error:-

"The database cluster was initialized with PG_CONTROL_VERSION 942, but
the server was compiled with PG_CONTROL_VERSION 960."

So, could somebody tell me if the 9.6beta1 version of initdb contained
an incorrect version constant?

PGDATA is definitely pointing to the correct path.

Thanks,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it possible to control the location of the lock file when starting postgres?

2016-07-20 Thread rob stone
On Thu, 2016-07-21 at 00:12 +, Steve Langlois wrote:
> 
> > On Jul 20, 2016, at 7:48 PM, Steve Langlois wrote:
> > 
> > > you never did answer my previous question, why are you messing
> > > about with manually starting postgres from the wrong user
> > > account, when it should be run as a system service with systemctl
> > > ?
> > I am upgrading an existing system running CentOS 5.6 with Postgres
> > 8.2.5 to CentOS 7 with 9.2.15. The original system modified the
> > postgresql script to manually running postmaster to start the
> > database under the current user control. So it is really for
> > compatibility with the rest of the code. 
> > 
> > If local is used for unix domain socket connections do I change --
> > auth to --auth-local=ident for initdb?
> > 
> 
> I tried the following:
> 
> /usr/bin/initdb --pgdata=/usr/test/databases/pgsql1/data --auth-
> local=trust
> 
> changed unix_socket_directories = '/tmp’ in postgresql.conf.
> 
> Then ran:
> 
> /usr/bin/postgres -k /tmp -D /usr/tavve/databases/pgsql1/data &
> 
> Contents of pg_hba.conf is 
> 
> # TYPE  DATABASE        USER            ADDRESS                
> METHOD
> 
> # "local" is for Unix domain socket connections only
> local   all             all                                     trust
> # IPv4 local connections:
> host    all             all             127.0.0.1/32            trust
> # IPv6 local connections:
> host    all             all             ::1/128                 trust
> # Allow replication connections from localhost, by a user with the
> # replication privilege.
> #local   replication     slanglois                              
>  trust
> #host    replication     slanglois        127.0.0.1/32          
>  trust
> #host    replication     slanglois        ::1/128                
> trust
> 
> Then I try to run psql and 
> 
> [20:01:11 slanglois@tron bin]$ psql
> psql: could not connect to server: No such file or directory
> Is the server running locally and accepting
> connections on Unix domain socket
> "/var/run/postgresql/.s.PGSQL.5432”?
> 
> I would expect local all all trust to allow me to connect.



On my testing box this is my pg_hba.conf

local   all             all                             trust
host    all             all             127.0.0.1/32    trust
host    all             all             ::1/128         trust

"local" is for psql, etc.
"host" is for JDBC connections

Connection and authentication settings in postgresql.conf

listen_addresses = 'localhost'
port = 5435
max_connections = 100
unix_socket_directories = '/var/run/postgresql'
#unix_socket_group = ''
unix_socket_permissions = 0770 


(I'm using port 5435 for 9.6beta testing)

psql does not start the database engine.

Whichever method you use to start it, that user (usually postgres) has
to own the /var/run/postgresql path and have write permissions.

E.g.:-

-rw-r--r-- 1 postgres postgres 5 Jul 21 03:57 9.6-main.pid


A similar set-up is used for 9.4 and 9.5. I haven't used 9.2 for some
years now, and as John pointed out, that version is nearing the end of
its maintenance cycle. However, I believe a similar set-up was used.
This is running on Debian.

HTH,

Rob




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving from PHP to Java: A result was returned when none was expected.

2016-06-16 Thread rob stone
On Wed, 2016-06-15 at 15:56 +0200, Alexander Farber wrote:
> Good afternoon,
> 
> at PostgreSQL 9.5.3 I have a stored function (full source code below)
> returning void, which I successfully call with PHP:
> 
> function skipGame($dbh, $uid, $gid) {
>     $sth = $dbh->prepare('SELECT words_skip_game(?, ?)');
>     $sth->execute(array($uid, $gid));
> }
> 
> Now I am trying to call the same function through JDBC driver
> 9.4.1208.jre7:
> 
>     private static final String SQL_SKIP_GAME   = 
>     "SELECT words_skip_game(?, ?)";
> 
>     try (PreparedStatement st =
> mDatabase.prepareStatement(SQL_SKIP_GAME)) {
>     st.setInt(1, mUid);
>     st.setInt(2, gid);
>     st.executeUpdate();
>     }
> 
> and sadly get the SQLException "A result was returned when none was
> expected.".
> 
> Shouldn't I call executeUpdate() method here - according to the doc
> https://www.postgresql.org/docs/7.4/static/jdbc-callproc.html  ?
> 
> Below is the stored procedure, thank you for any hints.
> Alex
> 
> CREATE OR REPLACE FUNCTION words_skip_game(
>     IN in_uid integer,
>     IN in_gid integer)
>     RETURNS void AS
> $func$
> BEGIN
>     UPDATE words_games
>     SET played1 = CURRENT_TIMESTAMP
>     WHERE gid = in_gid 
>     AND player1 = in_uid
>     /* and it is first player's turn */
>     AND (played1 IS NULL OR played1 < played2);
> 
>     IF NOT FOUND THEN
>     UPDATE words_games
>     SET played2 = CURRENT_TIMESTAMP
>     WHERE gid = in_gid 
>     AND player2 = in_uid
>     /* and it is second player's turn */
>     AND (played2 IS NULL OR played2 < played1);
>     END IF; 
> END
> $func$ LANGUAGE plpgsql;
> 

Aren't you supposed to use prepareCall?

Also 7.4 documentation is a tad out of date but the method is the same.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index seems "lost" after consecutive deletes

2016-06-13 Thread rob stone
On Mon, 2016-06-13 at 22:41 -0300, Edson Richter wrote:
> Em 13/06/2016 22:33, Edson Richter escreveu:
> > I've a table "A" with 4,000,000 records.
> > 
> > I've decided to delete records from oldest to newest but I can't 
> > delete records that have references in tables "B", "C" or "D".
> > 
> > 
> > so, I've
> > 
> > 
> > with qry as (
> > 
> > select A.id
> > 
> >   from A
> > 
> > where not exists (select 1 from B where B.a_id = A.id)
> > 
> >    and not exists (select 1 from C where C.a_id = A.id)
> > 
> >    and not exists (select 1 from D where D.a_id = A.id)
> > 
> >    and A.creation_date < (now()::date - interval '12 month')
> > 
> >   order by A.id DESC
> > 
> >   limit 2000
> > 
> > )
> > 
> > delete from A where id in (select id from qry);
> > 
> > 
> > All three referenced tables have indexes (B.a_id; C.a_id; D.a_id)
> > in 
> > order to make query faster.
> > 
> > So for first 2 million rows it worked really well, taking about 1 
> > minute to delete each group of 2000 records.
> > 
> > Then, after a while I just started to get errors like:
> > 
> > 
> > ERROR: update or delete in "A" violates foreign key "fk_C_A" in
> > "C".
> > 
> > DETAIL:  Key (id)=(3240124) is still referenced by table "C".
> > 
> > 
> > Seems to me that indexes got lost in the path - the query is
> > really 
> > specific and no "C" referenced records can be in my deletion.
> > 
> > Has anyone faced a behavior like this?
> > 
> > Am I doing something wrong?
> > 
> > 
> 
> Of course:
> Version string: PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu, compiled
> by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
> Oracle Linux 7 x64 with all updates. Running on EXT4 file system.
> Computer is Dell R420 with mirrored disks, 80GB of RAM (database has
> < 
> 40GB in total).
> 
> Sorry for not putting the info in the first e-mail.
> 
> Edson
> 
> 

What does:-

SELECT COUNT(*) FROM C WHERE C.a_id = 3240124;

return?

Is it a many-to-one or a one-to-one relationship?




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Automate copy - Postgres 9.2

2016-06-13 Thread rob stone
On Tue, 2016-06-14 at 13:33 +1200, Patrick B wrote:
> 
> 
> 2016-06-14 9:47 GMT+12:00 Adrian Klaver :
> > On 06/13/2016 02:42 PM, Patrick B wrote:
> > > Hi guys,
> > > 
> > > I created this function:
> > > 
> > >             CREATE or REPLACE FUNCTION function(account_id
> > > integer)
> > > 
> > >             RETURNS void AS $$
> > > 
> > >             begin
> > > 
> > >             execute 'COPY
> > > 
> > >             (
> > > 
> > >             SELECT * FROM backup_table WHERE account_id = ' ||
> > >             account_id || 'AND status = 1
> > > 
> > >             )
> > > 
> > >             TO ''/var/lib/pgsql/'||account_id||'.sql''';
> > > 
> > >             end
> > > 
> > >             $$ language 'plpgsql';
> > > 
> > > 
> > > 
> > > The function works fine.. but is not what I need, actually.
> > > The function above works by calling it specifying the account_id.
> > > For
> > > example:
> > > 
> > > You want to copy ( backup ) for the account_id number 63742:
> > > 
> > >     select function(63742);
> > > 
> > > 
> > > *What I need is:*
> > > 
> > > When calling the function, I have to specify the limit of
> > > account_ids to
> > > be copied. For example:
> > > To perform the commands in the function to 40 different
> > > account_ids:
> > > 
> > >     select function (40);
> > > 
> > > 
> > > 
> > > How can I do that? I can't...
> > > 
> >  
> > I believe this has been asked and answered, namely there needs to
> > be further information on how you want to determine the account ids
> > to be selected.
> > 
> 
> The account_ids can be random.. does not need to have an order, as
> they all will be copied some day.
> 
> There are more than 1.000.000 million rows in that backup table (
> attachments: as pictures, pdfs, etc ), and that's why I can't specify
> the account_Id manually.. and also need a limit, so the server won't
> stop while performing the COPY
> 
> > - Also, each file must have the account_id's name. Example for the
> > account_id = 124134
> > 124134.sql
> 
> Please, if you guys could give a help here.. 
> Cheers
> P.


1) COPY TO produces a plain text file, so having the file created with a
suffix of .sql is slightly misleading. You can specify the delimiter and
quoting characters, so a suffix of .csv is more appropriate.

2) Assuming backup_table is static and nobody is inserting or updating
data, you just need to read the rows and write out the million or so
individual files. If it's not static, you'll have to update (or delete)
the rows written to file, perhaps SET status = 2, so that you don't
write the same file multiple times.

3) If you are worried about throttling, put a pause into the program
looping through backup_table. You have to write a program to read
backup_table in order to supply your function with an account_id.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread rob stone
On Fri, 2016-06-10 at 13:01 -0700, Ken Tanzer wrote:
> Hi.  I was hoping this list might be able to offer some
> help/advice/suggestions/opinions about feasibility for something I
> want to implement, namely converting Postgres constraints into PHP
> logic.  Here's the context and explanation:
> 
> I work on a PHP web app using Postgres.  When possible, we try to
> build as much logic as possible directly into the DB.  The app
> already automatically reads NOT NULL and foreign key constraints from
> the DB, and enforces them through the UI thus preventing people from
> getting ugly database errors.  It doesn't do that with check
> constraints and table constraints though, which means we either end
> up duplicating the constraint logic in PHP, or else sometimes get
> lazy/expedient and only put the constraint into PHP.  Obviously
> neither of those is ideal.
> 
> What would be ideal is for the app to handle those constraints
> automatically.  It looks like I can pull them out (as SQL) from
> information_schema.check_constraints, with the remaining issue being
> how to make them usable in PHP.
> 
> I'm wondering if anyone has done this already, or if there is some
> kind of library available for this purpose?
> 
> If not, and absent any better suggestions, I'm looking at trying to
> parse/search/replace.  This might well be imperfect, and error-
> prone.  But if I can get something that at least works in a lot of
> cases, that would help a lot.  So as a simple example, converting
> from
> 
> ((sat3 >= 0) AND (sat3 <= 5))
> 
> to the needed format:
> 
> (($rec['sat3'] >=0) and ($rec['sat3'] <= 5))
> 
> seems like it would be relatively straightforward, since the
> structure of the logic is neatly parallel between SQL and PHP.  Each
> of these below, and many others I could pull, all have additional
> complications beyond that though:
>  (((incentive_sent_date IS NULL) OR (incentive_sent_date >=
> residence_date)))
> (((application_date IS NOT NULL) OR (employment_date IS NOT NULL)))
>  (xor((assessed_by IS NULL), (assessor_organization IS NULL)))
>  (((release_to IS NULL) OR ((consent_type_code)::text = ANY
> ((ARRAY['ROI'::character varying, 'MEDIA'::character
> varying])::text[]
> So anyway, there's my issue.  Any thoughts/comments/suggestions
> welcome.  Thanks in advance!
> 
> Ken
> 
> 
Hi Ken,

Would this be static or dynamic?
For example, if you altered a column to become defined as NOT NULL,
say, when you build the form used to maintain that table you'd like to
have a "required" attribute against the input field for that column. So
if it were dynamic you'd have to access the database information_schema
each time you wanted to send the HTML down the wire as well as when
carrying out validation.
Also, views are updateable and you'd have to check the tables and
columns making up the view.
I've never used pg_meta_data but it might be helpful.
Another thought would be to write a program that scanned all the tables
in your database and wrote out a table in your application's schema
where the table and column constraints were synthesised in a way
suitable for use in PHP code. 
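
For pulling the raw check-constraint text out of information_schema,
something along these lines should work (note that NOT NULL constraints
also show up here as system-generated CHECK clauses):

SELECT tc.table_name,
       tc.constraint_name,
       cc.check_clause
FROM information_schema.table_constraints AS tc
JOIN information_schema.check_constraints AS cc
  ON cc.constraint_schema = tc.constraint_schema
 AND cc.constraint_name   = tc.constraint_name
WHERE tc.constraint_type = 'CHECK'
  AND tc.table_schema = 'public'
ORDER BY tc.table_name;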

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Automate copy - Postgres 9.2

2016-06-09 Thread rob stone
On Wed, 2016-06-08 at 23:50 -0400, David G. Johnston wrote:
> 
> POn Wednesday, June 8, 2016, Patrick B 
> wrote:
> > > ERROR:  syntax error at or near "||"
> > > LINE 12:     TO '/var/lib/pgsql/' || crtrow.account_id |...
> > 
> 
> Which tells me you cannot use an expression as a file name.  The docs
> support this conclusion.
> 
> You probably need to use EXECUTE after constricting a string.
> 
> You could also try psql in a shell script.
> 
> David J. 


If this is something that you'll never ever have to do again in the
future, you could download ExecuteQuery which has a function to export
a result set as either XML or CSV delimited file output.

OTOH, if this is going to be something run on a regular basis, I think
you'd be better off writing a little program, or as David suggested
embedding this into a shell script.

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioned tables do not return affected row counts to client

2016-06-02 Thread rob stone
On Thu, 2016-06-02 at 11:01 -0700, rverghese wrote:
> We are looking to move from one large table to partitioned tables.
> Since the
> inserts and updates are made to the master table and then inserted
> into the
> appropriate partitioned table based on the trigger rules, the
> affected_rows
> returned to the client (PHP in this case) is always 0. We have been
> using
> the affected_rows to check various things on the client end. Is there
> an
> alternative to check for rows affected with partitioned tables? Seems
> like a
> pretty big missing feature. 
> Thanks
> RV
> 
> 
> 
> --
> View this message in context: http://postgresql.nabble.com/Partitione
> d-tables-do-not-return-affected-row-counts-to-client-tp5906112.html
> Sent from the PostgreSQL - general mailing list archive at
> Nabble.com.
> 
> 

pg_affected_rows returns its value from the last SQL statement
executed.
So, if the last one was a "COMMIT" say, then it returns zero.


I guess you already know this.
Without knowing the complete circumstances, all I can suggest is
running a query using your partitioning rules to verify that the rows
were inserted into the correct partition.
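
For example, one quick way to see which partition recently inserted rows
actually landed in (master_table and inserted_at are placeholders for your
own names):

SELECT p.tableoid::regclass AS partition,
       count(*) AS rows_inserted
FROM master_table AS p
WHERE p.inserted_at >= now() - interval '1 hour'
GROUP BY 1
ORDER BY 1;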

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Debian and Postgres

2016-05-05 Thread rob stone
Hello Adrian,On Thu, 2016-05-05 at 13:47 -0700, Adrian Klaver wrote:
> 
> Exactly. Showing the list the error you get when you cannot connect
> help 
> may with solving that problem and save you a great of time. What
> have 
> you got to lose?
> 

I have nothing to "lose".
There is NO error, per se. The progress bars just keep churning and
absolutely nothing happens. All you can do is cancel.
Nothing in the log files. No exceptions thrown. A black hole. I've
waited minutes to see if it can connect, but no. I tried running on the
9.4 cluster but the same thing. 

I appreciate all your suggestions.
Tomorrow I'll talk to a friend of mine who is a Java guru to see if it
is possible to force some kind of stack trace or something that will
provide a clue as to what is happening. If you can't find the driver
you'd expect a DriverManager exception to be thrown or maybe a
ClassNotFound. Not even this. It's the lack of any error message
anywhere that is frustrating.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Debian and Postgres

2016-05-05 Thread rob stone
Hello John,On Wed, 2016-05-04 at 21:43 -0700, John R Pierce wrote:
> On 5/4/2016 1:55 PM, rob stone wrote:
> > I can connect via psql and issue queries without any problems.
> > Trying
> > to connect via JDBC fails. Trying to connect by an application
> > fails.
>  
> one potential difference, psql will connect via a unix domain socket
> if you  don't specify a -h hostname, while JDBC can only connect via
> a tcp socket.  jdbc connecting to localhost will match `host`
> lines in the pg_hba.conf file, while psql connecting without a host
> specification will match `local` line(s).
> 
> so, please show us your jdbc connection string, and your psql command
> line, and also show us your pg_hba.conf file.
> 

Yes, I'm aware of the difference.
The pg_hba.conf file has a date of Feb 24, 2016. This problem occurred
due to me deleting packages via synaptic a few days ago.
I've also just discovered that the resolv.conf file was wacked. So it
has to have something to do with the network stack. No clues in syslog,
just the message in the Postgres log about not starting autovacuum when
it is supposed to start that process and always has up until now.

Cheers,
rob 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Debian and Postgres

2016-05-05 Thread rob stone
Hello Adrian,On Wed, 2016-05-04 at 21:08 -0700, Adrian Klaver wrote:
> 
> So Debian does not rotate the logs into history.log..gz?
> 

Yes, it does! Didn't realise it. You learn something every day.

23 packages removed and 31 purged.

Going thru the list slowly.

Thanks,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Debian and Postgres

2016-05-04 Thread rob stone
On Wed, 2016-05-04 at 16:50 -0700, Adrian Klaver wrote:
> On 05/04/2016 04:38 PM, rob stone wrote:
> > 
> > 
> Hmm, it is Ubuntu not Debian, but:
> 
> sudo apt-get remove whois
> 
> vi /var/log/dpkg.log
> 
> 2016-05-04 16:42:39 status installed whois:amd64 5.1.1
> 2016-05-04 16:42:39 remove whois:amd64 5.1.1 
> 2016-05-04 16:42:39 status half-configured whois:amd64 5.1.1
> 2016-05-04 16:42:39 status half-installed whois:amd64 5.1.1
> 2016-05-04 16:42:39 status triggers-pending man-db:amd64 2.6.7.1-
> 1ubuntu1
> 2016-05-04 16:42:39 status config-files whois:amd64 5.1.1
> 2016-05-04 16:42:39 status config-files whois:amd64 5.1.1
> 2016-05-04 16:42:39 status config-files whois:amd64 5.1.1
> 2016-05-04 16:42:39 status not-installed whois:amd64 
> 
> 
> vi /var/log/apt/history.log
> 
> Start-Date: 2016-05-04  16:42:39
> Commandline: apt-get remove whois
> Remove: whois:amd64 (5.1.1)
> End-Date: 2016-05-04  16:42:39
> 

Hello Adrian,

I read Tom's post. There is no firewall. Stand alone box.

The apt history.log is overwritten each time.
There is a term.log (full of Control-M's) which lists all the activity,
including the removals, as follows:-

liblept4 replaced by liblept5 and is used for image processing.
liblwres90 replaced by liblwres141 used by BIND
libpcrecpp0 replaced by libcrecpp0v5 used for Perl Regex
libsodium13 replaced by libsodium18 NaCl library
tzdata-java now gone for good, and
libdbus2.0-cil removed from Mono -- not mono -- as in mono-runtime,
etc.

So, liblwres and libsodium are the two obvious culprits. I'll have to
obtain the sources and diff the files to see if something contained in
the earlier version is missing or altered in the latest version.

I have also gone thru the major packages that have anything to do with
ports, sockets, etc. such as network-manager, isc-dhcp-client, doing an
"apt show" to list dependencies, then verifying that those dependencies
are the correct versions. So far libreadline6 and libdns-export162 were
earlier versions and libteam-utils was missing.

So, I'll let you know what else I find.

Thanks for the help.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Debian and Postgres

2016-05-04 Thread rob stone
On Wed, 2016-05-04 at 14:51 -0700, Adrian Klaver wrote:
> 
> > I can connect via psql and issue queries without any problems.
> > Trying
> > to connect via JDBC fails. Trying to connect by an application
> > fails.
> Are you using the same connection parameters?
> 
> In particular are you using local for the psql connection and some
> form 
> of host for the others?
> 
> I ask because this looks somewhat similar to this thread:
> 
> http://www.postgresql.org/message-id/CAM3xazWDA6asEDDYHcKF_5oSFP4SZj8
> taVHwSF68wM=vmy7...@mail.gmail.com
> 
> where the solution:
> 
> http://www.postgresql.org/message-id/CAM3xazWDA6asEDDYHcKF_5oSFP4SZj8
> taVHwSF68wM=vmy7...@mail.gmail.com
> 
> was:
> 
> "That was indeed the root cause. The /etc/hosts file on the server
> had
> incorrect permissions which caused localhost to not resolve."
> 
> 

/etc/hosts has a file date of Dec 19 2014.

Everything worked fine yesterday. Absolutely nothing has been altered
except some packages have been removed and none of the log files can
give me a clue as to which ones.

I'll just have to continue hunting around trying to figure out what I
did and probably file a bug report against synaptic for losing the
removal info from its history logs.

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Debian and Postgres

2016-05-04 Thread rob stone
Hello,

This is a Debian problem that has caused a problem starting Postgres.
So, I'm posting it here in the hope that somebody has experienced it
previously and can assist me in fixing it.

Yesterday, it started up as per normal and first command issued via
JDBC driver was run:-

2016-05-04 10:39:39 AESTLOG:  MultiXact member wraparound protections
are now enabled
2016-05-04 10:39:39 AESTLOG:  database system is ready to accept
connections
2016-05-04 10:39:39 AESTLOG:  autovacuum launcher started
2016-05-04 10:40:00 AESTLOG:  duration: 224.906 ms  parse :
SET extra_float_digits = 3


Today, the following was in the log:-

2016-05-05 03:44:53 AESTLOG:  test message did not get through on
socket for statistics collector
2016-05-05 03:44:53 AESTLOG:  disabling statistics collector for lack
of working socket
2016-05-05 03:44:53 AESTWARNING:  autovacuum not started because of
misconfiguration
2016-05-05 03:44:53 AESTHINT:  Enable the "track_counts" option.
2016-05-05 03:44:53 AESTLOG:  database system was shut down at 2016-05-
04 11:56:37 AEST
2016-05-05 03:44:54 AESTLOG:  MultiXact member wraparound protections
are now enabled
2016-05-05 03:44:54 AESTLOG:  database system is ready to accept
connections
2016-05-05 03:58:29 AESTLOG:  duration: 787.241 ms  statement: select
count(*) from boiler_plate;


Relative section of conf file unchanged for months:-

# - Query/Index Statistics Collector -

#track_activities = on
track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024   # (change requires restart)
stats_temp_directory = 'pg_stat_tmp'


# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off


I can connect via psql and issue queries without any problems. Trying
to connect via JDBC fails. Trying to connect by an application fails.

I use synaptic for package management. Yesterday I was running short on
space and removed some packages described as *obsolete* by synaptic.
My guess is that a dependency chain is incorrect and a certain package
is not in fact obsolete.
For some reason the synaptic history log only shows tzdata-java as
being removed but in fact several libraries and other packages were
permanently removed. The dpkg logs only show installation details.
I don't know why synaptic failed to record the removals in its history
logs as if it had, then I could trawl thru the logs and restore the
missing packages.
The apt logs do not list removals either.

Versions:-

Java JDK version 1.8.0_91
Linux [ 4.5.0-1-amd64 ]
JDBC 9.4-1208
Postgres psql (9.5.2)
libpq5_9.5.2-1_amd64.deb


Any assistance appreciated.

Thanks,
Rob




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread rob stone
On Wed, 2016-04-06 at 10:33 +, Albe Laurenz wrote:


> Every row has two system columns associated with it: xmin and xmax
> 
> xmin is the transaction ID that created the row, while xmax is
> the transaction ID that removed the row.
> 
> So when an update takes place, xmax of the original row and xmin
> of the new row are set to the transaction ID of the current
> transaction.
> 
> 
> Yours,
> Laurenz Albe
> 


Hello Laurenz,

Out of curiosity:-
1) If you only updated a column defined as BOOLEAN, DATE or TIMESTAMP,
the tuple's physical size should be the same. Is it written back to the
same page with altered xmin, xmax values?

2) If you defined a table with a FILLFACTOR of say 70%, and there is
still space left on its page for the updated tuple, does the same
apply?

I am curious because of "while xmax is the transaction ID that
*removed* the row".
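
One way to observe it for yourself is to look at the system columns before
and after such an UPDATE (some_table and id are placeholders):

SELECT ctid, xmin, xmax, *
FROM some_table
WHERE id = 42;

-- ctid is (block number, line pointer); if the block number is unchanged
-- after the UPDATE, the new tuple version stayed on the same page.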

Not urgent.

Cheers,
rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] More correlated (?) index woes

2016-03-28 Thread rob stone
On Mon, 2016-03-28 at 20:23 +0100, Geoff Winkless wrote:
> So I accept that when using MIN(sc_id) against scdate it makes
> statistical sense to use the sc_id index for a reasonable percentage
> of the full range of scdate, unless we know in advance that scdate is
> closely correlated to sc_id (because using MIN means we can stop
> immediately we hit a value).
> 
> However I'm now finding a similar problem when using a multi-table
> DELETE, where the same obviously can't apply.
> 
> This query:
> 
> DELETE FROM pa USING legs WHERE legs.scdate BETWEEN 20160220 AND
> 20160222 AND legs.sc_id=pa.sc_id;
> 
> does what one would hope:
> 
>  Delete on pa  (cost=0.99..705406.24 rows=36570 width=12)
>    ->  Nested Loop  (cost=0.99..705406.24 rows=36570 width=12)
>          ->  Index Scan using legs_scdate_idx on legs
>  (cost=0.43..1171.88 rows=14458 width=10)
>                Index Cond: ((scdate >= 20160220) AND (scdate <=
> 20160222))
>          ->  Index Scan using pa_pkey on pa  (cost=0.56..48.33
> rows=38 width=10)
>                Index Cond: (sc_id = legs.sc_id)
> 
> 
> However as soon as I add an extra test for field1 IS NULL, it
> apparently goes insane:
> 
>  Delete on pa  (cost=577260.90..626296.46 rows=23732 width=12)
> (actual time=41870.770..41870.770 rows=0 loops=1)
>    ->  Hash Join  (cost=577260.90..626296.46 rows=23732 width=12)
> (actual time=37886.396..41315.668 rows=44960 loops=1)
>          Hash Cond: (legs.sc_id = pa.sc_id)
>          ->  Index Scan using legs_scdate_idx on legs
>  (cost=0.43..1171.88 rows=14458 width=10) (actual time=0.030..13.667
> rows=21281 loops=1)
>                Index Cond: ((scdate >= 20160220) AND (scdate <=
> 20160222))
>          ->  Hash  (cost=481691.12..481691.12 rows=5497868 width=10)
> (actual time=37805.756..37805.756 rows=4875870 loops=1)
>                Buckets: 131072  Batches: 64  Memory Usage: 4311kB
>                ->  Seq Scan on pa  (cost=0.00..481691.12 rows=5497868
> width=10) (actual time=0.008..35869.304 rows=4875870 loops=1)
>                      Filter: (field1 IS NULL)
>                      Rows Removed by Filter: 2688634
>  Planning time: 0.447 ms
>  Execution time: 41870.832 ms
> 
> Running ANALYZE makes no difference.
> 
> Table pa has 7522676 rows, 4834042 of which have field1 NULL, so it's
> absolutely not reasonable to expect this to be an optimal strategy.
> 
> Any suggestions as to how I can improve this query?
> 
> Thanks :)
> 
> Geoff



What does:-

DELETE FROM pa
WHERE pa.field1 IS NULL
AND pa.sc_id IN (SELECT legs.sc_id FROM legs
WHERE legs.scdate BETWEEN 20160220 AND 20160222)

give as a cost when you run EXPLAIN ANALYZE over it?

HTH

Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] spurious /dev/shm related errors on insert

2016-03-18 Thread rob stone
On Fri, 2016-03-18 at 10:45 -0400, Michael Charnoky wrote:
> I should have noted: the application is using PostgreSQL 9.5.0,
> running on Ubuntu 14.04
> 
> Mike
> 
> On Fri, Mar 18, 2016 at 10:41 AM, Michael Charnoky 
> wrote:
> > Hi, I'm seeing random errors from an application that is performing
> > DB inserts. The error happens spuriously and looks like this from
> > the application side:
> > 
> > could not open file "/dev/shm/postgres_apptable_47861701461760" for
> > reading: No such file or directory
> > 
> > The PostgreSQL logs show:
> > 
> > 2016-03-18 07:25:01 UTC ERROR:  could not open file
> > "/dev/shm/postgres_apptable_47861701461760" for reading: No such
> > file or directory
> > 2016-03-18 07:25:01 UTC STATEMENT:  COPY urltable FROM
> > '/dev/shm/postgres_apptable_47861701461760' USING DELIMITERS '#'
> > WITH NULL AS '\null';
> > 
> > Any clues? I couldn't find any similar issues reported by other
> > users. Thanks!
> > 
> > Mike
> > 
> > 


Running 9.5.1 installed from Debian packages, when Postgres is started
it creates this file in /dev/shm


-rw--- 1 postgres postgres 2316 Mar 19 05:24 PostgreSQL.1804289383

It's binary.

Bye.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Plan to support predicate push-down into subqueries with aggregates?

2016-03-10 Thread rob stone
On Thu, 2016-03-10 at 07:09 -0500, Adam Brusselback wrote:
> Rob,
> I understand that if I were to replicate the logic in that view for
> every use case I had for those totals, this would not be an issue.
> But that would very much complicate some of my queries to the point
> of absurdity if I wanted to write them in a way which would push
> everything down properly.  The issue is, that I need that data to be
> able to join to that view from other smaller sets of data and not
> have it calculate the totals for every "header" I have in my system,
> just to throw away 99% of them.
> 
> My application is for contract and rebate management, so as David
> said, basically accounting.  We allow users to set up contracts to
> sell products to their customers, and then track and verify the
> rebates they get back are correct, and that they're not overpaying.  
> The equivalent of the header_total view is used in quite a few
> different places in my application.  In one case, the state of one
> object in my system (called a deduction) is derived from 5 different
> data points, 2 of which are from my equivalent of the "header total"
> view.  Things like the state for the deduction object are derived
> from a query that I encapsulated inside a view.  You can see how this
> proliferates.
> 


Hello Adam,

With regards to proliferation, I guess it depends on whether you have
multiple views or multiple tables and which is best suited to your
application. If you have a table of customers all you need is a single
table of transactions where one of the columns is a foreign key
pointing to the customer. Want to know a customer's outstanding balance
-- sum their transactions. You wouldn't have a table just for invoices,
one for credit notes, one for discounts given, one for payments, etc.,
etc. The same goes if you want to know the stock on-hand and stock
available quantities for a given product/warehouse. Do it all via a
view or views, as appropriate. Have a column in the table
transaction_type CHAR(1) NOT NULL so you can put a CASE statement
across it so as to know whether to add or subtract. 
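
Just to make that concrete, a minimal sketch (table and column names are
made up for illustration, not from your schema):

CREATE VIEW customer_balance AS
SELECT ct.customer_id,
       SUM(CASE WHEN ct.transaction_type = 'I' THEN ct.amount  -- invoice
                ELSE -ct.amount          -- credit note, discount, payment
           END) AS outstanding_balance
FROM customer_transaction ct
GROUP BY ct.customer_id;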

In your test case example, if you allow alterations to the values in
the columns rate, quantity or amount, you need triggers in order to
maintain database integrity. Then if somebody with the appropriate
privileges does "ALTER TABLE blah DISABLE ALL TRIGGERS;" your database
integrity goes out the window. 

All of the above is completely off your point. I just like to keep it
as straight forward as possible. 

My apologies for having a bit of a rant.

Regards,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Plan to support predicate push-down into subqueries with aggregates?

2016-03-09 Thread rob stone
On Wed, 2016-03-09 at 12:25 -0500, Adam Brusselback wrote:
> I responded yesterday, but it seems to have gotten caught up because
> it was too big with the attachments... Here it is again.
> 
> Sorry about not posting correctly, hopefully I did it right this
> time.
> 
> So I wanted to see if Sql Server (2014) could handle this type of
> query differently than Postgres (9.5.1), so I got an instance of
> express installed and ported the test script to it.
> 
> I updated my Postgres script so the data is the same in each server. 
> The end result is Sql Server seems to be able to optimize all of
> these queries MUCH better than Postgres.
> I disabled parallelism in Sql Server to make the comparison fair.
> 
> I've attached the explain analyze results for Postgres, and the
> execution plan for Sql Server (in picture form... don't know a better
> way)
> 
> Results are:
> --Sql Server:15ms average
> --Postgres: 6ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.header_id = 26;
> 
> 
> --Sql Server: 15ms average
> --Postgres: 1250ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.header_id < 27
> AND header.header_id > 24;
> 
> 
> --Sql Server: 567ms average
> --Postgres: 1265ms average
> SELECT *
> FROM header
> INNER JOIN header_total
> ON header.header_id = header_total.header_id
> WHERE header.description like '%5%';
> 
> 
> --Sql Server: 15ms average
> --Postgres: 1252ms average
> SELECT *
> FROM header_total
> WHERE header_total.header_id IN (
> SELECT header_id
> FROM header
> WHERE header.header_id < 27
> AND header.header_id > 24);
> 


Hello Adam,

I can't believe that something originating from the dark side can do
things better. So I went thru your test case. Running this on my trusty
Compaq Presario with 5Gb of RAM and using Postgres 9.5.1.

1) Created your tables but re-named the column "description" as
"header_description". header has 1,000 rows, detail_1 has 400,391 rows
and detail_2 has 3,451 rows.
2) altered your index definitions to make them unique by including
their primary keys. (I have a penchant for unique indexes).
3) Ran the ANALYZE.
4) Shut down the database so as to clear the cache.
5) A while later started the database and created this view:-

CREATE OR REPLACE VIEW header_total AS
SELECT header.header_id, header.header_description,
       amount_1, detail_1_count,
       amount_2, detail_2_count
FROM header
LEFT JOIN (
    SELECT header_id, SUM(rate * quantity) AS amount_1,
           COUNT(detail_1_id) AS detail_1_count
    FROM detail_1
    GROUP BY detail_1.header_id
) detail_1
    ON header.header_id = detail_1.header_id
LEFT JOIN (
    SELECT header_id, SUM(amount) AS amount_2,
           COUNT(detail_2_id) AS detail_2_count
    FROM detail_2
    GROUP BY detail_2.header_id
) detail_2
    ON header.header_id = detail_2.header_id;

Note that:-
(a) I included header_description in the definition.
(b) I removed some lines, as if you want the total you may as well
include it in your select from the view, and for the life of me I
couldn't understand the purpose of:-

header.amount = coalesce(detail_1.amount, 0) +
coalesce(detail_2.amount, 0) as balanced

Is "balanced" supposed to be a boolean?

If you need header.amount include it in the view.


6) Ran your three queries and here are the timings from the log:-

2016-03-10 13:07:47 AEDTLOG:  duration: 0.221 ms  parse :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:47 AEDTLOG:  duration: 0.551 ms  bind :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:47 AEDTLOG:  duration: 1.103 ms  execute :
SELECT * FROM header_total WHERE header_id = 26
2016-03-10 13:07:54 AEDTLOG:  duration: 0.180 ms  parse :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:07:54 AEDTLOG:  duration: 0.481 ms  bind :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:07:55 AEDTLOG:  duration: 458.418 ms  execute :
SELECT * FROM header_total WHERE header_id < 200
2016-03-10 13:08:01 AEDTLOG:  duration: 0.230 ms  parse :
SELECT * FROM header_total WHERE header_description like '%5%'
2016-03-10 13:08:01 AEDTLOG:  duration: 0.542 ms  bind :
SELECT * FROM header_total WHERE header_description like '%5%'
2016-03-10 13:08:01 AEDTLOG:  duration: 459.346 ms  execute :
SELECT * FROM header_total WHERE header_description like '%5%'

I don't believe that (0.230 ms + 0.542 ms + 459.346 ms) could be
described as "slow" when it returns 271 rows.


Obviously it would help if there were more details about your
application. Also creating the test data via those bulk inserts doesn't
replicate any randomness that may occur via inserts made by an
application.

BTW, I'm a great fan of using views. 

HTH,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inserting JSON via Java PreparedStatment

2016-03-07 Thread rob stone
On Mon, 2016-03-07 at 09:32 -0700, David G. Johnston wrote:
> 
> > 
> > Hint: You will need to rewrite or cast the expression.
> > 
> Take the hint, literally.  You never did show the SQL but usually the
> least complex way to solve this is to indeed transfer the data as a
> string/text and then instruction PostgreSQL to convert (i.e., cast)
> it to json.
> 
> SELECT (?)::json;  <-- that ? parameter is seen as text; then you
> convert it.  The parentheses should be optional but I use them to
> emphasize the point.
> 
> then
> 
> pStmt.setString(1, dtlRec.toString());
> 
> David J.
> 

For some reason there is no java.sql.Type = JSON. There is ARRAY
though.

I would have written this:-

JsonObject mbrLogRec = Json.createObjectBuilder().build();
mbrLogRec = Json.createObjectBuilder()
.add("New MbrID", newId)
.build();

as

JsonObject mbrLogRec = Json.createObjectBuilder().add("New MbrID",
newId).build();

pStmt.setObject(11, mbrLogRec);

If you pass a string to your prepared statement and want to cast it in
your INSERT/UPDATE statement, you will probably have to include the
double quotes, colons and commas. Never tried it, just a guess.
Could become complicated when you have multiple pairs of JSON
attributes.

E.g.

JsonObject mbrLogRec = Json.createObjectBuilder().add("New MbrID",
newId).add("Old MbrID", "fred").build();
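
If you do go the text-plus-cast route David describes, a minimal sketch
of the statement text handed to the PreparedStatement (table and column
names are hypothetical):

INSERT INTO member_log (member_id, log_data)
VALUES (?, ?::json);

The string produced by JsonObject.toString() already carries the double
quotes, colons and commas, so pStmt.setString() with that string should
be enough for the cast to succeed.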


I'm sorry but I don't have time at the moment to knock up a test
program and verify any of this. I'm not an expert on JSON objects in
Java.

Just my two bob's worth.

HTH,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Case and accent insensitive

2016-01-26 Thread rob stone
On Tue, 2016-01-26 at 11:06 +, Max wrote:
> Is there a collation for case and accent insensitive filter? I just
> found the use of operators like ILIKE ou CITEXT module but I can't
> use ILIKE in the queries and the CITEXT is a field-by-field solution
> for the case insensitive part of the problem. I also found
> the unaccent text search dictionary that could help with the accent
> part.
> 
> In the MySQL world we have the database level
> collation latin1_swedish_ci where 'maca' = 'Maçã' is true and no
> field level configuration is required.
> 
> This database is hosted in Amazon Cloud so I'm also limited by what I
> can do in this environment.


Have you considered storing the text in its HTML equivalent?

So "ma" becomes "apple" in Portuguese.

You'd have to convert your query parameters before running the select.
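
For comparison, the unaccent dictionary you already found can handle the
accent part without changing what you store; a minimal sketch, assuming
the extension can be installed (table and column names are hypothetical):

CREATE EXTENSION IF NOT EXISTS unaccent;

SELECT *
FROM product
WHERE lower(unaccent(product_name)) = lower(unaccent('Maçã'));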

My tuppence worth.

HTH
Robert


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A motion

2016-01-24 Thread rob stone
On Sun, 2016-01-24 at 17:27 -0500, Dane Foster wrote:
> 
> On Sun, Jan 24, 2016 at 4:00 PM, bret_stern  ment.com> wrote:
> > Adrian,
> > I hope you reconsider. You have far more value to the list.
> > The CoC dictators will flame out, then where will we be.
> > Just sit on the sidelines until the show is
> > over.
> > Look forward to the next awesome year.
> > My CoC: "keep it technical"
> > 
> > Fore
> > 
> 
> +1 To Adrian sticking around. I'm relatively new to participating on
> this list and PostgreSQL in general and you've been extremely helpful
> to me personally in answering questions I've raised and providing
> guidance/suggestions. I'm no fan of the CoC conversation either so I
> scan then delete and go on w/ my day. It's a strategy that is working
> for me and I hope you will adopt it and stick around. Newbies like
> myself need people like on this list.
> 
> Regards,
> 
> Dane
>  


Plus one.
It's supposed to be a technical list.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL GUI tools

2016-01-04 Thread rob stone
On Sun, 2016-01-03 at 03:15 -0700, subhan alimy wrote:
> Hello everyone,
> 
> Is there any PostgreSQL GUI tools to pull the tables structure along
> with
> the relationships?
> 
> 
> Much appropriated.  
> 
> 

If you are a Java shop you can use ExecuteQuery from
www.executequery.org

You need to install the relevant JDBC drivers for the database you
connect to.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Options for complex materialized views sharing most of the same logic?

2015-12-28 Thread rob stone
On Sun, 2015-12-27 at 15:39 -0600, Wells Oliver wrote:
> I then have four different views which do largely the same thing,
> totaling the days four different ways:
> 
> 1) by last 10
> 2) by the last 30
> 3) by the last 60
> 4) by the entire year
> 
> Each of these views is basically a copy of one another for 99% of the
> code (the summing, percentages, etc). The only differences are:
> 
> 1) checks the days_back <= 10
> 2) checks days_back <= 30
> 3) checks days_back <= 60
> 4) does not check days_back
> 

Hello, I do not follow why you have four views and presumably run four
separate selects.
Assuming one of the base tables to the view has a column containing a
timestamp, why can't you define the column days_back in the view along
the lines of:-

select ((extract(epoch from current_date) - extract(epoch from
tables_date_column)) / (24 * 60 * 60)) as days_back

Then you can test days_back for the appropriate values using CASE, do
the calculations, and end up running a single select query.
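
A minimal sketch of that single query (table and column names are
hypothetical; days_back comes from the expression above):

SELECT player_id,
       SUM(CASE WHEN days_back <= 10 THEN points ELSE 0 END) AS last_10,
       SUM(CASE WHEN days_back <= 30 THEN points ELSE 0 END) AS last_30,
       SUM(CASE WHEN days_back <= 60 THEN points ELSE 0 END) AS last_60,
       SUM(points) AS full_year
FROM (
    SELECT player_id, points,
           ((extract(epoch from current_date) - extract(epoch from
             tables_date_column)) / (24 * 60 * 60)) AS days_back
    FROM games
) AS g
GROUP BY player_id;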

Don't know if this helps. 

Cheers,
Rob  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Permissions, "soft read failure" - wishful thinking?

2015-12-14 Thread rob stone
On Mon, 2015-12-14 at 09:55 -0800, Benjamin Smith wrote:
> Is there a way to set PG field-level read permissions so that a deny
> doesn't 
> cause the query to bomb, but the fields for which permission is
> denied to be 
> nullified? 
> 
> In our web-based app, we have a request to implement granular
> permissions: 
> table/field level permissions. EG: userX can't read
> customers.socialsecurity in 
> any circumstance. We'd like to implement DB-level permissions; so
> far, we've 
> been using an ORM to manage CRUD permissions. 
> 
> This is old hat, but our system has a large number of complex queries
> that 
> immediately break if *any* field permission fails. So, implementing
> this for 
> customers could be *very* painful 
> 
> Is that there is a way to let the query succeed, but nullify any
> fields where 
> read permissions fail? (crossing fingers) We'd be watching the PG
> logs to 
> identify problem queries in this case. 
> 
> 


This is something one usually addresses during the design phase.

You need some sort of mechanism to categorise the users of the app.
Usually a many to one relationship where a bunch of users all share the
same set of privileges.
Then there's your forms. Always display a label "Social Security No."
but leave the field blank, or substitute "Supplied" when it exists or
"Not Supplied" when it is null? You can achieve that via views but you
need a separate view for each category of user and the app needs to
call the select from the appropriate view which means altering the app
after solving how to categorise the users.
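
For example, a minimal sketch of the "Supplied"/"Not Supplied" idea
(table, column and role names are hypothetical):

CREATE VIEW customer_restricted AS
SELECT customer_id,
       customer_name,
       CASE WHEN socialsecurity IS NULL THEN 'Not Supplied'
            ELSE 'Supplied'
       END AS socialsecurity
FROM customers;

REVOKE ALL ON customers FROM app_restricted;
GRANT SELECT ON customer_restricted TO app_restricted;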
You can also create a huge matrix of table_name.column_name accessible
by user category which means you app has to read this first in order to
dynamically build the select list prior to running it. This makes your
forms dynamic as well as you need to pull the labels from somewhere,
input type, etc. all adding I/O time to your DB server.

You need to consider your classes and data factories. You don't want
multiple copies of getter's and setter's for the same table columns.

If there are only a tiny number of restrictions -- such as for a
customer you want to restrict reading social security number and their
credit card details -- you could split this data off into child tables
and disallow drill-down access based upon user category.

Happy to discuss this further off list.

HTH,

Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JDBC and inet type

2015-12-04 Thread rob stone
On Fri, 2015-12-04 at 09:41 +, Tim Smith wrote:
> Hi,
> 
> When I use "preparedStatement.setString(5,ip);" to send values to a
> stored function, it obviously gets sent to postgres as "character
> varying".
> 
> Postgres obviously complains loudly and says " Hint: No function
> matches the given name and argument types. You might need to add
> explicit type casts.".
> 
> What is the appropriate workaround ?
> 
> Thanks!
> 
> Tim
> 
> 

"preparedStatement.setString(5,ip);" will set parameter number five to
the string held in the variable.
That's what setString does. It expects a string to be passed.
Your function's fifth IN argument needs to be defined as a "string".
Within the function you will have to cast it to inet.
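
Something along these lines (function, table and column names are only
an illustration, not tested):

CREATE OR REPLACE FUNCTION log_client_ip(p_user text, p_ip text)
RETURNS void
LANGUAGE sql
AS $$
    INSERT INTO client_log (username, client_addr)
    VALUES (p_user, p_ip::inet);   -- cast the text argument to inet here
$$;

Then setString() on the driver side matches the text argument and the
cast to inet happens inside the function.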

HTH,

Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] AccessExclusiveLock on tuple?

2015-12-02 Thread rob stone
On Wed, 2015-12-02 at 10:47 -0800, Christophe Pettus wrote:
> On Dec 2, 2015, at 10:29 AM, Tom Lane  wrote:
> > The short answer is that heavyweight tuple locks can be taken
> > internally
> > by UPDATE, DELETE, and other row-level operations, and the
> > specifics of
> > which type of lock any given action takes are implementation
> > details that
> > can change from time to time.
> 
> Great, thank you!
> 
> --
> -- Christophe Pettus
>    x...@thebuild.com
> 
> 
> 


Don't know if this helps at all. On the hot standby doco.

http://www.postgresql.org/docs/9.4/static/hot-standby.html

About two-thirds of the way down it refers to the pg_locks view.

Are you using "hot standby"?




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple word synonyms (maybe?)

2015-10-20 Thread rob stone
On Tue, 2015-10-20 at 19:35 +0900, Tim van der Linden wrote:
> Hi All
> 
> I have a question regarding PostgreSQL's full text capabilities and
> (presumably) the synonym dictionary.
> 
> I'm currently implementing FTS on a medical themed setup which uses
> domain specific jargon to denote a bunch of stuff. A specific request
> I wish to implement here are the jargon synonyms that are heavily
> used.
> 
> Of course, I can simply go ahead and create my own synonym dictionary
> with a jargon specific synonym file to feed it. However, most of the
> synonyms are comprised out of more then a single word. 
> 
> The term "heart attack" for example has the following "synonyms":
> 
> - Acute MI
> - MI
> - Myocardial infarction
> 
> As far as I understand it, the tokenizer within PostgreSQL FTS engine
> splits words on spaces to generate tokens which are then proposed to
> each dictionary. I think it is therefor impossible to have "multi-
> word synonyms" in this sense as multiple words cannot reach the
> dictionary. The term "heart attack" would be presented as the tokens
> "heart" and "attack".
> 
> From a technical standpoint I understand FTS is about looking at
> individual words and lexemizing them ... yet from a natural language
> lookup perspective you still wish to tie "Heart attack" to "Acute MI"
> so when a client search on one, the other will turn up as well.
> 
> Should I write my own tokenizer to catch all these words and present
> them as a single token? Or is this completely outside the realm of
> FTS (or FTS within Postgresql)?
> 
> Cheers,
> Tim
> 
> 


Looking at this from an entirely different perspective, why are you not
using ICD codes to identify patient events?
It is a one to many relationship between patient and their events
identified by the relevant ICD code and date.
Given that MI has several applicable ICD codes you can use a select
along the lines of:-
WHERE icd_code IN (  . . . )


I know it doesn't answer your question!

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PSQL Tools

2015-10-19 Thread rob stone
On Sun, 2015-10-18 at 14:48 +, jwienc...@comcast.net wrote:
> 
> Hello
> 
> I am looking for a tool like TOAD for DBA's.   I would like something
> that can generate/extract DDL; report manage tablespace, create
> explain plans, create/mange users, synonyms, triggers, etc.
> 
> 
> regards
> 
> John Wiencek


If you have a JDK 1.7 or above installed, you can use Execute Query
(www.executequery.org). You need to install the jdbc driver appropriate
to your Postgres version.

If you define your FK constraints in your schema, it can generate an ER
diagram. It can also do everything else you might want to manipulate a
data base. Another handy feature is import/export from/to either a
delimited file or XML format.

HTH,

Rob





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] *** QUESTION *** After successful 'BEGIN;' command -- why PGSQL_TRANSACTION_ACTIVE and not PGSQL_TRANSACTION_INTRANS?

2015-10-11 Thread rob stone
 
> ETO::0::LOG:  0: connection received: host=127.0.0.1
> port=1083
> ETO::0::LOCATION:  BackendInitialize,
> src\backend\postmaster\postmaster.c:3850
> ETO::0::LOG:  0: connection authorized: user=its-eto_pg36
> database=eto_sql_db
> ETO::0::LOCATION:  PerformAuthentication,
> src\backend\utils\init\postinit.c:239
> ETO::0::LOG:  0: statement: set client_encoding to 'LATIN1'
> ETO::0::LOCATION:  exec_simple_query,
> src\backend\tcop\postgres.c:890
> ETO::0::LOG:  0: duration: 63.000 ms
> ETO::0::LOCATION:  exec_simple_query,
> src\backend\tcop\postgres.c:1118
> ETO::0::LOG:  0: statement: BEGIN;
> ETO::0::LOCATION:  exec_simple_query,
> src\backend\tcop\postgres.c:890
> ETO::0::LOG:  0: duration: 0.000 ms
> ETO::0::LOCATION:  exec_simple_query,
> src\backend\tcop\postgres.c:1118
> ETO::0::LOG:  0: disconnection: session time: 0:00:00.297
> user=its-eto_pg36 database=eto_sql_db host=127.0.0.1 port=1083
> ETO::0::LOCATION:  log_disconnections,
> src\backend\tcop\postgres.c:
> * * *
> * * *
>  
> Thanks For Any Feedback,
>  
> Steve

Hello Steve,

I really do not understand "why" you need the SQLSTATE code after
executing a "BEGIN" so as to go into transaction state.

AFAIK you can only retrieve the SQLSTATE error code when an error
actually occurs. So, if your query statement was successful, then PHP
does not expose a SQLSTATE code of 0.

If I run the following code:-

--
[PHP code sample stripped by the list archive; it ran
"SELECT * FROM rhubarb" via pg_send_query() and displayed the resulting
error information.]
--

it returns the following:-


ERROR:  42P01: relation "rhubarb" does not exist
LINE 1: SELECT * FROM rhubarb
  ^
LOCATION:  parserOpenTable, parse_relation.c:986


SQLSTATE 42P01 is the error "undefined_table".


Note that you have to use pg_send_query to take advantage of
pg_get_result, etc.


HTH,

Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] BEGIN, END & ROLLBACK commands -- do they produce a SQLSTATE value?

2015-10-03 Thread rob stone
On Sat, 2015-10-03 at 08:18 -0400, Steve Petrie, P.Eng. wrote:
> Greetings To :
>  
> I am migrating a PHP web application to use PostgreSQL instead of
> MySQL.
>  
> And I have never used PostgreSQL before.
>  
> I have two questions about function pg_result_error_field(...);
>  
> An extensive search of the web, did not yield a solution to my
> problem.
>  
> * * *
> * * *
>  
> My PHP program is using the PHP pg_* PostgreSQL functions.
>  
> The test environment is Windows XP SP3 with: PostgreSQL 9.3, Apache
> 2.2.14 and PHP 5.3.1.
>  
> The file php.ini has a line extension = php_pgsql.dll
>  
> phpinfo() shows (when my PHP program is not running):
> ...
>   pgsql
> PostgreSQL Support   enabled
> PostgreSQL(libpq) Version8.4.1
> Multibyte character support  enabled
> SSL support  enabled
> Active Persistent Links  0
> Active Links 0
> 

I'm a Linux user. However, libpq version 8.4.1 doesn't jell with
version 9.3 of PostgreSql.

So, are you trying to obtain the failure "reason" when a "BEGIN",
"COMMIT" or "ROLLBACK" fails?

Cheers,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-25 Thread rob stone
On Tue, 2015-08-25 at 20:17 -0400, Melvin Davidson wrote:
 I think a lot of people here are missing the point. I was trying to 
 give examples of natural keys, but a lot of people are taking great 
 delight 
 in pointing out exceptions to examples, rather than understanding the 
 point.
 So for the sake of argument, a natural key is something that in 
 itself is unique and the possibility of a duplicate does not exist.
 Before ANYONE continues to insist that a serial id column is good, 
 consider the case where the number of tuples will exceed a bigint.
 Don't say it cannot happen, because it can.
 However, if you have an alphanumeric field, let's say varchar 50, and 
 it's guaranteed that it will never have a duplicate, then THAT is a 
 natural primary 
 key and beats the hell out of a generic id field.
 
 Further to the point, since I started this thread, I am holding to it 
 and will not discuss natural primary keys any further.
 
 Other suggestions for good PostgreSQL Developer database (not web 
 app) guidelines are still welcome.
 

Funny how Melvin's attempt to bring order to the chaos ended up as a
discussion about primary keys.

We once hired a genius to design an application to handle fixed
assets. Every table had a primary key named id. Some were integer and
some were character. So the foreign key columns in child tables had to
be named differently. Writing the joins was complex.

I also know of an airline reservation system where you are unable to
alter your e-mail address. It apparently needs a DBA type person to
make the change. I can only guess that your e-mail address is used as a
foreign key in one or more tables. As well as assigning you a frequent
flyer number they also assign another integer identifier. A bit of
common sense goes a long way when designing an application.

Cheers,
rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using the database to validate data

2015-07-25 Thread rob stone
On Thu, 2015-07-23 at 15:34 -0700, JPLapham wrote:
 Tim Clarke wrote
  Shouldn't be too difficult to import those new rows into one table,
  write a procedure that inserts them into the real table one by one 
  and
  logs the validation failure if any - committing good rows and 
  rolling
  back bad. In fact if you could then write the failures to a third 
  table
  with a completely relaxed (or no) validation?
 
 Tim-
 
 Thanks for your response. Yes, you are right, it shouldn't be too 
 difficult,
 and in fact I have already implemented basically what you suggest, 
 see Idea
 2 from my original message.
 
 The problem with this approach is that it fails to find violations 
 such as
 UNIQUE (there are probably others) from within the input data until 
 after
 the first has been committed to the database. But, the error may have 
 been
 with that earlier row, not the later. 
 
 I want my users to fix all the problems with their data and then load 
 it in
 an all or none fashion. 
 
 -Jon
 
If you have multiple users loading (possibly) multiple files and
(possibly) concurrently, then the only solution is to write some code
to process the data.
You also need to consider load sequence. If user A creates a file that
contains data that will end up creating a new primary key and a file
from user B refers to that, then user B needs to wait until user A's
file has been processed successfully.
Without knowing all the details I can envisage a scenario where data
being loaded could reference good data already in the DB as well as
referencing data that exists within that file load, possibly giving a
rejection or an update of the good data.

My 2 cents worth!

Cheers,
Rob (in Floripa)



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to skip duplicate records while copying from CSV to table in Postgresql using COPY

2015-05-24 Thread rob stone

 I think I need to ask more specific way. I have a table say `table1`, where I 
 feed data from different CSV files. Now suppose I have inserted N records to 
 my table `table1` from csv file `c1`. This is ok, next time when again I am 
 importing from a different CSV file say `c2` to `table1`, I just don't want 
 reinsert any record from this new CSV file to table `table1`, if the current 
 CSV data already table has.
 
 How to do this?
 
  My SO link is not a solution to my problem I see now.
 
 -- 
 
 Regards,
 Arup Rakshit
 
 Debugging is twice as hard as writing the code in the first place. Therefore, 
 if you write the code as cleverly as possible, you are, by definition, not 
 smart enough to debug it.
 
 --Brian Kernighan
 
 

Assuming that these CSV files are coming from an external source (e.g.
Bank statement transactions for feeding into a Bank Rec module) then you
need a program to read the file and handle it accordingly. If end users
are running this, then they would probably appreciate a little report
about what was loaded and what was discarded.
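
If you would rather do it in SQL alone (not what I described above, just
a sketch; the key column used to spot a duplicate is hypothetical), you
can COPY into a staging table and only insert the rows table1 does not
already have:

CREATE TEMP TABLE staging (LIKE table1 INCLUDING ALL);

COPY staging FROM '/path/to/c2.csv' WITH (FORMAT csv);

INSERT INTO table1
SELECT s.*
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM table1 t
    WHERE t.id = s.id          -- whatever identifies a duplicate row
);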

On the other hand, if DBA's are running this you could use ExecuteQuery
(written in Java) that has a facility to load CSV files and it will
report the duplicates. However, you can ignore the duplicates and still
commit the non duplicated transactions to the table, if you so desire.
The default for EQ is NOT to run in auto-commit mode, so you have to
actually issue a commit to save your work. However, this option can be
changed in your preferences.

HTH,
Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server tries to read a different config file than it is supposed to

2015-05-23 Thread rob stone



On Sat, 2015-05-23 at 04:23 -0700, twoflower wrote:
 I thought I understood how specifying a config file path for the
 server works, but that's apparently not the case.
 
 The cluster data is at /storage/postgresql/9.4/data.
 
 The config files are at /etc/postgresql/9.4/main (this is the default
 location on Ubuntu).
 
 This is how the beginning of /etc/postgresql/9.4/main/postgresql.conf
 looks like:
 
 data_directory = '/storage/postgresql/9.4/data'
 hba_file = '/etc/postgresql/9.4/main/pg_hba.conf'
 ident_file = '/etc/postgresql/9.4/main/pg_ident.conf' 
 
 So I wrote a few scripts to make my life easier, e.g. pg94start.sh:
 
 su postgres -c /usr/lib/postgresql/9.4/bin/pg_ctl
 -D /storage/postgresql/9.4/data -o '-c
 config_file=/etc/postgresql/9.4/main/postgresql.conf' 
 
 But running this script did not work, the server would not start. So I
 checked the log file and there was:
 
 FATAL: could not open file
 /storage/postgresql/9.4/data/postgresql.conf: Permission denied 
 
 After fixing the ownership of this file, it worked.
 
 What's the reason the server was trying to access that file? Why does
 not the override given by the config_file parameter work?
 
 Thank you. 
 
 __
 View this message in context: Server tries to read a different config
 file than it is supposed to
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.


Similar hassles with Debian. I use scripts like this to start the
server.

#! /bin/sh
PGDATA=/home/postgres/data94;export PGDATA
PATH=$PATH:/usr/lib/postgresql/9.4/bin;export PATH
PGBINARIES=/usr/lib/postgresql/9.4/bin;export PGBINARIES
cd /usr/lib/postgresql/9.4/bin
./pg_ctl start
exit 0

I have different scripts depending on the version of Postgres that I
need. I know that I could have a generic script and just pass in the
variables. I run them as the postgres user.
It reads the conf file from the PGDATA path.

HTH.

Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running pg_upgrade under Debian

2015-04-20 Thread rob stone



On Mon, 2015-04-20 at 18:05 -0400, Bruce Momjian wrote:
 On Mon, Apr 20, 2015 at 03:02:48PM -0700, Adrian Klaver wrote:
  But pg_upgrade supports tablespaces, and I assume pg_dump/pg_restore do
  as well.
  
  
  I don't think it is about the underlying programs, it is about
  teaching the wrapper script what do with the choices. Sort of like
  pgAdmin not supporting all pg_backup/pg_restore combinations or for
  that matter pg_restore not knowing what to do with a plain text
  pg_dump file.
 
 Understoo, but I was not aware there was anything special required for
 pg_upgrade to support tablespaces.
 



pg_dump will handle tablespaces when you create a sql style dump file.
However, you have to alter one line where it sets default_tablespace to
null.
Obviously, when you run psql to read the dump file the tablespaces have
to exist. There is no create statement as it won't know where you want
to mount your tablespaces.

I've used my Plan B. Ran pg_dump out of the 9.3 instance, altered that
one line, stopped 9.3, started 9.4, ran psql as the postgres user,
created the user and tablespaces and imported the dump file. It all
worked as intended. So, I now have one of my test databases running
under 9.4.

Cheers,
Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running pg_upgrade under Debian

2015-04-19 Thread rob stone



On Fri, 2015-04-17 at 16:16 -0700, Adrian Klaver wrote:
 On 04/17/2015 03:09 PM, rob stone wrote:
  Hello,
 
  I'm trying to upgrade from 9.3 to 9.4 on my laptop and encountering this
  error:-
 
 
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade
  -b /usr/lib/postgresql/9.3/bin -B /usr/lib/postgresql/9.4/bin
  -d /home/postgres/data93/userqueries
  -D /home/postgres/data94/userqueries -U pguserqueries
 
  check for /home/postgres/data93/userqueries/base failed: No such file
  or directory
 
  Failure, exiting
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$
 
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade --version
  pg_upgrade (PostgreSQL) 9.4.1
 
 
  I have two tablespaces defined for this data -- one to hold the tables
  and another for the indices.
 
  There is no base file or directory.
 
 Guessing /home/postgres/data*/userqueries/ is the location of your 
 tablespace, not the cluster directory.
 

Correct.

 The -d and -D need to point at the cluster directory, which will be 
 something like :
 
 /var/lib/postgresql/9.3/main
 
 
  Do I just create a dummy directory named base?
 
 No, you need to point to the correct directory.
 
 See here for more information:
 
 http://www.postgresql.org/docs/9.4/interactive/pgupgrade.html
 

Actually, the wiki pages on upgrading gave me some more information,
particularly about specifying where to find postgresql.conf. I adopted
the symlink suggestion.

However, running the following still gives errors:-

postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade
-b /usr/lib/postgresql/9.3/bin -B /usr/lib/postgresql/9.4/bin
-d /var/lib/postgresql/9.3/main -D /var/lib/postgresql/9.4/main -p 5432
-P 5433 -o ' -c listen_addresses=localhost ' -O ' -c
listen_addresses=localhost ' --check --verbose
Running in verbose mode
Performing Consistency Checks
-
Checking cluster versions   ok
pg_control values:

First log segment after reset:00010002
pg_control version number:937
Catalog version number:   201306121
Database system identifier:   6026352641161422830
Latest checkpoint's TimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:  0/684
Latest checkpoint's NextOID:  12036
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:674
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  0
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value
Data page checksum version:   0
Current pg_control values:

pg_control version number:942
Catalog version number:   201409291
Database system identifier:   6108663141646003887
Latest checkpoint's TimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:  0/719
Latest checkpoint's NextOID:  12142
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:675
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  0
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Size of a large-object chunk: 2048
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value
Data page checksum version:   0


Values to be changed:

First log segment after reset:00010002
/usr/lib/postgresql/9.3/bin/pg_ctl -w -l pg_upgrade_server.log -D
/var/lib/postgresql/9.3/main -o -p 5432 -b  -c
listen_addresses=localhost  -c listen_addresses='' -c
unix_socket_permissions=0700 -c
unix_socket_directories='/usr/lib/postgresql/9.4/bin' start >>
pg_upgrade_server.log 2>&1

*failure*
There were problems executing /usr/lib/postgresql/9.3/bin/pg_ctl -w
-l pg_upgrade_server.log -D /var/lib/postgresql/9.3/main -o -p 5432
-b  -c listen_addresses=localhost  -c listen_addresses='' -c
unix_socket_permissions=0700 -c
unix_socket_directories

[GENERAL] Running pg_upgrade under Debian

2015-04-17 Thread rob stone
Hello,

I'm trying to upgrade from 9.3 to 9.4 on my laptop and encountering this
error:-


postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade
-b /usr/lib/postgresql/9.3/bin -B /usr/lib/postgresql/9.4/bin
-d /home/postgres/data93/userqueries
-D /home/postgres/data94/userqueries -U pguserqueries

check for /home/postgres/data93/userqueries/base failed: No such file
or directory

Failure, exiting
postgres@roblaptop:/usr/lib/postgresql/9.4/bin$

postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade --version
pg_upgrade (PostgreSQL) 9.4.1


I have two tablespaces defined for this data -- one to hold the tables
and another for the indices.

There is no base file or directory.

Do I just create a dummy directory named base?

If there is supposed to be a directory or file named base why wasn't
it created by initdb?


Regards,
Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running pg_upgrade under Debian

2015-04-17 Thread rob stone



On Sat, 2015-04-18 at 00:25 +0200, Karsten Hilbert wrote:
 On Sat, Apr 18, 2015 at 08:09:43AM +1000, rob stone wrote:
 
  I'm trying to upgrade from 9.3 to 9.4 on my laptop and encountering this
  error:-
  
  
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade
  -b /usr/lib/postgresql/9.3/bin -B /usr/lib/postgresql/9.4/bin
  -d /home/postgres/data93/userqueries
  -D /home/postgres/data94/userqueries -U pguserqueries
 
 For what it's worth: Debian provides a
 
   pg_upgradecluster
 
 tailored to its specific setup of PostgreSQL clusters. That
 has worked well for me across several major version bumps.
 
 Karsten
 

Indeed I have that program installed in /usr/lib but the man pages state
that it cannot handle tablespaces.

So, I guess I have to move to Plan B, use pg_dump from the old version
and then import into 9.4.1.

Thanks for your help.

Robert


  check for /home/postgres/data93/userqueries/base failed: No such file
  or directory
  
  Failure, exiting
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$
  
  postgres@roblaptop:/usr/lib/postgresql/9.4/bin$ ./pg_upgrade --version
  pg_upgrade (PostgreSQL) 9.4.1
  
  
  I have two tablespaces defined for this data -- one to hold the tables
  and another for the indices.
  
  There is no base file or directory.
  
  Do I just create a dummy directory named base?
  
  If there is supposed to be a directory or file named base why wasn't
  it created by initdb?
  
  
  Regards,
  Robert
  
  
  
  -- 
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general
 
 -- 
 GPG key ID E4071346 @ eu.pool.sks-keyservers.net
 E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
 
 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Fwd: [GENERAL] partial on-delete set null constraint

2015-01-25 Thread rob stone



On Sun, 2015-01-25 at 14:09 +0100, Alban Hertroys wrote:
  The theory got me intrigued. google 
  (http://en.wikipedia.org/wiki/Codd%27s_12_rules) says:
  rule 3: systematic treatment of null values; hmmm this is a little 
  broader then support for null. I would think, that:
  1. if a sequence of update XX set fk_field=null; then delete YY depending 
  on that FK, for a particular schema definition works ...
  2. so the implementation of FK should support that too ... to be called 
  systematic, right?
  3. and the simplest way to do that for the case at hand, within an on 
  delete action, is to skip those parts of FK, that are marked as not 
  null within the referring table. That would be a requirement for rdbms 
  implementation that claims compliance with Codd rule nr.3 :)
  
  I translated Codd-relationality to English, possibly it’s named differently.
  
  Oddly enough, I can’t find any reference to Codd being responsible for this 
  rule anywhere on the internet. What I did find is that the theory I 
  referred to stems from 1970(!), but that’s the closest I got.
 
 My teachers got back to me; a while ago already, to be fair. They were as 
 surprised that this can't be found on the internet as I was, but it should be 
 possible to find it in printed form. I'd suggest a university library or a 
 good technical book store.
 
 Apparently, this particular theory is explained in E.F. Codd: The relational 
 model for database management. There are probably other books that do too.
 
 Unfortunately, I do not own a copy so I can't verify. If anyone who does own 
 a copy could confirm or even quote the relevant section, that would be great. 
 Better yet, perhaps this should find it's way (back) to the internet? I'm 
 still much surprised that Wikipedia didn't have this.
 
 Oh, and perhaps we could get a reference in the PG docs on primary keys and 
 NULLs to the theory? Do we have such things in the docs? It sounds like a 
 good idea to me, I always find it helpful to know why things are how they 
 are. But I don't write the docs so this is just one for the ideas-box.
 
 Cheers,
 
 Alban Hertroys
 --
 If you can't see the forest for the trees,
 cut the trees and you'll find there is no forest.
 
 
 


The following link 

http://www.databaseanswers.org/codds_rules.htm

sets out Ted Codd's rules according to C.J. Date.

I don't have a copy of the book so I can't verify the accuracy of what
is published at this link.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to insert into 2 tables from a view?

2014-12-23 Thread rob stone



On Tue, 2014-12-23 at 15:00 -0500, Chris Hoover wrote:
 Sorry, in my haste to get the example out, a couple of typo's where in
 the sql.
 
 
 Correct sql:
 BEGIN;
 
 
 CREATE TABLE table1 (
 
   table1_id SERIAL PRIMARY KEY,
   table1_field1 TEXT
 );
 
 
 CREATE TABLE table2 (
   table1_id INTEGER NOT NULL PRIMARY KEY REFERENCES table1(table1_id)
 ON DELETE CASCADE,
   table2_field1 TEXT
 );
 
 
 CREATE VIEW orig_table AS
 SELECT table1_id, table1_field1, table2_field1
   FROM table1
   JOIN table2 USING (table1_id);
 
 
 CREATE FUNCTION orig_table_insert(in_table1_id integer,
 in_table1_field1 text, in_table2_field1 text)
 RETURNS SETOF orig_table
 LANGUAGE plpgsql
 AS
 $BODY$
 DECLARE
 v_table1_id table1.table1_id%TYPE;
 BEGIN
 INSERT INTO table1 (
 table1_id, table1_field1
 ) VALUES (
 COALESCE(in_table1_id, NEXTVAL('table1_table1_id_seq')),
 in_table1_field1
 )   
 RETURNING table1_id
 INTO v_table1_id;
 
 
 INSERT INTO table2 (
 table1_id, table2_field1
 ) VALUES (
 v_table1_id, in_table2_field1
 );
 
 
 RETURN QUERY SELECT table1_id, table1_field1, table2_field1
FROM orig_table
   WHERE table1_id = v_table1_id;
 
 
 END;
 $BODY$;
 
 
 
 CREATE RULE orig_table_insert_rule AS
 ON INSERT
 TO orig_table
 DO INSTEAD
SELECT orig_table_insert(NEW.table1_id, NEW.table1_field1,
 NEW.table2_field1);
 
 
 COMMIT;
 
 
 Problem query:
 insert into orig_table (table1_field1, table2_field1) values
 ('field1', 'field2') returning table1_id;
 
 
 
 
 On Tue, Dec 23, 2014 at 1:46 PM, Chris Hoover revo...@gmail.com
 wrote:
 Hi,
 
 
 I am having a problem trying to figure out.  
 
 
 I have two tables behind a view and am trying to figure out
 how to create the correct insert rule so that inserting into
 the view is redirected to the two tables.  I thought I had is
 solved using a stored procedure, but doing an insert into
 view ... returning id causes the insert to fail with this
 error:
 
 
 
 ERROR:  cannot perform INSERT RETURNING on relation
 orig_view
 HINT:  You need an unconditional ON INSERT DO INSTEAD rule
 with a RETURNING clause
 
 
 We are running pg 9.0 and I think this version of PG is the
 bottleneck to getting this done.  Does anyone know how to get
 around it?  Below is a basic example demonstrating what we are
 wanting to do.
 
 
 CREATE TABLE table1 (
 
   table1_id SERIAL PRIMARY KEY,
   table1_field1 TEXT
 );
 
 
 CREATE TABLE table2 (
   table1_id INTEGER NOT NULL PRIMARY KEY REFERENCES
 table1(table1_id) ON DELETE CASCADE,
   table2_field1 TEXT
 );
 
 
 CREATE VIEW orig_table AS
 SELECT table1_id, table1_field_1, table2_field1
   FROM table1
   JOIN table2 USING (table1_id);
 
 
 CREATE FUNCTION orig_table_insert(in_table1_id integer,
 in_table1_field1 text, in_table2_field1 text)
 RETURNS SETOF orig_table
 LANGUAGE plpgsql
 AS
 $BODY$
 DECLARE
 v_table1_id table1.table1_id%TYPE
 BEGIN
 INSERT INTO table1 (
 table1_id, table1_field1
 ) VALUES (
 in_table1_id, in_table1_field1
 )   
 RETURNING table1_id
 INTO v_table1_id;
 
 
 INSERT INTO table2 (
 table1_id, table2_field1
 ) VALUES (
 v_table_id, in_table2_field1
 );
 
 
 RETURN QUERY SELECT table1_id, table1_field1,
 table2_field1
FROM orig_table
   WHERE table1_id = v_table1_id;
 
 
 END;
 $BODY$;
 
 
 
 CREATE RULE orig_table_insert_rule AS
 ON INSERT
 TO orig_table
 DO INSTEAD
SELECT orig_table_insert(NEW.table1_id,
 NEW.table1_field1, NEW.table2_field1);
 
 
 Thanks,
 
 
 Chris
 
 

Defining a column as SERIAL will automatically create a sequence. You do
not need to supply a value. So:-

INSERT INTO table1 (table1_field1) VALUES (in_table1_field1);
SELECT lastval() INTO last_row_id;

will cause last_row_id to contain the value automatically assigned to
column table1_id. Then you can:-

INSERT INTO table2 (table1_id, 

Re: [GENERAL] i386 postgres on i686 debian with multiarch

2014-11-30 Thread rob stone



On Sun, 2014-11-30 at 15:17 +0400, Damian Dimmich wrote:
 Hi,
 
 I'm trying to set up a replica for a postgres 9.3 instance running on
 an older i386 debian install, replicating to a 64 bit install.  
 
 Having enabled multi-arch** support on the 64 bit debian by running: 
 
 dpkg --add-architecture i386  
 
 and trying to install: 
 
 apt-get install postgresql-9.3:i386 
 
 gives:
 
 Depends: ssl-cert:i386 but it is not installable
 
 Any way of getting postgres i386 to depend on an arch independet
 ssl-cert ? The ssl cert's should be the same on both platforms I
 think?
 
 Has anyone gotten this to work?
 
 Thank you in advance!
 
 Damian
 
 
 ** as per:
 http://www.howtoforge.com/multiarch-how-to-use-32bit-packages-on-a-64bit-system-debian-7-wheezy




Firstly verify the contents of

/var/lib/dpkg/arch

This file usually contains

amd64
i386

when multiarch support is enabled.


You must run

apt-get update

to re-build your package lists AFTER enabling multiarch.

However, I only use 64 bit PostgreSql.

HTH.

Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Avoiding deadlocks when performing bulk update and delete operations

2014-11-27 Thread rob stone



On Thu, 2014-11-27 at 06:49 -0500, Bill Moran wrote:
 On Thu, 27 Nov 2014 15:07:49 +1100
 Sanjaya Vithanagama svithanag...@gmail.com wrote:
 
  On Wed, Nov 26, 2014 at 11:47 AM, Bill Moran wmo...@potentialtech.com
  wrote:
  
   On Wed, 26 Nov 2014 10:41:56 +1100
   Sanjaya Vithanagama svithanag...@gmail.com wrote:
   
 * How frequently do deadlocks occur?
   
We are seeing deadlocks about 2-3 times per day in the production 
server.
To reproduce the problem easily we've written a simple Java class with
multiple threads calling to the stored procedures running the above
   queries
inside a loop. This way we can easily recreate a scenario that happens 
in
the production.
  
   Don't overcomplicate your solution. Adjust your code to detect the 
   deadlock
   and replay the transaction when it happens. At 2-3 deadlocks per day, it's
   difficult to justify any other solution (as any other solution would be
   more time-consuming to implement, AND would interfere with performance).
  
  When you say replay the transaction, I believe that is to catch the
  exception inside the stored procedure? We've considered that option at one
  state but, the problem with that is we don't have enough context
  information at the stored procedure where this deadlock occurs.
 
 Why not catch it in the application calling the stored procedure?
 
 I don't understand how you could not have enough context to run the command
 you were just trying to run. Can you elaborate on what you mean by that?
 
   I've worked with a number of write-heavy applications that experienced
   deadlocks, some of them on the order of hundreds of deadlocks per day.
   In some cases, you can adjust the queries to reduce the incidence of
   deadlocks, or eliminate the possibility of deadlocks completely.  The
   situation that you describe is not one of those cases, as the planner
   can choose to lock rows in whatever order it thinks it most efficient
   and you don't have direct control over that.
  
   The performance hit you'll take 2-3 times a day when a statement has to
   be replayed due to deadlock will hardly be noticed (although a statement
   that takes 50 seconds will cause raised eyebrows if it runs 2x) but that
   will only happen 2-3 times a day, and the solution I'm proposing won't
   have any performance impact on the other 1300 queries per day that
   don't deadlock.
  
   2-3 deadlocks per day is normal operation for a heavily contented table,
   in my experience.
  
  Given that we have no control over how Postgres performs delete and update
  operations, the only other possibility seems to be to partition this table
  by id_A (so that the individual tables will never be deadlocked). But that
  seems to be a too extreme end option at this stage.
 
 That would be overcomplicating the solution, and almost certainly won't work
 anyway. If you're getting deadlocks, it's because two processes are trying
 to modify the same rows. Even if you partition, those same rows will be on
 the same partition, so you'll still deadlock.
 
 -- 
 Bill Moran
 I need your help to succeed:
 http://gamesbybill.com
 
 

I've been following this discussion for a few days.
In my experience, deadlocks are either caused by poor database design or
within the application.
As you are only talking about a single table, then the problem has to be
with the application.
You mentioned that you are using Java and so can we assume there is a
class for the table with getters and setters?
The table should also have its action factory where there ought to be a
method to handle updates and deletes?
Instead of passing int_a and int_b values via an array, why not have a
method to which you pass an int_a value, an int_b value, a boolean to
indicate either update or delete and the values for update or nulls if
deleting.
Then you can go into transaction state, lock the row for update and
throw all the exceptions you need and gracefully handle any feedback to
your users.
If you cannot lock the row just display an alert "Please try again in
five minutes" or something similar.
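
A minimal sketch of that idea, as it would be sent from the application
(table and column names are hypothetical):

BEGIN;
SELECT 1
FROM table_a
WHERE id_a = $1 AND id_b = $2
FOR UPDATE NOWAIT;   -- errors immediately if another session holds the row

-- if the lock was obtained, do the UPDATE or DELETE here
UPDATE table_a SET some_value = $3 WHERE id_a = $1 AND id_b = $2;
COMMIT;

With NOWAIT the application gets an error straight away instead of
waiting, so it can show that alert and let the user retry later.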

HTH.

Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Referencing serial col's sequence for insert

2014-07-22 Thread rob stone



On Mon, 2014-07-21 at 10:00 +0800, Anil Menon wrote:
 Hi,
 
 
 I have a question on the right/correct practice on using the serial
 col's sequence for insert.
 
 
 Best way of explanation is by an example:
 
 
 create table id01 (col1 serial, col2 varchar(10));
 
 insert into id01(col2) values ( 'data'||
 currval('id01_col1_seq')::varchar);
 
 
 while I do get what I want:
 
 select  * from id01;
  col1 | col2
 --+---
 1 | data1
 
 
 Is this guaranteed to work : I am assuming that an insert triggers the
 id01_col1_seq's nextval first hence using 
 id01_col1_seq's currval subsequently will have the correct /
 expected value (and not the previous value before the insert).
 
 
 Is my assumption correct? 
 
 
 Thanks in advance,
 
 AK
 
 
 
 

I would do the following:-

create table id01 (col1 serial NOT NULL PRIMARY KEY, col2 varchar(10));

In a try . . catch block:-

BEGIN;
INSERT INTO id01 (col2) VALUES ('data');
SELECT lastval() AS last_row_id;
COMMIT; or ROLLBACK; if you have errors.

There is also insert . . returning syntax which can make the value
assigned to the serial column available to your application. I prefer
using the select lastval() method.
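
For reference, the same insert written with RETURNING, using the example
table above:

INSERT INTO id01 (col2) VALUES ('data') RETURNING col1;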

HTH.

Robert






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Referencing serial col's sequence for insert

2014-07-22 Thread rob stone



On Tue, 2014-07-22 at 13:32 +, Albe Laurenz wrote:
 rob stone wrote:
  I have a question on the right/correct practice on using the serial
  col's sequence for insert.
 
  Best way of explanation is by an example:
 
  create table id01 (col1 serial, col2 varchar(10));
 
  insert into id01(col2) values ( 'data'||
  currval('id01_col1_seq')::varchar);
 
  while I do get what I want:
 
  select  * from id01;
   col1 | col2
  --+---
  1 | data1
 
  Is this guaranteed to work : I am assuming that an insert triggers the
  id01_col1_seq's nextval first hence using
  id01_col1_seq's currval subsequently will have the correct /
  expected value (and not the previous value before the insert).
 
  Is my assumption correct?
 
  I would do the following:-
  
  create table id01 (col1 serial NOT NULL PRIMARY KEY, col2 varchar(10));
  
  In a try . . catch block:-
  
  BEGIN;
  INSERT INTO id01 (col2) VALUES ('data');
  SELECT lastval() AS last_row_id;
  COMMIT; or ROLLBACK; if you have errors.
  
  There is also insert . . returning syntax which can make the value
  assigned to the serial column available to your application. I prefer
  using the select lastval() method.
 
 Your example seems incomplete.
 
 Also, I think that your method is vulnerable to race conditions:
 If somebody else increments the sequence between the INSERT and
 SELECT lastval() you'd get a wrong value.
 
 The same might hold for the original example.
 
 I would suggest something like that:
 
 WITH seq(i) AS (SELECT nextval('id01_col1_seq'))
INSERT INTO id01 (col1, col2) (SELECT i, 'data' || i FROM seq);
 
 Yours,
 Laurenz Albe

Hi Laurenz,

The documentation in chapter 9.16 says otherwise.
SELECT lastval() may only be called after a nextval which is issued
implicitly by the INSERT statement as a column is defined as SERIAL.
As you are in transaction state (via BEGIN;) I don't believe that
another process accessing the same sequence can upset the SELECT
lastval(). I could be wrong but I don't have the means to test this out.

Regards,
Rob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] php password authentication failed for user ...

2014-07-09 Thread rob stone



On Wed, 2014-07-09 at 14:01 +0200, basti wrote:
 Yes, there is the same problem.
 postgres logs also:
 
 Connection matched pg_hba.conf line 93
 „hostall all 127.0.0.1/32md5“
 
 Am 09.07.2014 13:14, schrieb Bill Moran:
  On Wed, 09 Jul 2014 13:04:19 +0200 basti mailingl...@unix-solution.de 
  wrote:
  
  Hello my pg_hab.conf has this entry:
 
  hostall all 127.0.0.1/32md5
 
  When I try to use PHP to connect to the Database I get
 
  postgres password authentication failed for user ...
 
  The PHP-code looks like
 
 
  global $dbhost, $dbuser, $dbpass, $dbname, $use_pgsql, $dbconn;
 
  if ($use_pgsql)
  {
  $connect = host=$dbhost user=$dbuser password=$dbpass
  dbname=$dbname;
 echo $connect;
  if (!($dbconn = pg_connect($connect)))
  {
  open_page();
  ErrSQL(Unable to connect to database.);
  }
 
  When I try to connect via
  psql -U user  -h localhost -W database
  I can connect without error.
 
  I also use SSL connections, is this the problem? or in other words is
  there a way to connect php via (postgres) ssl?
  
  Have you tried using sslmode or requiressl in the $connect string?
  
 
 


You need to use sslmode=require.

Are you encrypting the password? If not, alter the pg_hba.conf file to
trust instead of md5. If it's all running on localhost it's no big
deal.

Is the postgresql.conf file and your Apache config file in the
sites-available path compatible with SSL usage?

HTH
Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql 9.2.4 - timezone error

2014-05-29 Thread rob stone



On Thu, 2014-05-29 at 20:22 +0530, Bhushan Pathak wrote:
 Hello,
 
 
 I have postgresql 9.2.4 running on a CentOS 5 server. A JBOSS AS
 instance also runs on the same server, which connects to postgresql
 server [using defined connection pools].
 
 
 The issue I'm facing is that the JBOSS server is throwing out the
 error -
 org.postgresql.util.PSQLException: FATAL: invalid value for parameter
 TimeZone: America/New_York
 
 
 
 When I looked up on the internet, most common answer was to execute
 the following query from CLI which would expose the real problem -
 
 
 myDB=# SELECT * FROM pg_timezone_names WHERE name =
 'America/New_York';
 ERROR:  could not stat /usr/share/zoneinfo/America/New_York: Too
 many levels of symbolic links
 
 
 What does this really mean? Why is postgresql complaining for symbolic
 links? Any ideas what has gone wrong? Any thing that needs checking?
 
 
 Thanks
 
 
 Bhushan Pathak


We are still running 9.2.4.

This query works fine here:-

SELECT * FROM pg_timezone_names WHERE name = '$tz_name'::text

HTH
Robert



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

