On 07/05/2017 08:31 AM, Hans Schou wrote:
2017-07-05 15:41 GMT+02:00 Adrian Klaver:
> [scheme[<+>dsn]]://[[username[:[password]]@][host][:port][/[dbname][/[[table[/[column[,column...]*]]]|sql]]]
>
> The thing is that in a quick search on this I did not find a…
2017-07-05 15:15 GMT+02:00 Albe Laurenz:
> Unless I misunderstand, this has been in PostgreSQL since 9.2: …
Sorry! I did not read the *new* manual.
(OK, 9.2 is not that new.)
It is even mentioned in the man page.
Then I have a new proposal: write a note about it in…
On 07/05/2017 06:15 AM, Albe Laurenz wrote:
Hans Schou wrote:
> The dburl (or dburi) has become commonly used by many systems connecting to
> a database.
> The feature is that one can pass all parameters in a single string, which
> follows a pattern similar to an http URI.
> Especially when using psql in a script, having the credentials in one…
## Hans Schou (hans.sc...@gmail.com):
> Example of usage:
> psql pgsql://joe:p4zzw...@example.org:2345/dbname
Make the scheme "postgresql" and you're here:
https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
"32.1.1.2. Connection URIs".
Regards,
Christoph
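For reference, the example from the quote rewritten in the documented scheme (the truncated password is left as-is), plus the equivalent key/value form, which prompts for the password or reads it from ~/.pgpass:

  psql postgresql://joe:p4zzw...@example.org:2345/dbname
  psql "host=example.org port=2345 user=joe dbname=dbname"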
Hi
The dburl (or dburi) has become commonly used by many systems connecting to
a database. The feature is that one can pass all parameters in a single
string, which follows a pattern similar to an http URI.
Especially when using psql in a script, having the credentials in one
string is convenient.
The syntax…
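For comparison, the connection URI synopsis that the libpq documentation settled on (see the link Christoph gives above) is roughly:

  postgresql://[user[:password]@][host][:port][/dbname][?param1=value1&...]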
Hi all;
I had a pleasant surprise today when, while demonstrating a previous
misfeature in PostgreSQL, it behaved unexpectedly. On further investigation,
there is a really interesting syntax, which I had not known about, that is
very helpful for some things.
Consider the following:
CREATE TABLE keyvaltest (…
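A minimal sketch of this kind of demonstration (the keyvaltest columns here are assumed): selecting the bare table name returns the whole row as a single composite value.

  CREATE TABLE keyvaltest (key text, val text);  -- column list assumed
  INSERT INTO keyvaltest VALUES ('a', '1');
  SELECT keyvaltest FROM keyvaltest;  -- returns one composite value: (a,1)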
On 08/01/2014 06:28 PM, Vik Fearing wrote:
So with all this in mind, is there any reason why we can't or shouldn't
allow:
CREATE FUNCTION testfunction(test) RETURNS int LANGUAGE sql AS $$ SELECT 1; $$;
SELECT testfunction FROM test;
That would allow first-class calculated columns.
I…
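For context, a sketch of what already works today with these same objects (the definition of test is assumed):

  CREATE TABLE test (id int);
  CREATE FUNCTION testfunction(test) RETURNS int LANGUAGE sql AS $$ SELECT 1; $$;
  SELECT testfunction(test) FROM test;  -- functional notation: works today
  SELECT test.testfunction FROM test;   -- attribute notation: also works today
  -- the proposal is to additionally allow the bare, unqualified form:
  -- SELECT testfunction FROM test;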
Tom Lane wrote:
Nils Gösche <car...@cartan.de> writes:
> I was quite surprised to find that this wasn't possible. Is there any
> good reason why not?
It's contrary to the SQL standard, is why not. And it's not just a matter
of being outside the spec, as inheritance is; this is…
On Wednesday 18 April 2012 at 00:06 +0200, Nils Gösche wrote:
Bartosz Dmytrak wrote:
The reason I like this particular way of modeling the data is that I have a
guarantee that there won't be an entry in both derived tables at the same
time for the same row in the base table; also, I can…
Vincent Veyron wrote:
> use a trigger on each of the derived tables that cancels any insert if
> the same id already exists in the other table?
Yes, that would work.
You don't say how your data gets inserted, but considering how
complicated your preferred option looks, I have to ask why you…
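A minimal sketch of that trigger approach, using the thread's tblDerived1/tblDerived2 naming and assuming both tables exist with a shared id column:

  CREATE FUNCTION tblDerived1_disjoint() RETURNS trigger AS $$
  BEGIN
      IF EXISTS (SELECT 1 FROM tblDerived2 WHERE id = NEW.id) THEN
          RAISE EXCEPTION 'id % already exists in tblDerived2', NEW.id;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER tblDerived1_disjoint_trg
      BEFORE INSERT OR UPDATE ON tblDerived1
      FOR EACH ROW EXECUTE PROCEDURE tblDerived1_disjoint();
  -- plus the mirror-image trigger on tblDerived2; note this is not safe
  -- against concurrent inserts without additional locking.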
Hi,
according to DB theory:
1NF: Table faithfully represents a relation and has no repeating groups.
2NF: No non-prime attribute in the table is functionally dependent on a proper
subset of any candidate key.
source: http://en.wikipedia.org/wiki/Database_normalization#Normal_forms
so these…
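An illustrative sketch of the 1NF point (not from the thread):

  -- Repeating group (violates 1NF):
  CREATE TABLE orders_bad (
      id    integer PRIMARY KEY,
      item1 text,
      item2 text,
      item3 text
  );
  -- Normalized:
  CREATE TABLE orders (id integer PRIMARY KEY);
  CREATE TABLE order_items (
      order_id integer REFERENCES orders(id),
      item     text
  );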
Hi!
I have a little feature proposal. Let me try to explain the motivation
behind it.
Suppose our application has two types of objects, looking somewhat like
this:
abstract class Base
{
    public int Id;
    public int SomeData;
}
class Derived1 : Base
{
    public int Data1;
}
class…
Hi,
how about inheritance in Postgres?
CREATE TABLE tblBase
(
    id serial NOT NULL, -- serial type is my assumption.
    SomeData integer,
    CONSTRAINT tblBase_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);
CREATE TABLE tblDerived1
(
    -- Inherited from table tblBase: id integer NOT NULL DEFAULT…
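A sketch of how a pgAdmin-style definition like that typically continues, assuming INHERITS was used (as the generated comment suggests):

  CREATE TABLE tblDerived1
  (
      Data1 integer,
      CONSTRAINT tblDerived1_pkey PRIMARY KEY (id)
  )
  INHERITS (tblBase)
  WITH (
      OIDS=FALSE
  );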
Bartosz Dmytrak wrote:
> how about inheritance in Postgres?
I know about Postgres' inheritance feature, but would prefer a more standard
relational solution.
> With this approach all IDs will use the same sequence, so there will not be
> duplicated PKs in inherited tables.
In my case, the…
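A minimal sketch of such a plain-relational mapping (names assumed from the class example above):

  CREATE TABLE base (
      id       serial PRIMARY KEY,
      somedata integer
  );
  CREATE TABLE derived1 (
      id    integer PRIMARY KEY REFERENCES base(id),
      data1 integer
  );
  CREATE TABLE derived2 (
      id    integer PRIMARY KEY REFERENCES base(id),
      data2 integer
  );
  -- Nothing here yet stops the same base id from appearing in both derived
  -- tables; that missing guarantee is exactly what the thread is about.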
Nils Gösche <car...@cartan.de> writes:
> Bartosz Dmytrak wrote:
>> how about inheritance in Postgres?
> I know about Postgres' inheritance feature, but would prefer a more standard
> relational solution.
[ blink... ] That seems like a pretty silly argument for proposing
something that…
On 26 Aug, 08:06, wstrzalka <wstrza...@gmail.com> wrote:
On 26 Aug, 01:28, pie...@hogranch.com (John R Pierce) wrote:
On 08/25/10 11:47 AM, Wojciech Strzałka wrote:
> The data set is 9 million rows - about 250 columns.
Having 250 columns in a single table sets off the 'normalization' alarm
in my head.
Excerpts from wstrzalka's message of Thu Aug 26 03:18:36 -0400 2010:
> So after turning off fsync and synchronous_commit (which I can afford as
> I'm populating the database from scratch),
> I'm stuck at 43 minutes for the mentioned table. There is no PK, no
> constraints, indexes, ... - nothing except for…
No I don't, but definitely will try tomorrow.
On Wed, 2010-08-25 at 00:15 -0700, wstrzalka wrote:
> I'm currently playing with a very large data import using COPY from
> file.
> As this can be an extremely long operation (hours in my case), a nice
> feature would be some option to show operation progress - how many
> rows were already imported.
A…
On Wed, Aug 25, 2010 at 08:47:10PM +0200, Wojciech Strzałka wrote:
> The data set is 9 million rows - about 250 columns.
250 columns sounds very strange to me as well! I start getting
worried when I hit a tenth of that.
> CPU utilization - 1.2% (half of one core);
> iostat shows writes ~6MB/s,…
On Wed, Aug 25, 2010 at 8:48 PM, Craig Ringer
<cr...@postnewspapers.com.au> wrote:
> Turning off synchronous_commit also has effects on data safety: it permits
> the loss of transactions committed within the commit-delay interval if the
> server crashes. If you turn it off, you need to decide how much recent work…
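Unlike fsync, synchronous_commit can also be changed per session, which confines the risk to that session's transactions; a sketch with assumed table and file names:

  SET synchronous_commit = off;  -- session-local; a crash can lose the last
                                 -- few commits but cannot corrupt data
  COPY my_big_table FROM '/path/to/data.csv' WITH CSV;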
Heyho!
On Wednesday 25 August 2010 09.15:33 wstrzalka wrote:
> I'm currently playing with a very large data import using COPY from
> file. […]
I'm currently playing with a very large data import using COPY from
file.
As this can be an extremely long operation (hours in my case), a nice
feature would be some option to show operation progress - how many
rows were already imported.
Or maybe there is some way to do it? As long as Postgres has…
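One common workaround (a sketch, not from this thread): watch the table's physical size from a second session while the COPY runs; it grows even before the COPY commits. Much later, PostgreSQL 14 added pg_stat_progress_copy, which reports progress directly:

  -- from a second session (table name assumed):
  SELECT pg_size_pretty(pg_relation_size('my_big_table'));

  -- PostgreSQL 14 and later:
  SELECT relid::regclass, tuples_processed, bytes_processed
  FROM pg_stat_progress_copy;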
On Wed, 2010-08-25 at 17:06 +0200, Denis BUCHER wrote:
> On 25.08.2010 09:15, wstrzalka wrote:
>> I'm currently playing with a very large data import using COPY from
>> file. […]
On Wed, 2010-08-25 at 12:20 -0400, Eric Comeau wrote:
>> Without even changing a single line of data or SQL code!
>> Incredible, isn't it?
> Curious - what postgresql.conf settings did you change to improve it?
The most obvious would be to turn fsync off, synchronous_commit off,
increase…
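A sketch of the kind of postgresql.conf settings being alluded to; the values are illustrative, and fsync = off can corrupt the whole cluster on a crash, so it is only for disposable loads:

  fsync = off                  # dangerous: a crash can corrupt the cluster
  synchronous_commit = off
  checkpoint_segments = 64     # pre-9.5 name; max_wal_size in 9.5+
  maintenance_work_mem = 512MB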
Yeah - I'll try to optimize, as I had a plan to write to
pgsql-performance for rescue anyway.
I don't know the exact hardware specification yet - known facts at the
moment are:
Sun Turgo?? (SPARC) with 32 cores
17GB RAM (1GB for shared buffers)
hdd - ?
OS - Solaris 10 - the system is running…