On Fri, Sep 2, 2016 at 8:36 PM, dandl wrote:
> The paper is substantially in agreement with the presentation I quoted. If
> there are differences in detail, they certainly don't dominate his argument.
My point is that the paper is rather light on details of the kind that
are really important. An
> >> > I wondered if there are any figures or measurements on Postgres
> >> > performance in this ‘enough memory’ environment to support or
> >> > contest this point of view?
> >
> > I don't think that's really answerable without individual use-cases in
> > mind. Answering that question for anal
On Sat, Sep 3, 2016 at 10:45:47AM +1000, dandl wrote:
> Agreed. Stonebraker measured Shore DBMS, which is an academic
> database:
>
> http://research.cs.wisc.edu/shore/
>
> If he had measured a production-quality database that had been
> optimized like Postgres, I would take more stock of his "overhead"
> numbers.
Exactly! And that's what I'm
> > Re this talk given by Michael Stonebraker:
> >
> > http://slideshot.epfl.ch/play/suri_stonebraker
> >
> >
> >
> > He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle,
> > DB2, MS SQL Server, Postgres, given enough memory that the entire
> > database lives in cache, the server wi
On Fri, Sep 2, 2016 at 10:32 AM, Andres Freund wrote:
>
>> > I wondered if there are any figures or measurements on Postgres performance
>> > in this ‘enough memory’ environment to support or contest this point of
>> > view?
>
> I don't think that's really answerable without individual use-cases i
On 8/29/16 6:28 AM, Tom Lane wrote:
Pavel Stehule writes:
> 2016-08-29 1:59 GMT+02:00 Jim Nasby :
>> It would be nice if there was a way to pass dynamically formed records
>> around, similar to how you can pass the results of row() around. Someone
>> else has actually been asking about this at h
On Fri, Sep 2, 2016 at 2:58 PM, Amee Sankhesara - Quipment India <amee.sankhes...@quipment.nl> wrote:
> Hi,
>
>
>
> I have set up PITR in PostgreSQL. I take a base backup at a specific
> interval and also keep WAL files of 16 MB each.
>
>
>
> Now the situation is that even there is no
On Fri, Sep 2, 2016 at 2:40 PM, Patrick B wrote:
>
>
> 2016-09-02 15:36 GMT+12:00 Venkata B Nagothi :
>
>>
>> On Fri, Sep 2, 2016 at 12:48 PM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> I'll be performing a migration on my production master database server,
>>> which is running PostgreSQL 9.2 a
On Fri, Sep 2, 2016 at 10:32:46AM -0700, Andres Freund wrote:
> On 2016-09-02 11:10:35 -0600, Scott Marlowe wrote:
> > On Fri, Sep 2, 2016 at 4:49 AM, dandl wrote:
> > > Re this talk given by Michael Stonebraker:
> > >
> > > http://slideshot.epfl.ch/play/suri_stonebraker
> > >
> > >
> > >
> > > H
On Thu, Sep 1, 2016 at 8:48 PM, Patrick B wrote:
> Hi guys,
>
> I'll be performing a migration on my production master database server,
> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
> I've got some questions about it, and it would be nice if you guys could share
> your experie
Jennyfer Sanchez writes:
> I was able to install postgresql but I got an error after putting the
> command /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/ to initialize
> the database. The error that I am getting is
> FATAL: wrong number of index expressions
> STATEMENT: CREATE TRIGGER pg
On 09/01/2016 01:22 PM, Jennyfer Sanchez wrote:
Hi,
I was able to install postgresql but I got an error after putting the
command /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/ to
initialize the database. The error that I am getting is
FATAL: wrong number of index expressions
STATEMEN
>
> ...
>
> You can actually reduce the time more by pre-syncing to the new location.
> something like:
>
> rsync -va /var/lib/pgsql/ /var/lib/pgsql2/
> service postgres stop
> rsync -va /var/lib/pgsql/ /var/lib/pgsql2/
>
> The second rsync will only copy the deltas from the first, it still has to
You can reduce the time much further by pre-copying the files; then, during
the maintenance window, you only need to copy the deltas.
On Sep 1, 2016 9:43 PM, "Patrick B" wrote:
>
>
> 2016-09-02 15:36 GMT+12:00 Venkata B Nagothi :
>
>>
>> On Fri, Sep 2, 2016 at 12:48 PM, Patrick B
>> wrote:
>>
>>> H
Hi,
I was able to install postgresql but I got an error after putting the
command /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/ to initialize
the database. The error that I am getting is
FATAL: wrong number of index expressions
STATEMENT: CREATE TRIGGER pg_sync_pg_database AFTER INSER
Check out VoltDB (or ScyllaDB, which is more different) for the changes in
architecture required to achieve those performance increases.
On Fri, Sep 2, 2016 at 7:32 PM, Andres Freund wrote:
> On 2016-09-02 11:10:35 -0600, Scott Marlowe wrote:
> > On Fri, Sep 2, 2016 at 4:49 AM, dandl wrote:
> >
Hi,
I have set up PITR in PostgreSQL. I take a base backup at a specific
interval and also keep WAL files of 16 MB each.
Now the situation is that even though there is no major change in the
database, it suddenly started creating too many WAL files.
I have gathered statistics with c
Hi,
Is there any reason filters are not supported for search+bind LDAP
authentication?
There is no option to pass a filter in search+bind mode, and the
"ldapsearchattribute" parameter is even checked to prevent filter injection.
"ldapurl" parameter is defined as an RFC 4516 LDAP URL except filters
Hello,
I have been looking around in the documentation and I didn't find anything, so
I wonder if BDR supports having transactions that happen while the global lock
is held be replicated everywhere before the transaction ends.
Is there a way to achieve this?
Best
--
Hi,
My company is using PGDATA to store configuration files and the GUC
data_directory to give the path to the instance directory.
They would use it like this:
pg_ctl start -D -w
with this directory setup:
/CONFDIR => postgresql.conf pg_hba.conf pg_ident.conf
/SYSTEM => All the normal stuff i
Alexander Farber writes:
> If I'd like to always return exactly 1 row -
> why wouldn't just RETURN work?
Because RETURNS TABLE means it's RETURNS SETOF something,
which means the number of rows it produces is equal to the
number of RETURN NEXTs executed. RETURN, per se, has exactly
zero impact o
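To make that concrete, here is a minimal sketch (hypothetical function name, not from the thread) of how RETURN NEXT, not RETURN, drives the row count in a RETURNS TABLE function:

```sql
-- With RETURNS TABLE, the output columns are OUT parameters; assigning
-- them and calling RETURN NEXT emits one row. A bare RETURN only ends
-- execution. Function name is illustrative.
CREATE OR REPLACE FUNCTION one_row_demo()
RETURNS TABLE (out_uid integer, out_ip inet)
LANGUAGE plpgsql AS $$
BEGIN
    out_uid := 42;
    out_ip  := NULL;
    RETURN NEXT;   -- emits the (42, NULL) row
    RETURN;        -- optional: ends the function, emits nothing by itself
END;
$$;

SELECT * FROM one_row_demo();  -- one row: 42, NULL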
On 09/02/2016 10:35 AM, Alexander Farber wrote:
If I'd like to always return exactly 1 row -
why wouldn't just RETURN work?
(That's what I kept trying)
Because you are using RETURNS TABLE. Postgres (plpgsql) has no way of
knowing how many rows you are going to return.
On Fri, Sep 2, 2
If I'd like to always return exactly 1 row -
why wouldn't just RETURN work?
(That's what I kept trying)
On Fri, Sep 2, 2016 at 7:27 PM, Pavel Stehule
wrote:
>
>
> 2016-09-02 19:21 GMT+02:00 Alexander Farber :
>
>> why doesn't this simple test function return a row with 42, NULL values:
>>
>> CR
On 2016-09-02 11:10:35 -0600, Scott Marlowe wrote:
> On Fri, Sep 2, 2016 at 4:49 AM, dandl wrote:
> > Re this talk given by Michael Stonebraker:
> >
> > http://slideshot.epfl.ch/play/suri_stonebraker
> >
> >
> >
> > He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
> > S
2016-09-02 19:21 GMT+02:00 Alexander Farber :
> Good evening,
>
> please help me figure out why this simple test function doesn't return
> a row with 42, NULL values:
>
> CREATE OR REPLACE FUNCTION words_merge_users_2(
> IN in_users jsonb,
> IN in_ip inet
>
Good evening,
please help me figure out why this simple test function doesn't return
a row with 42, NULL values:
CREATE OR REPLACE FUNCTION words_merge_users_2(
IN in_users jsonb,
IN in_ip inet
) RETURNS TABLE (
out_uid integer,
On Fri, Sep 2, 2016 at 4:49 AM, dandl wrote:
> Re this talk given by Michael Stonebraker:
>
> http://slideshot.epfl.ch/play/suri_stonebraker
>
>
>
> He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
> SQL Server, Postgres, given enough memory that the entire database liv
>
>
> So is that really possible? Just copy the data between folders? If so,
> I'll probably choose option 2!
> Even though it is 2.5TB, I don't think the copy will take longer than 20
> minutes... and I'd still be able to perform reads...
>
> I'll do some tests to see if option 2 can be done :)
>
> Than
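As a rough sanity check on that estimate (assuming the 2.5 TB figure is 2.5 TiB and the copy is a single sustained sequential stream; numbers are illustrative):

```shell
# Back-of-envelope: throughput needed to copy 2.5 TiB in 20 minutes.
# 2.5 TiB = 2.5 * 1024 GiB; 20 minutes = 1200 seconds.
awk 'BEGIN { printf "%.2f GiB/s required\n", 2.5 * 1024 / (20 * 60) }'
# prints: 2.13 GiB/s required
```

Whether a sustained ~2 GiB/s is achievable depends on the source disks' sequential read speed, so the 20-minute figure is worth verifying on real hardware first.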
Thanks a lot Tom, it works as expected now. When reading the docs I must
have skipped to the code block without properly reading the sentence above.
Brent
On Fri, Sep 2, 2016 at 11:19 AM, Tom Lane wrote:
> Brent Douglas writes:
> > pg_zlib.c:24:1: warning: type specifier missing, defaults to '
Brent Douglas writes:
> pg_zlib.c:24:1: warning: type specifier missing, defaults to 'int'
> [-Wimplicit-int]
> PG_MODULE_MAGIC;
> ^
> 1 warning generated.
Ah, I'd thought you'd just misread the documentation, but looking closer
I realize that you're probably not understanding the significance of
On Thu, Sep 1, 2016 at 3:08 PM, Igor Neyman wrote:
> Don’t know about plans to implement clustered indexes in PostgreSQL.
>
It was discussed on the mailing list in the past.
I found an interesting thread dated from 2012 about integrating pg_reorg
(the ancestor of pg_repack) in PostgreSQL core:
Hi Tom,
Ok, I think I misunderstood the docs. I have the macro in the extension
anyway, so basically the cause is that somewhere I'm picking up non-9.5.3
libs when I'm compiling? I upgraded from 9.4.5 not too long ago but I can't
find any traces of it left on my system to be interfering.
Brent
Brent Douglas writes:
> What I have found however is that when I go to load my extensions I get
> this error:
> test=# create extension pg_zlib;
> ERROR: incompatible library "/usr/local/lib/postgresql/pg_zlib.so":
> missing magic block
> HINT: Extension libraries are required to use the PG_MOD
Hello all,
I'm not sure if this is the correct list for extensions; if not, please let
me know where this should be sent.
I have a mid-sized append-only table that is read fairly infrequently and
contains 2 columns holding on average about 0.5 kB and 2 kB of XML per
row. In an attempt to sa
> >>> In my particular situation the case I care about is when the result
> >>> of an UPDATE is two identical rows. All I really want is a DISTINCT
> >>> option.
> >>
> >> Assuming I am following correctly what you want is that the result of
> >> an UPDATE not be two identical rows.
> >
> > Corre
On 09/01/2016 05:08 PM, dandl wrote:
In my particular situation the case I care about is when the result
of an UPDATE is two identical rows. All I really want is a DISTINCT
option.
Assuming I am following correctly what you want is that the result of
an UPDATE not be two identical rows.
Corre
Jonas Tehler wrote:
> We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables looks
> something like this:
>
>
> CREATE TABLE users
> (
> ...
> email character varying(128) NOT NULL,
> ...
> CONSTRAINT users_email_key UNIQUE (email)
> )
>
> Despite this we have rows with very si
On 09/02/2016 04:32 AM, Jonas Tehler wrote:
Hi,
We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables
looks something like this:
CREATE TABLE users
(
...
email character varying(128) NOT NULL,
...
CONSTRAINT users_email_key UNIQUE (email)
)
Despite this we have rows with ver
Hi,
We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables looks something
like this:
CREATE TABLE users
(
...
email character varying(128) NOT NULL,
...
CONSTRAINT users_email_key UNIQUE (email)
)
Despite this we have rows with very similar email values. I discovered the
pro
Re this talk given by Michael Stonebraker:
http://slideshot.epfl.ch/play/suri_stonebraker
He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
SQL Server, Postgres, given enough memory that the entire database lives in
cache, the server will spend 96% of its memory cy