Re: [GENERAL] Is auto-analyze as thorough as manual analyze?

2017-09-23 Thread Melvin Davidson
On Sat, Sep 23, 2017 at 6:10 PM, Jack Christensen 
wrote:

> Just had an issue where a prepared query would occasionally choose a very
> bad plan in production. The same data set in a different environment
> consistently would choose the index scan. As would be expected, running
> analyze on that table in production resolved the issue.
>
> However, before I ran the analyze I checked pg_stat_user_tables to see
> last_autoanalyze for that table. It had run today. But the problem existed
> before that. I would have expected that the auto-analyze would have
> corrected this (or prevented it entirely if run enough).
>
> So that leaves me wondering: is an auto-analyze the same as manually
> running analyze or is a manual analyze more thorough? This is running
> version 9.6.3 on Heroku.
>
> Thanks,
>
> Jack
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

> is an auto-analyze the same as manually running analyze or is a manual
> analyze more thorough?

It's not that one is "more thorough" than the other; it's that
autoanalyze only kicks in once the number of changed tuples exceeds a
threshold derived from the following settings:

autovacuum_analyze_scale_factor = 0.1   # Number of tuple inserts, updates,
or deletes prior to analyze, as a fraction of reltuples.
autovacuum_analyze_threshold = 50       # Minimum number of tuple inserts,
updates, or deletes prior to analyze.

https://www.postgresql.org/docs/9.6/static/runtime-config-autovacuum.html

Note: You can adjust the settings for individual tables.
EG:

ALTER TABLE some_schema.your_table SET (autovacuum_analyze_scale_factor = 0.05);
ALTER TABLE some_schema.your_table SET (autovacuum_analyze_threshold = 1000);
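To confirm the per-table overrides took effect, check pg_class (a
sketch; reloptions is NULL when no overrides are set):

```sql
SELECT relname, reloptions
FROM pg_class
WHERE relname = 'your_table';
```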

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Is auto-analyze as thorough as manual analyze?

2017-09-23 Thread Jack Christensen
Just had an issue where a prepared query would occasionally choose a 
very bad plan in production. The same data set in a different 
environment consistently would choose the index scan. As would be 
expected, running analyze on that table in production resolved the issue.


However, before I ran the analyze I checked pg_stat_user_tables to see 
last_autoanalyze for that table. It had run today. But the problem 
existed before that. I would have expected that the auto-analyze would 
have corrected this (or prevented it entirely if run enough).


So that leaves me wondering: is an auto-analyze the same as manually 
running analyze or is a manual analyze more thorough? This is running 
version 9.6.3 on Heroku.


Thanks,

Jack






Re: [GENERAL] Is float8 a reference type?

2017-09-23 Thread Paul A Jungwirth
On Sat, Sep 23, 2017 at 9:40 AM, Tom Lane  wrote:
> I wonder whether you're using up-to-date Postgres headers (ones
> where Float8GetDatum is a static inline function).

I'm building against 9.6.3 on both machines. I'm not doing anything
special to change the compilation options. Here is my whole Makefile:

MODULES = floatfile
EXTENSION = floatfile
EXTENSION_VERSION = 1.0.0
DATA = floatfile--$(EXTENSION_VERSION).sql

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

But what I'm really interested in is this: What are the bad things
that can happen if I do `datums = (Datum *)floats`, as long as it's
only when Datums are 8 bytes wide? Is there a platform with
pass-by-val float8s where that won't work?

Thanks,
Paul




Re: [GENERAL] Is float8 a reference type?

2017-09-23 Thread Tom Lane
Paul A Jungwirth  writes:
> Since I'm expecting ~10 million elements per array, it seems like
> skipping the conversion will have a real effect. I checked the
> assembly and do see a difference (on both Mac+clang and Linux+gcc).

I wonder whether you're using up-to-date Postgres headers (ones
where Float8GetDatum is a static inline function).  For me, both
of those platforms recognize it as a no-op --- in fact, clang
turns a loop like

for (i = 0; i < n; i++) {
datums[i] = Float8GetDatum(floats[i]);
}

into something that looks suspiciously like an inlined, loop-unrolled
memcpy().

regards, tom lane




Re: [GENERAL] Adding Cyrillic support

2017-09-23 Thread Tom Lane
John R Pierce  writes:
> On 9/23/2017 1:44 AM, Job wrote:
>> how can I add more character support to PostgreSQL 9.6?
>> I also need to store some information in Cyrillic, for Russian users.

> utf-8 should be able to store just about any character.

Or there are about four different Cyrillic-specific character sets
available already:

https://www.postgresql.org/docs/current/static/multibyte.html#CHARSET-TABLE

regards, tom lane




Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread Tom Lane
John R Pierce  writes:
> On 9/22/2017 10:29 PM, Tim Uckun wrote:
>> The app can now connect to the database but it can't create any 
>> tables, schemas or anything else unless I give it superuser privileges.

> that should have worked just fine.

Yeah.  Note that creating new schemas is not directly connected to
ownership anyway --- it's a question of whether you have the CREATE
privilege on the database.  The owner should have that privilege
by default, but it could be revoked, or granted to others.
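For example, using the role and database names from the original post,
the privilege can be granted (or verified) explicitly:

```sql
-- Grant (or restore) the CREATE privilege on the database to the app role.
GRANT CREATE ON DATABASE api_development TO api_user;

-- Returns true if the role may create schemas in that database.
SELECT has_database_privilege('api_user', 'api_development', 'CREATE');
```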

regards, tom lane




Re: [GENERAL] Is float8 a reference type?

2017-09-23 Thread Paul A Jungwirth
On Fri, Sep 22, 2017 at 8:38 PM, Tom Lane  wrote:
> "Premature optimization is the root of all evil".  Do you have good reason
> to think that it's worth your time to write unsafe/unportable code?  Do
> you know that your compiler doesn't turn Float8GetDatum into a no-op
> already?  (Mine does, on a 64-bit machine.)

Ha ha, thank you for keeping me honest! But can you explain what is
unsafe about the cast? For a little more context: I've loaded a float8
array from a file, but I need to pass a Datum array to
construct_md_array. With an 8-byte Datum, I can just pass the original
float array, right? But with smaller Datums I need to go through the
array and convert each element. (I'm not really worried about these
files being moved between machines, so I'm willing to make the on-disk
format the same as the in-memory format.)

Since I'm expecting ~10 million elements per array, it seems like
skipping the conversion will have a real effect. I checked the
assembly and do see a difference (on both Mac+clang and Linux+gcc).
Here is the Mac command line:

platter:floatfile paul$ clang -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -Wno-unused-command-line-argument -O2  -I. -I./
-I/usr/local/Cellar/postgresql@9.6/9.6.3/include/server
-I/usr/local/Cellar/postgresql@9.6/9.6.3/include/internal
-I/usr/local/opt/gettext/include -I/usr/local/opt/openldap/include
-I/usr/local/opt/openssl/include -I/usr/local/opt/readline/include
-I/usr/local/opt/tcl-tk/include -g -S -o floatfile.s floatfile.c

Here is the assembly for the cast:

  .loc  2 391 23 is_stmt 1        ## floatfile.c:391:23
  movq  -48(%rbp), %r15
Ltmp176:
  ##DEBUG_VALUE: load_floatfile:datums <- %R15

Here is the assembly for the loop (after just changing the code to `if
(FLOAT8PASSBYVAL && false)`):

  .loc  2 393 21 is_stmt 1        ## floatfile.c:393:21
  movslq  %r15d, %r13
  .loc  2 393 28 is_stmt 0        ## floatfile.c:393:28
  leaq  (,%r13,8), %rdi
  .loc  2 393 14                  ## floatfile.c:393:14
  callq  _palloc
  movq  %rax, %r12
Ltmp177:
  ##DEBUG_VALUE: load_floatfile:i <- 0
  .loc  2 394 19 is_stmt 1 discriminator 1 ## floatfile.c:394:19
  testl  %r15d, %r15d
Ltmp178:
  .loc  2 394 5 is_stmt 0 discriminator 1 ## floatfile.c:394:5
  je  LBB7_11
Ltmp179:
## BB#9:
  ##DEBUG_VALUE: load_floatfile:arrlen <- %R15D
  ##DEBUG_VALUE: load_floatfile:nulls <- [%RBP+-80]
  ##DEBUG_VALUE: load_floatfile:floats <- [%RBP+-72]
  ##DEBUG_VALUE: load_floatfile:filename <- %RBX
  .loc  2 0 5 discriminator 1     ## floatfile.c:0:5
  movq  -72(%rbp), %rbx
Ltmp180:
  ##DEBUG_VALUE: load_floatfile:floats <- %RBX
  xorl  %r14d, %r14d
Ltmp181:
  .p2align  4, 0x90
LBB7_10:                          ## =>This Inner Loop Header: Depth=1
  ##DEBUG_VALUE: load_floatfile:floats <- %RBX
  ##DEBUG_VALUE: load_floatfile:arrlen <- %R15D
  ##DEBUG_VALUE: load_floatfile:nulls <- [%RBP+-80]
  .loc  2 395 34 is_stmt 1        ## floatfile.c:395:34
  movsd  (%rbx,%r14,8), %xmm0     ## xmm0 = mem[0],zero
  .loc  2 395 19 is_stmt 0        ## floatfile.c:395:19
  callq  _Float8GetDatum
  .loc  2 395 17                  ## floatfile.c:395:17
  movq  %rax, (%r12,%r14,8)
Ltmp182:
  .loc  2 394 30 is_stmt 1 discriminator 2 ## floatfile.c:394:30
  incq  %r14
  .loc  2 394 19 is_stmt 0 discriminator 1 ## floatfile.c:394:19
  cmpq  %r13, %r14
Ltmp183:
  .loc  2 394 5 discriminator 1   ## floatfile.c:394:5
  jl  LBB7_10
Ltmp184:
LBB7_11:
  ##DEBUG_VALUE: load_floatfile:arrlen <- %R15D
  ##DEBUG_VALUE: load_floatfile:nulls <- [%RBP+-80]

I get the same results on gcc too: the palloc, the loop, and even
`call Float8GetDatum@PLT`.

I'll do some timing of each version too, but it doesn't look like a
pointless optimization. I'd still like to know what is unsafe about it
though.

Thanks!
Paul




Re: [GENERAL] shared_buffers smaller than max_wal_size

2017-09-23 Thread Tomas Vondra
Hi,

On 09/23/2017 08:18 AM, Vladimir Mihailenco wrote:
> Hi,
> 
> I wonder what is the point of setting max WAL size bigger than shared
> buffers, e.g.
> 
> shared_buffers = 512MB
> max_wal_size = 2GB
> 
> As I understand it, a checkpoint happens after 2GB of data has been
> modified (written to WAL), but shared buffers can contain at most 512MB
> of dirty pages to be flushed to disk. Is it still a win, or am I
> missing something?

Those are mostly unrelated things.

max_wal_size determines how often you'll do checkpoints. So with a lot
of writes you probably need high max_wal_size, otherwise you'll do
checkpoints very often. Choose reasonable checkpoint_timeout and set
max_wal_size based on that.

Shared buffers are mostly about caching data accessed by queries. If you
can squeeze the frequently accessed data into shared buffers (high cache
hit ratio), great.

Moreover, there's very little relation between max_wal_size and
shared_buffers, for a number of reasons:

1) You can modify the same 8kB page repeatedly - it will still be just
8kB of dirty data in shared buffers, but each update will generate a
little bit of WAL data. In an extreme case a single 8kB page might be
responsible for most of the 2GB of WAL data.

2) When changing the data page, we only really write the minimum amount
of data describing the change into WAL. So it's not 1:1.

3) When a page is evicted from shared buffers, we don't fsync it to disk
immediately. We write it out to the page cache and leave the write-back
to the OS (with some exceptions), so it's asynchronous, whereas WAL must
be flushed up to the commit record when a transaction commits.

4) Shared buffers are not just about dirty data; they're also about
caching reads. No one knows the read:write ratio, which parts of the
database will receive writes, etc.


So there's nothing inherently wrong with (shared_buffers > max_wal_size)
or (shared_buffers < max_wal_size); it depends on your workload.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread Melvin Davidson
On Sat, Sep 23, 2017 at 2:33 AM, John R Pierce  wrote:

> On 9/22/2017 10:29 PM, Tim Uckun wrote:
>
>> I am setting up a database for an app, so I create a user without
>> superuser but with login privileges.
>>
>> I then create a database and set its owner to that user like this...
>>
>> dropdb --if-exists api_development
>> dropuser --if-exists api_user
>>
>> createuser api_user -P -d
>> createdb api_development -O api_user
>>
>> The app can now connect to the database but it can't create any tables,
>> schemas or anything else unless I give it superuser privileges.
>>
>> Is there any way I can make this user a superuser for this database
>> without making it a superuser on other databases?
>>
>
>
> that should have worked just fine.
>
>
> [root@new ~]# useradd fred
> [root@new ~]# su - postgres
> $ createuser fred
> $ createdb fred -O fred
> $ logout
> [root@new ~]# su - fred
> [fred@new ~]$ psql
> psql (9.3.19)
> Type "help" for help.
>
> fred=> create schema xyzzy;
> CREATE SCHEMA
> fred=> create table xyzzy.abc (id serial, dat text);
> CREATE TABLE
> fred=> \q
>
> .
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>


Make sure you are the DB owner when you connect.

Either
psql -U api_user  -d api_development

OR
psql -d api_development

SET ROLE api_user;
SELECT current_user;

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Adding Cyrillic support

2017-09-23 Thread John R Pierce

On 9/23/2017 1:44 AM, Job wrote:

how can I add more character support to PostgreSQL 9.6?
I also need to store some information in Cyrillic, for Russian users.

Can multiple character sets coexist in the same database?



utf-8 should be able to store just about any character.
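If the existing database was created with a different encoding, it
cannot be converted in place; a sketch of creating a UTF-8 database to
hold the Cyrillic data (database name is a placeholder):

```sql
-- template0 is required when the requested encoding differs from template1's.
CREATE DATABASE app_utf8
    ENCODING 'UTF8'
    TEMPLATE template0;
```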


--
john r pierce, recycling bits in santa cruz





[GENERAL] Adding Cyrillic support

2017-09-23 Thread Job
Dear guys,

how can I add more character support to PostgreSQL 9.6?
I also need to store some information in Cyrillic, for Russian users.

Can multiple character sets coexist in the same database?

Thank you!
F





Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread John R Pierce

On 9/22/2017 10:29 PM, Tim Uckun wrote:
I am setting up a database for an app, so I create a user without
superuser but with login privileges.


I then create a database and set its owner to that user like this...

dropdb --if-exists api_development
dropuser --if-exists api_user

createuser api_user -P -d
createdb api_development -O api_user

The app can now connect to the database but it can't create any 
tables, schemas or anything else unless I give it superuser privileges.


Is there any way I can make this user a superuser for this database 
without making it a superuser on other databases?



that should have worked just fine.


[root@new ~]# useradd fred
[root@new ~]# su - postgres
$ createuser fred
$ createdb fred -O fred
$ logout
[root@new ~]# su - fred
[fred@new ~]$ psql
psql (9.3.19)
Type "help" for help.

fred=> create schema xyzzy;
CREATE SCHEMA
fred=> create table xyzzy.abc (id serial, dat text);
CREATE TABLE
fred=> \q

.




--
john r pierce, recycling bits in santa cruz





[GENERAL] shared_buffers smaller than max_wal_size

2017-09-23 Thread Vladimir Mihailenco
Hi,

I wonder what is the point of setting max WAL size bigger than shared
buffers, e.g.

shared_buffers = 512MB
max_wal_size = 2GB

As I understand it, a checkpoint happens after 2GB of data has been
modified (written to WAL), but shared buffers can contain at most 512MB
of dirty pages to be flushed to disk. Is it still a win, or am I missing
something?