Re: [PERFORM] Best design for performance

2017-03-28 Thread Riaan Stander

On 2017-03-28 07:15 PM, Claudio Freire wrote:

On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander  wrote:

CREATE TABLE [dbo].[usrUserRights]  (
  [UserId]   [dbo].[dm_Id] NOT NULL,
  [SiteId]   [dbo].[dm_Id] NOT NULL,
  [RightId]  [dbo].[dm_Id] NOT NULL,
  CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
CLUSTERED([UserId],[RightId],[SiteId])
);

Takes 23GB for data and 200MB for indexes.

Postgres table with over 700mil records:

CREATE TABLE security.user_right_site2
(
 user_id bigint NOT NULL,
 right_id bigint NOT NULL,
 site_id bigint NOT NULL
);
create index on security.user_right_site2(user_id, right_id);

Takes 35GB data and 26GB index, for a total of 61GB.

That is quite a large increase over SQL Server storage. Am I missing
something? Makes me worry about the rest of the database we still have to
convert.

Indexes are quite fat in postgres, especially if you index all
columns. To make the difference even bigger, it seems like there is
very hardcore compression going on in SQL Server, for that index to be
only 200MB. Are you sure you measured it correctly?

In any case, yes, indexes will be fatter in postgres. Their
performance shouldn't suffer considerably, though, given enough RAM.


That 200MB is for another index on that table. Because the table is
clustered on those 3 columns, SQL Server counts the clustered index as the
table storage itself.
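
For reference, sp_spaceused shows how SQL Server splits that table between
data and index pages (a sketch, using the table name from this thread):

EXEC sp_spaceused N'dbo.usrUserRights';
-- Returns rows, reserved, data, index_size and unused. With a clustered
-- primary key the leaf level counts as data, so only secondary indexes
-- show up under index_size.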






Re: [PERFORM] Best design for performance

2017-03-28 Thread Claudio Freire
On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander  wrote:
> CREATE TABLE [dbo].[usrUserRights]  (
>  [UserId]   [dbo].[dm_Id] NOT NULL,
>  [SiteId]   [dbo].[dm_Id] NOT NULL,
>  [RightId]  [dbo].[dm_Id] NOT NULL,
>  CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
> CLUSTERED([UserId],[RightId],[SiteId])
> );
>
> Takes 23GB for data and 200MB for indexes.
>
> Postgres table with over 700mil records:
>
> CREATE TABLE security.user_right_site2
> (
> user_id bigint NOT NULL,
> right_id bigint NOT NULL,
> site_id bigint NOT NULL
> );
> create index on security.user_right_site2(user_id, right_id);
>
> Takes 35GB data and 26GB index, for a total of 61GB.
>
> That is quite a large increase over SQL Server storage. Am I missing
> something? Makes me worry about the rest of the database we still have to
> convert.

Indexes are quite fat in postgres, especially if you index all
columns. To make the difference even bigger, it seems like there is
very hardcore compression going on in SQL Server, for that index to be
only 200MB. Are you sure you measured it correctly?

In any case, yes, indexes will be fatter in postgres. Their
performance shouldn't suffer considerably, though, given enough RAM.




Re: [PERFORM] Best design for performance

2017-03-28 Thread Riaan Stander

On 28 Mar 2017 4:22 AM, Claudio Freire wrote:

From: Claudio Freire [mailto:klaussfre...@gmail.com]

How did you query the table's size? You're probably failing to account for 
TOAST tables.

I'd suggest using pg_total_relation_size.

...
On Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander  wrote:

I'm using the first query from here.
https://wiki.postgresql.org/wiki/Disk_Usage

Please don't top post.

It's a surprisingly big difference. TOAST could be compressing the
array, but I wouldn't expect it to be that compressible. Do you have
any stats about the length of the site array per row?


The plan is to do the rights checking in the application. The join solution
gets used for reports to filter data and for client ad hoc queries.

Especially for reporting queries, you want the planner's stats to be
as accurate as possible, and placing a literal sites array in the
query is, in my experience, the best way to achieve that. But that is
indeed limited to reasonably small arrays, hence the need to have
both variants and to adapt the query to each case.

If you can't afford to do that change at the application level, I
would expect that the original schema without the array should be
superior. The array hides useful information from the planner, and
that *should* hurt you.

You'll have to test with a reasonably large data set, resembling a
production data set as much as possible.


I did some more testing on this. My primary concern was that not all the
data would be there in the array version, but after some extensive testing
it all seems to be there.


I've done some comparisons vs the SQL Server version too.
SQL Sever Table with over 700mil records:

CREATE TABLE [dbo].[usrUserRights]  (
 [UserId]   [dbo].[dm_Id] NOT NULL,
 [SiteId]   [dbo].[dm_Id] NOT NULL,
 [RightId]  [dbo].[dm_Id] NOT NULL,
 CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY 
CLUSTERED([UserId],[RightId],[SiteId])

);

Takes 23GB for data and 200MB for indexes.

Postgres table with over 700mil records:

CREATE TABLE security.user_right_site2
(
user_id bigint NOT NULL,
right_id bigint NOT NULL,
site_id bigint NOT NULL
);
create index on security.user_right_site2(user_id, right_id);

Takes 35GB data and 26GB index, for a total of 61GB.

That is quite a large increase over SQL Server storage. Am I missing 
something? Makes me worry about the rest of the database we still have 
to convert.


The Postgres array version ends up with only 600k records, due to aggregation:
CREATE TABLE security.user_right_site
(
user_id bigint NOT NULL,
right_id bigint NOT NULL,
sites bigint[]
);
create index on security.user_right_site(user_id, right_id);

Takes 339MB data, 25MB index, and 2240MB TOAST.
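
For completeness, one way to build the aggregated table from the flat one
is roughly this (a sketch; the ORDER BY inside array_agg is optional):

INSERT INTO security.user_right_site (user_id, right_id, sites)
SELECT user_id, right_id, array_agg(site_id ORDER BY site_id)
FROM security.user_right_site2
GROUP BY user_id, right_id;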

Regarding the array length for each of these: they currently hold at most
6500 site ids.
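
For the earlier question about per-row array lengths, a query like this
gathers those stats (a sketch; cardinality() needs 9.4+, while
array_length(sites, 1) works on older versions):

SELECT min(cardinality(sites)) AS min_sites,
       round(avg(cardinality(sites))) AS avg_sites,
       max(cardinality(sites)) AS max_sites
FROM security.user_right_site;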


Regards
Riaan


Re: [PERFORM] Best design for performance

2017-03-27 Thread Claudio Freire
> From: Claudio Freire [mailto:klaussfre...@gmail.com]
>
> How did you query the table's size? You're probably failing to account for 
> TOAST tables.
>
> I'd suggest using pg_total_relation_size.
...
On Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander  wrote:
> I'm using the first query from here.
> https://wiki.postgresql.org/wiki/Disk_Usage

Please don't top post.

It's a surprisingly big difference. TOAST could be compressing the
array, but I wouldn't expect it to be that compressible. Do you have
any stats about the length of the site array per row?

> The plan is to do the rights checking in the application. The join solution
> gets used for reports to filter data and for client ad hoc queries.

Especially for reporting queries, you want the planner's stats to be
as accurate as possible, and placing a literal sites array in the
query is, in my experience, the best way to achieve that. But that is
indeed limited to reasonably small arrays, hence the need to have
both variants and to adapt the query to each case.

If you can't afford to do that change at the application level, I
would expect that the original schema without the array should be
superior. The array hides useful information from the planner, and
that *should* hurt you.

You'll have to test with a reasonably large data set, resembling a
production data set as much as possible.




Re: [PERFORM] Best design for performance

2017-03-27 Thread Riaan Stander
I'm using the first query from here.
https://wiki.postgresql.org/wiki/Disk_Usage
It does seem to include TOAST data.

The plan is to do the rights checking in the application. The join solution
gets used for reports to filter data and for client ad hoc queries.

-----Original Message-----
From: Claudio Freire [mailto:klaussfre...@gmail.com] 
Sent: 28 March 2017 02:42 AM
To: Riaan Stander <rstan...@exa.co.za>
Cc: postgres performance list <pgsql-performance@postgresql.org>
Subject: Re: [PERFORM] Best design for performance

On Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander <rstan...@exa.co.za> wrote:
> In Postgres I was thinking of going with a design like this
>
> CREATE TABLE security.user_right_site
> (
> user_id bigint NOT NULL,
> right_id bigint NOT NULL,
> sites bigint[]
> );
> create index on security.user_right_site(user_id, right_id);
>
>
>
> This drastically cut down on the number of records in the table. It 
> also seems to make a massive change to the storage requirements.
>
> The old design requires 61GB vs 2.6GB.

How did you query the table's size? You're probably failing to account for 
TOAST tables.

I'd suggest using pg_total_relation_size.

> My one concern is regarding the limitations of the array type in Postgres.
> Is there a point at which one should not use it? Currently our largest 
> client has 6000+ sites, meaning that the array would contain that many 
> items. What would the upper feasible limit be in Postgres?

In that design, rows with a large number of sites would probably end up 
TOASTing the sites array.

That will make access to that array a tad slower, but it would probably be OK
for your use case, since you'll only ever read one such row per query. You'll
have to test to be sure.

The limit on that design is about 128M items in sites, IIRC (i.e., the maximum
size of a value is 1GB, so an array of 128M bigints is above that limit). You'll
probably have issues much earlier than that. For instance, a filter of the form
"site_id = ANY(sites)" with that many entries would probably be unusably slow.

Personally, I would go for fetching the sites array on the application side,
and using site_id = ANY(ARRAY[...]) if it is small enough, or a subselect if the
array is too big. That would let the planner be smarter, since it'll have the
literal array list at planning time, will be able to fetch accurate stats, and
can choose an optimal plan based on data skew.






Re: [PERFORM] Best design for performance

2017-03-27 Thread Claudio Freire
On Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander  wrote:
> In Postgres I was thinking of going with a design like this
>
> CREATE TABLE security.user_right_site
> (
> user_id bigint NOT NULL,
> right_id bigint NOT NULL,
> sites bigint[]
> );
> create index on security.user_right_site(user_id, right_id);
>
>
>
> This drastically cut down on the number of records in the table. It also
> seems to make a massive change to the storage requirements.
>
> The old design requires 61GB vs 2.6GB.

How did you query the table's size? You're probably failing to account
for TOAST tables.

I'd suggest using pg_total_relation_size.
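
Something like this, using the table name from this thread, breaks the
size down (pg_total_relation_size includes indexes and TOAST):

SELECT pg_size_pretty(pg_total_relation_size('security.user_right_site')) AS total,
       pg_size_pretty(pg_relation_size('security.user_right_site')) AS heap,
       pg_size_pretty(pg_indexes_size('security.user_right_site')) AS indexes;
-- total minus heap minus indexes is roughly the TOAST share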

> My one concern is regarding the limitations of the array type in Postgres.
> Is there a point at which one should not use it? Currently our largest
> client has 6000+ sites, meaning that the array would contain that many
> items. What would the upper feasible limit be in Postgres?

In that design, rows with a large number of sites would probably end
up TOASTing the sites array.

That will make access to that array a tad slower, but it would
probably be OK for your use case, since you'll only ever read one such
row per query. You'll have to test to be sure.
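
Two quick ways to gauge how much of the array data ends up TOASTed (a
sketch; reltoastrelid points at the table's TOAST relation, and
pg_column_size reports the stored, possibly compressed, width):

SELECT pg_size_pretty(pg_relation_size(reltoastrelid))
FROM pg_class
WHERE oid = 'security.user_right_site'::regclass;

SELECT pg_size_pretty(avg(pg_column_size(sites))::bigint)
FROM security.user_right_site;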

The limit on that design is about 128M items in sites, IIRC (i.e., the
maximum size of a value is 1GB, so an array of 128M bigints is above
that limit). You'll probably have issues much earlier than that. For
instance, a filter of the form "site_id = ANY(sites)" with that many
entries would probably be unusably slow.
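
(Checking that arithmetic: 1GiB divided by 8 bytes per bigint is exactly
128Mi elements, so once the array header is counted the real ceiling sits
just below 128M site ids.)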

Personally, I would go for fetching the sites array on the application
side, and using site_id = ANY(ARRAY[...]) if it is small enough, or a
subselect if the array is too big. That would let the planner be
smarter, since it'll have the literal array list at planning time, will
be able to fetch accurate stats, and can choose an optimal plan based
on data skew.
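
A sketch of those two variants (report_rows and the $n parameters are
hypothetical placeholders):

-- small rights set: inline the literal array so the planner sees it
SELECT r.*
FROM report_rows r
WHERE r.site_id = ANY (ARRAY[101, 102, 103]::bigint[]);

-- large rights set: fall back to a subselect against the array table
SELECT r.*
FROM report_rows r
WHERE r.site_id IN (SELECT unnest(sites)
                    FROM security.user_right_site
                    WHERE user_id = $1 AND right_id = $2);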

