Re: [SPAM] Re: [PERFORM] Architectural question

2016-03-24 Thread Moreno Andreo

On 23/03/2016 19:51, Jim Nasby wrote:

On 3/23/16 4:14 AM, Moreno Andreo wrote:

The main goal is to be *quick*. A doctor with a patient on the other
side of his desk does not want to wait, say, 30 seconds for a clinical
record to open.
Let me explain what the main problem is (actually there are 2 problems).
1. I'm handling health data, and sometimes they store large images (say
a hi-res image of an x-ray). When their teammates (spread all over the
city, not in the same building) ask for that bitmap (that is, 20
megabytes), surely it can't be cached (images are loaded only if
requested by the user), and searching a 35k-row, 22 GB table for the
matching image may not be that fast, even with proper indexing
(patient record number).


Why wouldn't that be fast? Unless the TOAST table for that particular 
table is pretty fragmented, 
I'm running on Debian with an ext4 file system. I'm not expecting 
fragmentation. Am I wrong?
pulling up thumbnails should be very fast. I'd expect it to be the 
cost of reading a few pages sequentially.
I'm not extracting thumbnails. I have a layout similar to an 
email client, with all rows showing data and, in one column, a clip icon 
that lets the user load the real image, not its thumbnail.


If you're mixing all your blobs together, then you might end up with a 
problem. It might be worth partitioning the blob table based on the 
size of what you're storing.
OK, I went to the documentation and read about partitioning :-) I knew about 
inheritance, but I was totally unaware of partitioning. Today is a 
good day, because I've learned something new.
You're saying it would be better to create, for example, a table for 
blobs < 1 MB, another for blobs between 1 and 5 MB, and another for blobs 
> 5 MB? And what about the master table? Should it be one of these three?

Blob data and size are unpredictable (from a 2 kB RTF to a 20 MB JPG).
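Just to check I understood, here is a minimal sketch of what I think you mean, 
using inheritance-based partitioning (table/column names and the 1 MB / 5 MB 
thresholds are made up; the master table itself stays empty):

    CREATE TABLE blobs (
        id         bigserial PRIMARY KEY,
        patient_id integer   NOT NULL,
        data       bytea     NOT NULL,
        data_size  integer   NOT NULL   -- filled in by the application
    );

    -- the children hold the actual rows, grouped by size
    CREATE TABLE blobs_small  (CHECK (data_size <  1048576)) INHERITS (blobs);
    CREATE TABLE blobs_medium (CHECK (data_size >= 1048576 AND data_size < 5242880)) INHERITS (blobs);
    CREATE TABLE blobs_large  (CHECK (data_size >= 5242880)) INHERITS (blobs);
    -- (indexes and the primary key are not inherited, so they must be created on each child)

    -- route inserts on the master to the proper child
    CREATE OR REPLACE FUNCTION blobs_route() RETURNS trigger AS $$
    BEGIN
        IF NEW.data_size < 1048576 THEN
            INSERT INTO blobs_small  VALUES (NEW.*);
        ELSIF NEW.data_size < 5242880 THEN
            INSERT INTO blobs_medium VALUES (NEW.*);
        ELSE
            INSERT INTO blobs_large  VALUES (NEW.*);
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER blobs_route_trg BEFORE INSERT ON blobs
        FOR EACH ROW EXECUTE PROCEDURE blobs_route();

Queries keep going through the master table; with constraint_exclusion enabled, 
a query that filters on data_size only touches the matching child. Is this the 
kind of layout you had in mind?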



2. When I load the patient list, their photos must be loaded as well, because
when I click on a table row, a small preview is shown (including a
small thumbnail of the patient's photo). Obviously I can't load all the
thumbnails while loading the whole patient list (the list can be up to
4,000-5,000 records and photo size is about 400-500 kB, so it would be an
enormous amount of data to download).


I would think a thumbnail would be 30-40k or less, not 500k. 
You have a point. We advised the users of that, but they don't care, or 
simply don't know what they are doing. We need to change the application 
to accept files of at most 50 kB.
It sounds like part of the problem is you should keep the thumbnails 
separate from the high-res file. But really you should probably do 
that for everything... I suspect there are parts of the UI where you want 
to display a fairly low-res version of something like an x-ray, only 
pulling the raw image if someone actually needs it.
That's what we are doing. Thumbnails are only patient portraits, while 
no other blob (clinical scans) is read until someone asks for it.


Thanks
Moreno.






Re: [SPAM] Re: [PERFORM] Architectural question

2016-03-23 Thread Jim Nasby

On 3/23/16 4:14 AM, Moreno Andreo wrote:

The main goal is to be *quick*. A doctor with a patient on the other
side of his desk does not want to wait, say, 30 seconds for a clinical
record to open.
Let me explain what the main problem is (actually there are 2 problems).
1. I'm handling health data, and sometimes they store large images (say
a hi-res image of an x-ray). When their teammates (spread all over the
city, not in the same building) ask for that bitmap (that is, 20
megabytes), surely it can't be cached (images are loaded only if
requested by the user), and searching a 35k-row, 22 GB table for the
matching image may not be that fast, even with proper indexing
(patient record number).


Why wouldn't that be fast? Unless the TOAST table for that particular 
table is pretty fragmented, pulling up thumbnails should be very fast. 
I'd expect it to be the cost of reading a few pages sequentially.
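If it helps, a quick way to see how big the TOAST side of that table actually 
is (a sketch, assuming the blob table is simply called "blobs"):

    SELECT c.relname,
           c.reltoastrelid::regclass AS toast_table,
           pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
    FROM pg_class c
    WHERE c.relname = 'blobs';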


If you're mixing all your blobs together, then you might end up with a 
problem. It might be worth partitioning the blob table based on the size 
of what you're storing.



2. When I load the patient list, their photos must be loaded as well, because
when I click on a table row, a small preview is shown (including a
small thumbnail of the patient's photo). Obviously I can't load all the
thumbnails while loading the whole patient list (the list can be up to
4,000-5,000 records and photo size is about 400-500 kB, so it would be an
enormous amount of data to download).


I would think a thumbnail would be 30-40k or less, not 500k. It sounds 
like part of the problem is you should keep the thumbnails separate from 
the high-res file. But really you should probably do that for 
everything... I suspect there are parts of the UI where you want to display 
a fairly low-res version of something like an x-ray, only pulling the raw 
image if someone actually needs it.
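A minimal sketch of that separation (all names hypothetical): keep the small 
preview next to the patient row and push the full-resolution data into its own 
table, so listing patients never touches the big values:

    CREATE TABLE patient_photo (
        patient_id integer PRIMARY KEY,
        thumbnail  bytea NOT NULL      -- ~30-40 kB, fetched with the patient list
    );

    CREATE TABLE patient_image (
        image_id   bigserial PRIMARY KEY,
        patient_id integer NOT NULL REFERENCES patient_photo (patient_id),
        full_image bytea NOT NULL      -- fetched only when the user opens it
    );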

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [PERFORM] Architectural question

2016-03-23 Thread Moreno Andreo

On 23/03/2016 13:29, Mike Sofen wrote:

-Original Message-
Thomas Kellerer Wednesday, March 23, 2016 2:51 AM

Jim Nasby wrote on 11.03.2016 at 17:37:

If the blob is in the database then you have nothing extra to do. It's handled

just like all your other data.

If it's a file in a file system then you need to:

- Have application code that knows how and where to get at the file
- Have a way to make those files available on all your webservers
- Have completely separate backup and recovery plans for those files

That's a lot of extra work. Sometimes it's necessary, but many times it's not.

Don't forget the code you need to write to properly handle transactional access
(writing, deleting) to the files

You usually also need to distribute the files over many directories.
Having millions of files in a single directory is usually not such a good idea.

In my experience you also need some cleanup job that removes orphaned files
from the file system.
Because no matter how hard you try to get updates/writes to the file system
right, at some point this fails.

Also from a security point of view, having this in the database is more robust
than in the file system.

The downside of bytea is that you can't stream them to the client. The
application always needs to read the whole blob into memory before it can be
used. This might put some memory pressure on the application server.

Thomas

This is really an excellent conversation, and highlights the never-ending 
contemplation
of blob storage.

That seems like discussing politics or religion :-)

I've had to go through this dialog in two different industries - healthcare
and now genomics, creating a new EMR (electronic medical record) system and 
storing
and manipulating huge genomic data sets.

I have, in both cases, ended up leaving the blob-type data outside of the 
database.  Even
though, as Thomas mentioned, it requires more database and app code to manage, 
it
ends up allowing for both systems to be optimized for their respective duties.
Our approach, still maintaining BLOBs in the database, is quite a hybrid: 
BLOBs are not spread among DB tables; instead we have a dedicated 
table, with appropriate indexing, where 95% of our blobs (and 99% of 
blob storage) reside. So if we need a quick dump, we can exclude the 
BLOBs table or treat it separately (i.e. the backup utility in our app 
is made of two separate steps, clinical data and blobs).
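With plain pg_dump, a sketch of such a two-step dump could look like this 
(assuming the dedicated table is simply called "blobs"):

    # step 1: clinical data, everything except the blob contents
    pg_dump -Fc --exclude-table-data='blobs' customer_db -f clinical.dump
    # step 2: the blob table on its own
    pg_dump -Fc --table='blobs' customer_db -f blobs.dump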


As I wrote in a previous post, we have our blobs encrypted, so it's 
handier to keep them in the DB rather than saving them to files (and, I 
think, quicker when the user requests any of them).

In addition, the vastly smaller database sizes result in far faster backups and 
restores,
transactional replication maintains its speed, and in general, I find the 
fault-tolerant behaviors to be excellent.

Yes, losing track of a file would be very bad, and...we're only storing things 
like xray photos
or ct scans (healthcare), or genomic processing results.  In both cases, 
usually, the results
can be recreated.  That said, I've never lost a file so haven't needed to pull 
on that lever.
In our case we have to assume that blob contents cannot be recreated. 
Patients can change family doctor... if a trial arises and a critical 
document is lost, the doctor is on his own. That's why we have a daily 
automatic backup policy on the customer's local server.

My latest model is placing large genomic data onto the AWS S3 file system, 
keeping all of
the metadata inside the database.  It's working very well so far, but we're 
still in development.

Mike










Re: [PERFORM] Architectural question

2016-03-23 Thread Rick Otten
I have another suggestion.  How about putting the images in RethinkDB?

RethinkDB is easy to set up and manage, and is scalable and easy (almost
trivial) to cluster.  Many of the filesystem disadvantages you mention
would be much more easily managed by RethinkDB.

A while back I wrote a Foreign Data Wrapper for RethinkDB.  I haven't
updated it to the latest version, but it wouldn't be hard to bring it up to
date.  (It might even work as-is.)   By leveraging the FDW, you could have
all of the awesome Relational Power and performance of PostgreSQL combined
with the scalable, easily clustered, NoSQL powers of RethinkDB, yet still
have a common interface - if you need it.



On Wed, Mar 23, 2016 at 8:29 AM, Mike Sofen  wrote:

> > -Original Message-
> > Thomas Kellerer Wednesday, March 23, 2016 2:51 AM
> >
> > Jim Nasby wrote on 11.03.2016 at 17:37:
> > > If the blob is in the database then you have nothing extra to do. It's
> handled
> > just like all your other data.
> > >
> > > If it's a file in a file system then you need to:
> > >
> > > - Have application code that knows how and where to get at the file
> > > - Have a way to make those files available on all your webservers
> > > - Have completely separate backup and recovery plans for those files
> > >
> > > That's a lot of extra work. Sometimes it's necessary, but many times
> it's not.
> >
> > Don't forget the code you need to write to properly handle transactional
> access
> > (writing, deleting) to the files
> >
> > You usually also need to distribute the files over many directories.
> > Having millions of files in a single directory is usually not such a
> good idea.
> >
> > In my experience you also need some cleanup job that removes orphaned
> files
> > from the file system.
> > Because no matter how hard you try to get updates/writes to the file
> > system right, at some point this fails.
> >
> > Also from a security point of view, having this in the database is more
> > robust than in the file system.
> >
> > The downside of bytea is that you can't stream them to the client. The
> > application always needs to read the whole blob into memory before it
> can be
> > used. This might put some memory pressure on the application server.
> >
> > Thomas
>
> This is really an excellent conversation, and highlights the never-ending
> contemplation
> of blob storage.  I've had to go through this dialog in two different
> industries - healthcare
> and now genomics, creating a new EMR (electronic medical record) system
> and storing
> and manipulating huge genomic data sets.
>
> I have, in both cases, ended up leaving the blob-type data outside of the
> database.  Even
> though, as Thomas mentioned, it requires more database and app code to
> manage, it
> ends up allowing for both systems to be optimized for their respective
> duties.
>
> In addition, the vastly smaller database sizes result in far faster
> backups and restores,
> transactional replication maintains its speed, and in general, I find the
> fault-tolerant behaviors to be excellent.
>
> Yes, losing track of a file would be very bad, and...we're only storing
> things like xray photos
> or ct scans (healthcare), or genomic processing results.  In both cases,
> usually, the results
> can be recreated.  That said, I've never lost a file so haven't needed to
> pull on that lever.
>
> My latest model is placing large genomic data onto the AWS S3 file system,
> keeping all of
> the metadata inside the database.  It's working very well so far, but
> we're still in development.
>
> Mike
>
>
>


Re: [PERFORM] Architectural question

2016-03-23 Thread Mike Sofen
> -Original Message-
> Thomas Kellerer Wednesday, March 23, 2016 2:51 AM
> 
> Jim Nasby wrote on 11.03.2016 at 17:37:
> > If the blob is in the database then you have nothing extra to do. It's 
> > handled
> just like all your other data.
> >
> > If it's a file in a file system then you need to:
> >
> > - Have application code that knows how and where to get at the file
> > - Have a way to make those files available on all your webservers
> > - Have completely separate backup and recovery plans for those files
> >
> > That's a lot of extra work. Sometimes it's necessary, but many times it's 
> > not.
> 
> Don't forget the code you need to write to properly handle transactional 
> access
> (writing, deleting) to the files
> 
> You usually also need to distribute the files over many directories.
> Having millions of files in a single directory is usually not such a good 
> idea.
> 
> In my experience you also need some cleanup job that removes orphaned files
> from the file system.
> Because no matter how hard you try to get updates/writes to the file system
> right, at some point this fails.
> 
> Also from a security point of view, having this in the database is more robust
> than in the file system.
> 
> The downside of bytea is that you can't stream them to the client. The
> application always needs to read the whole blob into memory before it can be
> used. This might put some memory pressure on the application server.
> 
> Thomas

This is really an excellent conversation, and highlights the never-ending 
contemplation
of blob storage.  I've had to go through this dialog in two different 
industries - healthcare
and now genomics, creating a new EMR (electronic medical record) system and 
storing
and manipulating huge genomic data sets.

I have, in both cases, ended up leaving the blob-type data outside of the 
database.  Even
though, as Thomas mentioned, it requires more database and app code to manage, 
it
ends up allowing for both systems to be optimized for their respective duties.

In addition, the vastly smaller database sizes result in far faster backups and 
restores, 
transactional replication maintains its speed, and in general, I find the 
fault-tolerant behaviors to be excellent.

Yes, losing track of a file would be very bad, and...we're only storing things 
like xray photos
or ct scans (healthcare), or genomic processing results.  In both cases, 
usually, the results
can be recreated.  That said, I've never lost a file so haven't needed to pull 
on that lever.

My latest model is placing large genomic data onto the AWS S3 file system, 
keeping all of
the metadata inside the database.  It's working very well so far, but we're 
still in development.

Mike





Re: [PERFORM] Architectural question

2016-03-23 Thread Moreno Andreo

On 23/03/2016 10:50, Thomas Kellerer wrote:

Jim Nasby wrote on 11.03.2016 at 17:37:

If the blob is in the database then you have nothing extra to do. It's handled 
just like all your other data.

If it's a file in a file system then you need to:

- Have application code that knows how and where to get at the file
- Have a way to make those files available on all your webservers
- Have completely separate backup and recovery plans for those files

That's a lot of extra work. Sometimes it's necessary, but many times it's not.

Don't forget the code you need to write to properly handle transactional access 
(writing, deleting) to the files

You usually also need to distribute the files over many directories.
Having millions of files in a single directory is usually not such a good idea.

In my experience you also need some cleanup job that removes orphaned files 
from the file system.
Because no matter how hard you try to get updates/writes to the file system 
right, at some point this fails.

Also from a security point of view, having this in the database is more robust 
than in the file system.

The downside of bytea is that you can't stream them to the client. The 
application always needs to read the whole blob into memory before it can be 
used. This might put some memory pressure on the application server.

Thomas





I just wrote about this in the message I sent a few minutes ago.
We keep blobs in a dedicated table in each customer database, so we can 
comply with privacy requirements, since every blob is encrypted... so no 
extra work :-)


Thanks
Moreno.-






Re: [PERFORM] Architectural question

2016-03-23 Thread Thomas Kellerer
Jim Nasby wrote on 11.03.2016 at 17:37:
> If the blob is in the database then you have nothing extra to do. It's 
> handled just like all your other data.
> 
> If it's a file in a file system then you need to:
> 
> - Have application code that knows how and where to get at the file
> - Have a way to make those files available on all your webservers
> - Have completely separate backup and recovery plans for those files
> 
> That's a lot of extra work. Sometimes it's necessary, but many times it's not.

Don't forget the code you need to write to properly handle transactional access 
(writing, deleting) to the files

You usually also need to distribute the files over many directories. 
Having millions of files in a single directory is usually not such a good idea. 

In my experience you also need some cleanup job that removes orphaned files 
from the file system. 
Because no matter how hard you try to get updates/writes to the file system 
right, at some point this fails.

Also from a security point of view, having this in the database is more robust 
than in the file system.

The downside of bytea is that you can't stream them to the client. The 
application always needs to read the whole blob into memory before it can be 
used. This might put some memory pressure on the application server. 
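(If streaming really matters, large objects do allow it. A sketch on 
PostgreSQL 9.4+, assuming a hypothetical table that stores the image as a 
large object OID instead of bytea:

    -- fetch only the first 1 MB of the stored image instead of the whole thing
    SELECT lo_get(image_oid, 0, 1048576)
    FROM patient_files
    WHERE id = 42;

The lo_* client APIs can then read the rest in further chunks.)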

Thomas






Re: [SPAM] Re: [PERFORM] Architectural question

2016-03-23 Thread Moreno Andreo

On 11/03/2016 17:37, Jim Nasby wrote:

On 2/22/16 8:40 AM, Moreno Andreo wrote:

On 18/02/2016 21:33, Jim Nasby wrote:

Depending on your needs, you could use synchronous replication as
part of that setup. You can even do that at a per-transaction level,
so maybe you use sync rep most of the time, and just turn it off when
inserting or updating BLOBS.

This sounds good, and when everything is OK we have I/O operations split
across the two servers; a small delay in synchronizing blobs should not
be a big deal, even if something bad happens (because of XLOG), right?


It all depends on what you can tolerate. You also don't have to use 
synchronous replication; normal streaming replication is async, so if 
you can stand to lose some data if one of the servers dies then you 
can do that.
I can't tolerate data loss, so synchronous replication is mandatory (I 
had a case this week of a customer asking for an old document that I 
couldn't find in the database, even though the "attach present" flag was 
true... and I had a bit of a hard time trying to convince the customer 
it was his fault... :-) )


Last thing: should blobs (or the whole database directory itself) 
go in

a different partition, to optimize performance, or in VM environment
this is not a concern anymore?


First: IMO concerns about blobs in the database are almost always
overblown.

In many places I've been, people end up saying "BLOBs are slow". So I
considered this as another point to analyze while designing the server
architecture. If you say "don't worry about it", then I won't.


It all depends. They're certainly a lot slower than handling a single 
int, but in many cases the difference just doesn't matter.
The main goal is to be *quick*. A doctor with a patient on the other 
side of his desk does not want to wait, say, 30 seconds for a clinical 
record to open.

Let me explain what the main problem is (actually there are 2 problems).
1. I'm handling health data, and sometimes they store large images (say 
a hi-res image of an x-ray). When their teammates (spread all over the 
city, not in the same building) ask for that bitmap (that is, 20 
megabytes), surely it can't be cached (images are loaded only if 
requested by the user), and searching a 35k-row, 22 GB table for the 
matching image may not be that fast, even with proper indexing 
(patient record number).
2. When I load the patient list, their photos must be loaded as well, because 
when I click on a table row, a small preview is shown (including a 
small thumbnail of the patient's photo). Obviously I can't load all the 
thumbnails while loading the whole patient list (the list can be up to 
4,000-5,000 records and photo size is about 400-500 kB, so it would be an 
enormous amount of data to download).



30GB of blobs on modern hardware really isn't a big deal, and there's
a *lot* to be said for not having to write the extra code to manage
all that by hand.

What do you mean? Extra code?


If the blob is in the database then you have nothing extra to do. It's 
handled just like all your other data.


If it's a file in a file system then you need to:

- Have application code that knows how and where to get at the file
- Have a way to make those files available on all your webservers
- Have completely separate backup and recovery plans for those files

That's a lot of extra work. Sometimes it's necessary, but many times 
it's not.
In my case I think it's not necessary, since all blobs go into a bytea 
field in a table that's just for them. It's an approach that helps us 
comply with privacy requirements, since all blobs are encrypted and can 
be accessed only by the application.


When it comes to your disk layout, the first things I'd look at 
would be:


- Move the temporary statistics directory to a RAM disk
- Move pg_xlog to its own partition

So I need another vDisk, not that big, for pg_xlog?


Yeah, but note that with virtualization that may or may not help.
I was afraid of that. With virtualization we are bound to the hardware 
lying behind us, which we can neither see nor control. Even if we create 
2 vDisks, they would probably be bound to the same host spindles, so 
having two vDisks would be completely useless.

I'm also thinking of increasing checkpoint_segments.
In the next two weeks I should have the VM deployed, so I'll see what 
I get in terms of speed and response (looking at the amount we are 
paying, I hope it will be a very FAST machine... :-D)


Thanks
Moreno.-






Re: [SPAM] Re: [PERFORM] Architectural question

2016-03-11 Thread Jim Nasby

On 2/22/16 8:40 AM, Moreno Andreo wrote:

On 18/02/2016 21:33, Jim Nasby wrote:

Depending on your needs, you could use synchronous replication as
part of that setup. You can even do that at a per-transaction level,
so maybe you use sync rep most of the time, and just turn it off when
inserting or updating BLOBS.

This sounds good, and when everything is OK we have I/O operations split
across the two servers; a small delay in synchronizing blobs should not
be a big deal, even if something bad happens (because of XLOG), right?


It all depends on what you can tolerate. You also don't have to use 
synchronous replication; normal streaming replication is async, so if 
you can stand to lose some data if one of the servers dies then you can 
do that.



Last thing: should blobs (or the whole database directory itself) go in
a different partition, to optimize performance, or in VM environment
this is not a concern anymore?


First: IMO concerns about blobs in the database are almost always
overblown.

In many places I've been, people end up saying "BLOBs are slow". So I
considered this as another point to analyze while designing the server
architecture. If you say "don't worry about it", then I won't.


It all depends. They're certainly a lot slower than handling a single 
int, but in many cases the difference just doesn't matter.



30GB of blobs on modern hardware really isn't a big deal, and there's
a *lot* to be said for not having to write the extra code to manage
all that by hand.

What do you mean? Extra code?


If the blob is in the database then you have nothing extra to do. It's 
handled just like all your other data.


If it's a file in a file system then you need to:

- Have application code that knows how and where to get at the file
- Have a way to make those files available on all your webservers
- Have completely separate backup and recovery plans for those files

That's a lot of extra work. Sometimes it's necessary, but many times 
it's not.



When it comes to your disk layout, the first things I'd look at would be:

- Move the temporary statistics directory to a RAM disk
- Move pg_xlog to its own partition

So I need another vDisk, not that big, for pg_xlog?


Yeah, but note that with virtualization that may or may not help.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [SPAM] Re: [PERFORM] Architectural question

2016-02-22 Thread Moreno Andreo

On 18/02/2016 21:33, Jim Nasby wrote:

Just before we go on, I have to say that I've been working with PostgreSQL 
for about 10 years now, but while in the past "leave everything as it is" 
worked, in the last 15 months I began to research and study how to 
improve my server's performance, so I'm quite a novice at being a 
DBA (but a novice who, when needed, reads a lot of documentation :-) ).
So, if some questions sound "strange" or "noobish" to you, that's the 
reason.



On 2/11/16 12:06 PM, Moreno Andreo wrote:

Now, the actual question is:
Having a VM that can be upgraded with a click on a panel and a reboot,
and where a server fault can only come from an OS failure, should I keep
a single-server solution (but I fear that I/O throughput will become
even more inadequate), or is it worthwhile to migrate to a 2-server
system? And, in the case of a 2-server configuration, what would you 
recommend?


Much of that depends on your disaster recovery strategy.
I'm planning to have a cron job that backs up the data (only the data) 
overnight (I was thinking of something like pg_dumpall) and takes a 
snapshot of the whole server over the weekend (if I'm not wrong, VMware 
allows live snapshots), so if something bad happens, I'll recover the 
latest snapshot and restore all databases from the latest backup.
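Something like this, as a minimal crontab sketch (user, path and time are 
placeholders):

    # nightly logical backup of all databases at 02:30
    30 2 * * *  pg_dumpall -U postgres | gzip > /backup/nightly/all_dbs_$(date +\%F).sql.gz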





Scenario 1:
Given 350 databases, I split them in two, 175 on server 1 and 175 on
server 2, with pgBouncer resolving the connections and each server
having its own workload

Scenario 2:
Server 1 -> Master, Server 2 -> Slave (Replicated with Slony or...?),
Server 1 for writes, Server 2 for reads


Personally I'd do kind of a hybrid at this point.

First, I'd split the masters across both servers, with a way to easily 
fail over if one of the servers dies.


Next, I'd get streaming replication set up so that the half with 
masters on A has replicas on B and vice-versa. That way you can 
easily recover from one server or the other failing.


Depending on your needs, you could use synchronous replication as 
part of that setup. You can even do that at a per-transaction level, 
so maybe you use sync rep most of the time, and just turn it off when 
inserting or updating BLOBS.
This sounds good, and when everything is OK we have I/O operations split 
across the two servers; a small delay in synchronizing blobs should not 
be a big deal, even if something bad happens (because of XLOG), right?





Last thing: should blobs (or the whole database directory itself) go in
a different partition, to optimize performance, or in VM environment
this is not a concern anymore?


First: IMO concerns about blobs in the database are almost always 
overblown. 
In many places I've been, people end up saying "BLOBs are slow". So I 
considered this as another point to analyze while designing the server 
architecture. If you say "don't worry about it", then I won't.


30GB of blobs on modern hardware really isn't a big deal, and there's 
a *lot* to be said for not having to write the extra code to manage 
all that by hand.

What do you mean? Extra code?



When it comes to your disk layout, the first things I'd look at would be:

- Move the temporary statistics directory to a RAM disk
- Move pg_xlog to its own partition

So I need another vDisk, not that big, for pg_xlog?

Those don't always help, but frequently they do. And when they do, it 
usually makes a big difference.


Beyond that, there might be some advantage to putting blobs on their 
own tablespace. Hard to say without trying it.
I'm thinking about it, because while most of the blobs are < 1 MB, 
there are some that reach 20, 50 and even 100 megabytes, and I'm quite 
concerned about the overall performance of the whole system (even on 
modern hardware, extracting 100 MB is not that fast...) when these 
have to be sent to whoever is requesting them...


So, my ideas are clearer now, but the first step is to decide whether 
one server is enough (my budget would be happier, because the machines 
at GCP seem very good, but quite expensive...) or whether it's better 
with two, using pgBouncer, and where to put pgBouncer... :-)
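To give an idea of what I have in mind for scenario 1, a minimal pgbouncer.ini 
sketch (host and database names are placeholders; each entry points at 
whichever server hosts that database):

    [databases]
    customer_001 = host=server1 port=5432 dbname=customer_001
    customer_176 = host=server2 port=5432 dbname=customer_176

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type   = md5
    pool_mode   = transaction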


Thanks
Moreno






Re: [PERFORM] Architectural question

2016-02-18 Thread Jim Nasby

On 2/11/16 12:06 PM, Moreno Andreo wrote:

Now, the actual question is:
Having a VM that can be upgraded with a click on a panel and a reboot,
and where a server fault can only come from an OS failure, should I keep
a single-server solution (but I fear that I/O throughput will become
even more inadequate), or is it worthwhile to migrate to a 2-server
system? And, in the case of a 2-server configuration, what would you recommend?


Much of that depends on your disaster recovery strategy.


Scenario 1:
Given 350 databases, I split them in two, 175 on server 1 and 175 on
server 2, with pgBouncer resolving the connections and each server
having its own workload

Scenario 2:
Server 1 -> Master, Server 2 -> Slave (Replicated with Slony or...?),
Server 1 for writes, Server 2 for reads


Personally I'd do kind of a hybrid at this point.

First, I'd split the masters across both servers, with a way to easily 
fail over if one of the servers dies.


Next, I'd get streaming replication set up so that the half with masters 
on A has replicas on B and vice-versa. That way you can easily recover 
from one server or the other failing.


Depending on your needs, you could use synchronous replication as part 
of that setup. You can even do that at a per-transaction level, so maybe 
you use sync rep most of the time, and just turn it off when inserting 
or updating BLOBS.
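A minimal sketch of the per-transaction part (synchronous_standby_names is 
assumed to be configured, and the blob table name is made up):

    BEGIN;
    SET LOCAL synchronous_commit = local;  -- this transaction won't wait for the standby
    UPDATE blobs SET data = $1 WHERE id = $2;  -- bind parameters supplied by the application
    COMMIT;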



Last thing: should blobs (or the whole database directory itself) go in
a different partition, to optimize performance, or in VM environment
this is not a concern anymore?


First: IMO concerns about blobs in the database are almost always 
overblown. 30GB of blobs on modern hardware really isn't a big deal, and 
there's a *lot* to be said for not having to write the extra code to 
manage all that by hand.


When it comes to your disk layout, the first things I'd look at would be:

- Move the temporary statistics directory to a RAM disk
- Move pg_xlog to its own partition

Those don't always help, but frequently they do. And when they do, it 
usually makes a big difference.
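Roughly, the two changes look like this (mount points and paths are 
placeholders; the server has to be stopped before moving pg_xlog):

    # /etc/fstab -- small tmpfs for the statistics temp files
    tmpfs  /var/run/pg_stats_tmp  tmpfs  size=64M,uid=postgres,gid=postgres  0 0

    # postgresql.conf
    stats_temp_directory = '/var/run/pg_stats_tmp'

    # move the WAL to its own partition and leave a symlink behind
    mv /var/lib/postgresql/9.5/main/pg_xlog /xlogdisk/pg_xlog
    ln -s /xlogdisk/pg_xlog /var/lib/postgresql/9.5/main/pg_xlog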


Beyond that, there might be some advantage to putting blobs on their own 
tablespace. Hard to say without trying it.
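A sketch of that (path and table name are hypothetical; note that ALTER TABLE 
... SET TABLESPACE rewrites the table, so it needs a maintenance window):

    CREATE TABLESPACE blobspace LOCATION '/mnt/blobdisk/pgdata';
    ALTER TABLE blobs SET TABLESPACE blobspace;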

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

