Re: [PERFORM] Filesystem and Disk Partitioning for New Server Setup

2016-02-24 Thread Wes Vaske (wvaske)
FYI, if your volume for pg data is the last partition, you can always add 
drives to the Dell PERC RAID group, extend the volume, then extend the 
partition and extend the filesystem.

All of this can also be done live.
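
As a rough sketch of that sequence - the device name, partition number, and
mount point below are assumptions for illustration (and growpart, from
cloud-guest-utils, is just one way to handle the partition step), not details
from this setup:

import subprocess

DEVICE = "/dev/sdb"                  # RAID virtual disk holding the pg data volume (assumed)
PARTITION = "1"                      # last partition on that disk (assumed)
MOUNTPOINT = "/var/lib/postgresql"   # where that partition is mounted (assumed)

def run(*cmd):
    # Echo each command and stop on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# After the extra drives have been added to the PERC virtual disk and the
# controller has finished expanding it: re-read the disk size, grow the
# last partition in place, then grow xfs online -- no unmount needed.
run("partprobe", DEVICE)
run("growpart", DEVICE, PARTITION)
run("xfs_growfs", MOUNTPOINT)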

Wes Vaske


From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Rick Otten
Sent: Wednesday, February 24, 2016 9:06 AM
To: Dave Stibrany
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Filesystem and Disk Partitioning for New Server Setup

An LVM gives you more options.

Without an LVM you would add a disk to the system, create a tablespace, and 
then move some of your tables over to the new disk.  Or, you'd take a full 
backup, rebuild your file system, and then restore from backup onto the newer, 
larger disk configuration.  Or you'd make softlinks to pg_log or pg_xlog or 
something to stick the extra disk in your system somehow.
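
A minimal sketch of that tablespace route, just for illustration - the
connection details, mount point, and object names below are assumptions, not
anything from this setup:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # assumed connection details
conn.autocommit = True  # CREATE TABLESPACE cannot run inside a transaction block
cur = conn.cursor()

# The new disk is assumed to be mounted at /mnt/newdisk and writable by postgres.
cur.execute("CREATE TABLESPACE bigdisk LOCATION '/mnt/newdisk/pgdata'")

# Moving a table rewrites it under an exclusive lock, so pick a quiet window.
cur.execute("ALTER TABLE big_table SET TABLESPACE bigdisk")

cur.close()
conn.close()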

You can do that with an LVM too.  However, with an LVM you can add the disk to 
the system, extend the file system, and just keep running.  Live.  No need to 
figure out which tables or files should go where.

Sometimes it is really nice to have that option.




On Wed, Feb 24, 2016 at 9:25 AM, Dave Stibrany wrote:
Thanks for the advice, Rick.

I have an 8 disk chassis, so possible extension paths down the line are adding 
a RAID1 for WALs, adding another RAID10, or creating an 8 disk RAID10. Would 
LVM make this type of addition easier?


On Wed, Feb 24, 2016 at 6:08 AM, Rick Otten wrote:

1) I'd go with xfs.  zfs might be a good alternative, but the last time I tried 
it, it was really unstable (on Linux).  It may have gotten a lot better, but xfs 
is a safe bet and well understood.

2) An LVM is just an extra couple of commands.  These days that is not a lot of 
complexity given what you gain. The main advantage is that you can extend or 
grow the file system on the fly.  Over the life of the database it is quite 
possible you'll find yourself pressed for disk space - either to drop in more 
csv files to load with the 'copy' command, to store more logs (because you need 
to turn up logging verbosity, etc.), to keep more transaction logs live on the 
system, to take a quick database dump, or simply because you collect more data 
than you expected.  It is not always convenient to change the log location, or 
to move tablespaces around to make room.  In the cloud you might provision more 
volumes and attach them to the server.  On a SAN you might attach more disk, 
and with a stand-alone server, you might stick more disks in the server.  In 
all those scenarios, being able to simply merge them into your existing volume 
can be really handy.
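
Concretely, that on-the-fly extension is only a short command sequence.  A
minimal sketch, assuming the data volume already sits on LVM with xfs on top -
the disk, volume group, logical volume, and mount point names are made up for
illustration:

import subprocess

NEW_DISK = "/dev/sdc"                          # freshly attached disk (assumed)
VOLUME_GROUP = "vg_pgdata"                     # existing volume group (assumed)
LOGICAL_VOLUME = "/dev/vg_pgdata/lv_pgdata"    # LV holding the data directory (assumed)
MOUNTPOINT = "/var/lib/postgresql"             # xfs mount to grow (assumed)

def run(*cmd):
    # Echo each command and stop on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("pvcreate", NEW_DISK)                           # label the new disk for LVM
run("vgextend", VOLUME_GROUP, NEW_DISK)             # fold it into the volume group
run("lvextend", "-l", "+100%FREE", LOGICAL_VOLUME)  # hand all the new space to the LV
run("xfs_growfs", MOUNTPOINT)                       # grow xfs while it stays mounted

The final xfs_growfs step is the same whether the extra space comes from LVM or
from growing a RAID virtual disk directly.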

3) The main advantage of partitioning a single volume (these days) is simply 
that if one partition fills up, it doesn't impact the rest of the system.  
Putting things that are likely to fill up the disk on their own partition is 
generally a good practice.   User home directories are one example.  System 
logs.  That sort of thing.  Isolating them on their own partition will improve 
the long term reliability of your database.   The main disadvantage is that 
those things get boxed into a much smaller amount of space than they would 
normally have if they could share a partition with the whole system.


On Tue, Feb 23, 2016 at 11:28 PM, dstibrany wrote:
I'm about to install a new production server and wanted some advice regarding
filesystems and disk partitioning.

The server is:
- Dell PowerEdge R430
- 1 x Intel Xeon E5-2620 2.4GHz
- 32 GB RAM
- 4 x 600GB 10k SAS
- PERC H730P RAID Controller with 2GB cache

The drives will be set up in one RAID-10 volume and I'll be installing
Ubuntu 14.04 LTS as the OS. The server will be dedicated to running
PostgreSQL.

I'm trying to decide:

1) Which filesystem to use (most people seem to suggest xfs).
2) Whether to use LVM (I'm leaning against it because it seems like it adds
additional complexity).
3) How to partition the volume. Should I just create one partition on / and
create a 16-32GB swap partition? Any reason to get fancy with additional
partitions given it's all on one volume?

I'd like to keep things simple to start, but not shoot myself in the foot at
the same time.

Thanks!

Dave



Fwd: [PERFORM] Cloud versus buying my own iron

2016-02-24 Thread Rick Otten
Having gotten used to using cloud servers over the past few years, after more
than 20 years as a server hugger before that, I have to say the cloud offers a
number of huge advantages that would make me seriously question whether there
are very many good reasons to go back to using local iron at all.  (Other than
maybe running databases on your laptop for development and testing purposes.)

Rackspace offers 'bare metal' servers if you want consistent performance.  You
don't have to pay for a managed solution; there are a lot of tiers of service.
AWS also offers solutions that are not on shared platforms.  (AWS tends to be
much more expensive and, in spite of the myriad [proprietary] industry-leading
new features, actually a little less flexible and with poorer support.)

The main advantage of cloud is the ability to be agile.  You can upsize,
downsize, add storage, move data centers, and adapt to changing business
requirements on the fly.   Even with overnight shipping and a minimal
bureaucracy, the process - selecting new hardware, getting approval to purchase
it, ordering it, unboxing it, setting it up and testing it, and then finally
getting to installing software - can take days or weeks of your time and
energy.  In the cloud, you just click a couple of buttons and then get on with
doing the stuff that really adds value to your business.

I spent the better part of a couple of decades ordering servers and disks and
extra cpu boards for big and small companies and getting them into the servers
and provisioning them.   Now that I use the cloud I just reach over with my
mouse, provision a volume, attach it to the server, and voila - I've averted a
disk space issue.   I take an image, build a new server, swing DNS, and - there
you have it - I'm now on a 16 cpu system instead of an 8 cpu system.  Hours, at
most, instead of weeks.   I can spend my time worrying about business problems
and data science.

Every 6 months to a year both Rackspace and AWS offer new classes of
servers with new CPU's and faster backplanes and better performance for the
buck.  With only a little planning, you can jump into the latest hardware
every time they do so.  If you have your own iron, you are likely to be
stuck on the same hardware for 3 or more years before you can upgrade again.

If the platform you are on suffers a catastrophic hardware failure, it usually
only takes a few minutes to bring up a new server on new hardware and be back
up and running again.

Yes, there is a premium for the flexibility and convenience.  Surprisingly
though, I think by the time you add in electricity and cooling and labor and
shipping and switches and racks and cabling, you may find that even with their
margin, their economy of scale actually works out to a lower total real cost.
(I've heard some arguments to the contrary, but I'm not sure I believe them if
the cloud infrastructure is well managed.)  Throw in the instant deep technical
support you can get from some place like Rackspace when things go wrong, and I
find few advantages to being a server hugger any more.







Re: [PERFORM] Cloud versus buying my own iron

2016-02-24 Thread Gunnar "Nick" Bluth
Am 24.02.2016 um 06:06 schrieb Craig James:
> At some point in the next year we're going to reconsider our hosting
> environment, currently consisting of several medium-sized servers (2x4
> CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1
> for WAL). We use barman to keep a hot standby and an archive.
> 
> The last time we dug into this, we were initially excited, but our
> excitement turned to disappointment when we calculated the real costs of
> hosted services, and the constraints on performance and customizability.
> 
> Due to the nature of our business, we need a system where we can install
> plug-ins to Postgres. I expect that alone will limit our choices. In
> addition to our Postgres database, we run a fairly ordinary Apache web site.
> 
> There is constant chatter in this group about buying servers vs. the
> various hosted services. Does anyone have any sort of summary comparison
> of the various solutions out there? Or is it just a matter of
> researching it myself and maybe doing some benchmarking and price
> comparisons?

For starters, did you see Josh Berkus' presentation on the topic?
  https://www.youtube.com/watch?v=WV5P2DgxPoI

I myself would probably always go the "own iron" road, but alas! that's just
the way I feel about control. And I'm kind of a Linux old hand, so managing a
(hosted root) server doesn't scare me off.

OTOH, I do see the advantages of having things like monitoring, backup,
HDD replacements etc. done for you. Which is essentially what you pay for.

In essence, there's obviously no silver bullet ;-)

Best regards,
-- 
Gunnar "Nick" Bluth
DBA ELSTER

Tel:   +49 911/991-4665
Mobil: +49 172/8853339

