Actually, what's the point in putting logs to ssd? SSDs are good for random
access and logs are accessed sequentially. I'd put table spaces on ssd and
leave logs on hdd
On 30 Nov 2012 04:33, "Niels Kristian Schjødt"
wrote:
> Hmm I'm getting suspicious here. Maybe my new great setup with the SSD's
Most modern SSD are much faster for fsync type operations than a
spinning disk - similar performance to spinning disk + writeback raid
controller + battery.
However as you mention, they are great at random IO too, so Niels, it
might be worth putting your postgres logs *and* data on the SSDs an
When I try your command sequence I end up with the contents of the new
pg_xlog owned by root. Postgres will not start:
PANIC: could not open file "pg_xlog/000000010000000600000080" (log file
6, segment 128): Permission denied
While this is fixable, I suspect you have managed to leave the xlog directory owned by root.
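A sketch of the relocation with the ownership step included; the paths are assumptions (a typical Debian/Ubuntu 8.4 layout), not taken from the thread, and the commands are printed for review rather than executed:

```shell
# Hypothetical sketch: move pg_xlog onto an SSD mount without leaving it
# root-owned. Paths are assumptions, not from the thread.
PGDATA=/var/lib/postgresql/8.4/main   # assumed data directory
SSD_XLOG=/ssd/pg_xlog                 # assumed SSD mount point

cat <<EOF
service postgresql stop
mv $PGDATA/pg_xlog $SSD_XLOG
ln -s $SSD_XLOG $PGDATA/pg_xlog
chown -R postgres:postgres $SSD_XLOG  # avoids the root-owned "Permission denied" PANIC
service postgresql start
EOF
```
The chown before restart is the step that was missed above: mv preserves ownership, but creating the target directory as root (or copying as root) leaves files Postgres cannot open.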
Oh, yes. I can't imagine a DB server without RAID+BBU :)
When there is no BBU, SSD can be handy.
But you know, SSDs are worse at linear read/write than HDDs.
Best regards, Vitalii Tymchyshyn
2012/11/30 Mark Kirkwood
> Most modern SSD are much faster for fsync type operations than a spinning
> disk
Actually, what's the point in putting logs to ssd? SSDs are good for random
access and logs are accessed sequentially. I'd put table spaces on ssd and
leave logs on hdd
On 30 Nov 2012 04:33, "Niels Kristian Schjødt"
wrote:
Because SSD's are considered faster. Then you have to put the most p
SSDs are not faster for sequential IO, as far as I know. That's why (with BBU or
synchronous_commit=off) I prefer to have logs on regular HDDs.
Best regards, Vitalii Tymchyshyn
2012/11/30 Willem Leenen
>
> Actually, what's the point in putting logs to ssd? SSDs are good for
> random access and logs are accessed sequentially.
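A minimal postgresql.conf fragment sketching the trade-off Vitalii describes; the setting name is real, but whether it is appropriate depends on how much recent-commit loss is tolerable on a crash:

```
# postgresql.conf (fragment)
# Commits return before the WAL is flushed to disk: a crash can lose the
# last few hundred milliseconds of commits, but causes no corruption.
synchronous_commit = off
```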
Niels Kristian Schjødt wrote:
>> You said before that you were seeing high disk wait numbers. Now
>> it is zero according to your disk utilization graph. That
>> sounds like a change to me.
> Hehe, I'm sorry if it somehow was misleading; I just wrote "a lot
> of I/O" but it was CPU I/O wait
>>> A lot of
Okay, so to understand this better before I go with that solution:
In theory what difference should it make to the performance, to have a pool in
front of the database, that all my workers and web servers connect to instead
of connecting directly? Where is the performance gain coming from in that
situation?
On 11/30/2012 07:31 AM, Niels Kristian Schjødt wrote:
In theory what difference should it make to the performance, to have
a pool in front of the database, that all my workers and web servers
connect to instead of connecting directly? Where is the performance
gain coming from in that situation?
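To answer the question above: each fresh PostgreSQL connection forks a new backend process, so the gain comes from paying that setup cost once and reusing connections. A minimal sketch (the 10 ms setup cost and the pool size are made-up numbers, and `connect` is a stand-in, not a real driver call):

```python
# Sketch of why a connection pool helps: a fresh connection is expensive
# (fork + auth), a pooled one is just a queue get/put.
import queue
import time

SETUP_COST = 0.01  # pretend each fresh connection costs 10 ms to establish

def connect():
    time.sleep(SETUP_COST)   # stand-in for fork + auth + handshake
    return object()          # stand-in for a live connection

class Pool:
    def __init__(self, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(connect())   # pay setup cost once, up front
    def acquire(self):
        return self._q.get()         # reuse: no per-request setup cost
    def release(self, conn):
        self._q.put(conn)

def run_requests(n, pool=None):
    start = time.perf_counter()
    for _ in range(n):
        conn = pool.acquire() if pool else connect()
        # ... issue queries on conn ...
        if pool:
            pool.release(conn)
    return time.perf_counter() - start

if __name__ == "__main__":
    pool = Pool(size=5)
    direct = run_requests(50)        # 50 fresh connections
    pooled = run_requests(50, pool)  # 50 reuses of 5 connections
    print(f"direct: {direct:.2f}s  pooled: {pooled:.2f}s")
```
The same reasoning explains the secondary benefit mentioned later in the thread: with fewer live backends, the server spends less memory and scheduler time per request.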
On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:
If I do a "sudo iostat -k 1"
I get a lot of output like this:
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.00         0.00         0.00          0          0
sdb               0.00         0.00
On 11/30/2012 02:37 AM, Vitalii Tymchyshyn wrote:
Actually, what's the point in putting logs to ssd? SSDs are good for
random access and logs are accessed sequentially.
While this is true, Niels' problem is that his regular HDs are getting
saturated. In that case, moving any activity off of them helps.
Hello
We are running a web application on Ubuntu 10.10 using postgres 8.4.3.
We are experiencing regular problems (each morning as the users come in)
which seem to be caused by deadlocks in the postgres database. I am seeing
messages like:
2012-11-30 10:24:36 GMT LOG: sending cancel to blocking
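The deadlocks above are in SQL, but the mechanism is generic: two transactions taking the same locks in opposite orders. A minimal Python sketch of the standard fix, acquiring locks in one global order so the cycle cannot form (in SQL the equivalent is updating rows in a consistent key order):

```python
# Two "transactions" that would deadlock with opposite lock orders complete
# safely because both acquire locks in a single global order.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, results, name):
    # Sort by id() to impose one global acquisition order on any lock pair.
    ordered = sorted((first, second), key=id)
    for lock in ordered:
        lock.acquire()
    results.append(name)         # critical section: both resources held
    for lock in ordered:
        lock.release()

results = []
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both names present: no deadlock despite opposite call orders
```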
On 30/11/2012 at 15.02, Shaun Thomas wrote:
> On 11/29/2012 08:32 PM, Niels Kristian Schjødt wrote:
>
>> If I do a "sudo iostat -k 1"
>> I get a lot of output like this:
>> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
>> sda 0.00 0.00 0.00
On 11/30/2012 08:48 AM, Niels Kristian Schjødt wrote:
I forgot to run 'sudo mount -a' I feel so embarrassed now :-( - In
other words no the drive was not mounted to the /ssd dir.
Yeah, that'll get ya.
I still see a lot of CPU I/O when doing a lot of writes, so the
question is, what's next. S
Hmm, very interesting. Currently I run at "medium" load compared to the
very high loads during the night.
This is what the CPU I/O on New Relic shows:
https://rpm.newrelic.com/public/charts/8RnSOlWjfBy
And this is what iostat shows:
Linux 3.2.0-33-generic (master-db) 11/30/2012 _x86_64_
Bob Jolliffe writes:
> We are running a web application on Ubuntu 10.10 using postgres 8.4.3.
Current release in that branch is 8.4.14. (By this time next week
it'll be 8.4.15.) You are missing a lot of bug fixes:
http://www.postgresql.org/docs/8.4/static/release.html
> Trying to interpret this
On 11/30/2012 09:44 AM, Niels Kristian Schjødt wrote:
Just a note on your iostat numbers. The first reading is actually just
a summary. You want the subsequent readings.
The pgsql_tmp dir is not changing at all it's constantly empty (a size
of 4.0K).
Good.
Filesystem     1K-blocks     Used A
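Shaun's point about iostat can be sketched as a tiny parser that discards the first device report, since that one is cumulative since boot rather than an interval reading (the sample text is made up):

```python
# iostat's first report is a since-boot summary; monitoring code should
# drop it and use only the subsequent interval reports.
SAMPLE = """\
Device: tps kB_read/s kB_wrtn/s
sda 12.00 300.00 500.00

Device: tps kB_read/s kB_wrtn/s
sda 0.50 0.00 4.00
"""

def interval_reports(text):
    reports, current = [], None
    for line in text.splitlines():
        if line.startswith("Device:"):
            current = {}             # a "Device:" header starts a new report
            reports.append(current)
        elif line.strip() and current is not None:
            dev, tps, _rd, _wr = line.split()
            current[dev] = float(tps)
    return reports[1:]               # drop the since-boot summary

print(interval_reports(SAMPLE))      # only the interval report remains
```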
On 30 November 2012 15:57, Tom Lane wrote:
> Bob Jolliffe writes:
> > We are running a web application on Ubuntu 10.10 using postgres 8.4.3.
>
> Current release in that branch is 8.4.14. (By this time next week
> it'll be 8.4.15.) You are missing a lot of bug fixes:
> http://www.postgresql.org/docs/8.4/static/release.html
On Nov 30, 2012, at 8:06 AM, Shaun Thomas wrote:
> I say that because you mentioned you're using Ubuntu 12.04, and we were
> having some problems with PG on that platform. With shared_buffers over
> 4GB, it starts doing really weird things to the memory subsystem.
> Whatever it does causes the ker
On 29/11/2012 17:33, Merlin Moncure wrote:
Since we have some idle cpu% here we can probably eliminate pgbench as
a bottleneck by messing around with the -j switch. Another thing we
want to test is the "-N" switch -- this doesn't update the tellers and
branches table which in high concurrency s
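The two experiments Merlin describes might look like the following; the client/thread counts, run length, and database name "bench" are assumptions, only the flags come from the message:

```shell
# Hypothetical pgbench invocations for the experiments described above.
BASE="pgbench -c 32 -j 8 -T 60"  # -j raises pgbench's own worker threads,
                                 # ruling out the client as the bottleneck
echo "$BASE bench"               # baseline: full TPC-B-like transaction
echo "$BASE -N bench"            # -N skips the tellers/branches updates,
                                 # the hottest rows under high concurrency
```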
On 11/30/2012 01:57 PM, Ben Chobot wrote:
Hm, this sounds like something we should look into. Before we start
digging do you have more to share, or did you leave it with the "huh,
that's weird; this seems to fix it" solution?
We're still testing. We're still on the -31 kernel. We tried the -33
On Fri, Nov 30, 2012 at 02:01:45PM -0600, Shaun Thomas wrote:
> On 11/30/2012 01:57 PM, Ben Chobot wrote:
>
> >Hm, this sounds like something we should look into. Before we start
> >digging do you have more to share, or did you leave it with the "huh,
> >that's weird; this seems to fix it" solution?
On 11/30/2012 02:38 PM, Bruce Momjian wrote:
Or Debian. Not sure what would justify use of Ubuntu as a server,
except wanting to have the exact same OS as their personal computers.
Honestly not sure why we went that direction. I'm not in the sysadmin
group, though I do work with them pretty
On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian wrote:
> Or Debian. Not sure what would justify use of Ubuntu as a server,
> except wanting to have the exact same OS as their personal computers.
We have switched from Debian to Ubuntu: there is definitely non-zero
value in the PPA hosting (althou
On 01/12/12 11:21, Daniel Farina wrote:
On Fri, Nov 30, 2012 at 12:38 PM, Bruce Momjian wrote:
Or Debian. Not sure what would justify use of Ubuntu as a server,
except wanting to have the exact same OS as their personal computers.
We have switched from Debian to Ubuntu: there is definitely non-zero value in the PPA hosting
Hmm - not strictly true as stated: one SSD will typically do 500MB/s
sequential read/write, while one HDD will be lucky to reach a third of that.
We are looking at replacing 4 to 6 disk RAID10 arrays of HDD with a
RAID1 pair of SSD, as they perform about the same for sequential work
and vastly better at random