transaction. No passing connections by hand anywhere, everything should be
nicely thread-bound. Still, if not here, where could it go wrong?
I have seen two cases in my career where there was an evil box on the
network that corrupted the traffic.
The first was a very long time ago (in the
We're looking to deploy a bunch of new machines.
Our DB is fairly small and write-intensive. Most of the disk
traffic is PG WAL. Historically we've avoided
RAID controllers for various reasons, but this new deployment will be
done with them (also for various reasons ;)
We like to use
On 6/18/2011 1:22 AM, Greg Smith wrote:
That said, the card itself looks like plain old simple LSI MegaRAID.
Get the battery backup unit
Thanks. Dell's web site drives me insane, and it appears I can save 20%
or more by going white-box.
One thing I don't understand is why is the BBU option
On 11/2/2011 11:01 AM, Benjamin Smith wrote:
2) Intel X25E - good reputation, significantly slower than the Vertex3. We're
buying some to reduce downtime.
If you don't mind spending money, look at the new 710 Series from Intel.
Not SLC like the X25E, but still specified with a very high
On 11/4/2011 8:26 AM, Yeb Havinga wrote:
First, if you're interested in doing a test like this yourself: I'm
testing on Ubuntu 11.10, but even though this is a brand new
distribution, the SMART database was a few months old.
Running 'update-smart-drivedb' had the effect that the names of the values
On 1/23/2012 5:24 PM, David Johnston wrote:
Immediately upon starting the server I get an incomplete startup packet log message.
Just prior there is an autovacuum launcher started message.
We've found that this message is printed in the log if a client makes a
TCP connection to the PG
Long thread - figured may as well toss in some data:
We use CentOS 5 and 6 and install PG from the yum repository detailed on
the postgresql.org web site.
We've found that the PG shipped as part of the OS can never be trusted
for production use, so we don't care what version ships with the
On 3/3/2012 7:05 PM, Tom Lane wrote:
[ raised eyebrow... ] As the person responsible for the packaging
you're dissing, I'd be interested to know exactly why you feel that
the Red Hat/CentOS PG packages can never be trusted. Certainly they
tend to be from older release branches as a result of
We've used Raptors in production for a few years.
They've been about as reliable as you'd expect for hard
drives, with the additional excitement of a firmware bug
early on that led to data loss and considerable expense.
New machines deployed since November last year have 710 SSDs.
No problems
fwiw we run pg_dump locally, compress the resulting file and scp or
rsync it to the remote server.
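For reference, that dump-compress-ship flow can be sketched roughly as
below (this uses pg_dump; the database name, paths, and remote host are
placeholders, not details from the original post):

```shell
# Dump, compress, and ship a backup; all names here are illustrative.
DB=mydb
OUT="/var/backups/${DB}-$(date +%Y%m%d).sql.gz"

# Dump locally and compress in one pipeline
pg_dump "$DB" | gzip > "$OUT"

# Copy the compressed dump to the remote backup host
rsync -av "$OUT" backup@remote.example.com:/srv/pgbackups/
```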
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On 8/21/2012 7:10 AM, Oliver Kohll - Mailing Lists wrote:
This is a general 'cloud or dedicated' question, I won't go into it
but I believe cloud proponents cite management ease, scalability etc.
I'm sure there's a place for every type of hosting. However I would be
interested in hearing
On 8/21/2012 2:18 AM, Vincent Veyron wrote:
I wonder : is there a reason why you have to go through the complexity
of such a setup, rather than simply use bare metal and get good
performance with simplicity?
In general I agree -- it is much (much!) cheaper to buy tin and deploy
yourself vs any
On 9/1/2012 6:42 AM, Edson Richter wrote:
Nevertheless, when we present our product to customers, they won't be
satisfied until we guarantee we can run the same product on the major paid
offerings (Oracle, MS SQL, and so on).
I think this is a business problem not a technology problem. Forget
trying
I dunno, perhaps I don't get out the office enough, but I just don't
hear about MySQL any more.
I think this thread is tilting at windmills.
A few years ago about 1 in 2 contracts we had was with a start-up using
MySQL.
The other half were using either PG or Oracle or SQLServer. The years
On 5/10/2013 9:19 AM, Matt Brock wrote:
After googling this for a while, it seems that High Endurance MLC is only
starting to rival SLC for endurance and write performance in the very latest,
cutting-edge hardware. In general, though, it seems it would be fair to say
that SLCs are still a
On 5/10/2013 10:21 AM, Merlin Moncure wrote:
As it turns out, the list of flash drives that are suitable for database
use is surprisingly small. The s3700 I noted upthread seems to be
specifically built with databases in mind and is likely the best
choice for new deployments. The older Intel 320 is
On 5/10/2013 11:20 AM, Merlin Moncure wrote:
I find the s3700 to be superior to the 710 in just about every way
(although you're right -- it is suitable for database use). merlin
The s3700 series replaces the 710 so it should be superior :)
On 5/10/2013 11:23 AM, Lonni J Friedman wrote:
There's also the 520 series, which has better performance than the 320
series (which is EOL now).
I wouldn't use the 520 series for production database storage -- it has
the Sandforce controller and apparently no power failure protection.
On 5/11/2013 3:10 AM, Matt Brock wrote:
On 10 May 2013, at 16:25, David Boreham david_l...@boreham.org wrote:
I've never looked at SLC drives in the past few years and don't know anyone who
uses them these days.
Because SLCs are still more expensive? Because MLCs are now almost as good
btw we deploy on CentOS6. The only things we change from the default are:
1. add relatime,discard options to the mount (check whether the most
recent CentOS6 does this itself -- it didn't back when we first deployed
on 6.0).
2. Disable swap. This isn't strictly an SSD tweak, since we have
On 5/12/2013 7:20 PM, John R Pierce wrote:
the real SLC drives end up OEM branded in large SAN systems, such as
those sold by Netapp and EMC, and are made by companies like STEC that
have zero presence in the 'whitebox' resale markets like Newegg.
Agreed. I don't go near the likes of Simple,
On 5/19/2013 7:19 PM, Toby Corkindale wrote:
On 13/05/13 11:23, David Boreham wrote:
btw we deploy on CentOS6. The only things we change from the default
are:
1. add relatime,discard options to the mount (check whether the most
recent CentOS6 does this itself -- it didn't back when we first
I have a (large) corrupted 8.3.7 database that I'd like to fix.
It has this problem :
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: missing chunk number 2 for toast
value 10114 in pg_toast_16426
I've seen this particular syndrome before and fixed it by deleting
Howard Cole wrote:
Thanks Francisco - I currently have MinPoolSize set to 3 (I have a
lot of databases on this cluster), I think this copes 90% of the time
but I shall set it to 10 and see what happens.
Sampling the number of connections on my database I decided that the
Scott Marlowe wrote:
I would recommend using a traffic shaping router (like the one built
into the linux kernel and controlled by tc / iptables) to simulate a
long distance connection and testing this yourself to see which
replication engine will work best for you.
Netem :
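A minimal netem invocation looks something like this (the interface name
and delay figures are illustrative, not from the thread):

```shell
# Add 100ms delay with 20ms jitter to outbound traffic on eth0
tc qdisc add dev eth0 root netem delay 100ms 20ms

# Verify the active queueing discipline
tc qdisc show dev eth0

# Remove the emulation when finished
tc qdisc del dev eth0 root
```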
Lincoln Yeoh wrote:
It seems you currently can only control outbound traffic from an
interface, so you'd have to set stuff on both interfaces to shape
upstream and downstream - this is not so convenient in some network
topologies.
This is more a property of the universe than the software ;)
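One common Linux workaround for the outbound-only limitation is the ifb
pseudo-device: redirect ingress traffic to it and shape there. A sketch,
assuming eth0 and a stock kernel with the ifb and netem modules:

```shell
# Shape *inbound* traffic by redirecting ingress to a virtual ifb device
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Attach an ingress qdisc to the real interface and mirror all
# incoming packets to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Apply netem on ifb0, which effectively delays inbound traffic
tc qdisc add dev ifb0 root netem delay 100ms
```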
Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
Could some of you please share some info on such scenarios- where
you are supporting/designing/developing databases that run into at
least a few hundred GBs of data (I know, that is small by today's
standards)?
At NuevaSync we use PG in
On 11/4/2010 9:00 AM, Michael Gould wrote:
What and why should I look at certain distributions? It appears from
what I read, Ubuntu is a good desktop but not a server.
We use CentOS. I don't know of a good reason to look at other
distributions for a server today.
You may or may not see
On 11/9/2010 10:27 AM, Graham Leggett wrote:
This is covered by the GPL license. Once you have released code under
the GPL, all derivative code - ie upgrades - have to also be released
in source form, under the GPL license.
Sorry but this is 100% not true. It may be true for a 3rd party
In addition to the license a product is currently available under,
you need to also consider who owns its copyright; who owns
its test suite (which may not be open source at all); who
employs all the people who understand the code and who owns
the trademarks that identify the product.
Red Hat
On 11/9/2010 10:45 AM, Andy wrote:
As a condition of getting European Commission's approval of its acquisition of
Sun/MySQL, Oracle had to agree to continue the GPL release.
In case anyone is interested in what specifically Oracle agreed to do,
this is the text
from the decision (they agreed
On 11/9/2010 11:10 AM, Sandeep Srinivasa wrote:
It was about the technical discussion on Highscalability - I have been
trying to wrap my head around the concept of multiple core scaling for
Postgres, especially beyond 8 core (like Scott's Magny Coeurs
example). My doubt arises from whether
Also there's the strange and mysterious valley group-think syndrome.
I've seen this with several products/technologies over the years.
I suspect it comes from the VCs, but I'm not sure. The latest example
is you should be using EC2. There always follows a discussion where
I can present 50
On 11/9/2010 11:36 AM, Sandeep Srinivasa wrote:
If it is independent of the OS, then how does one go about tuning it.
Consider this - I get a 12 core server on which I want multiple
webserver instances + DB. Can one create CPU pools (say core 1,2,3 for
webservers, 4,5,6,7 for DB, etc.) ?
I
On 11/9/2010 5:05 PM, Scott Marlowe wrote:
Note that you're likely to get FAR more out of processor affinity with
multiple NICs assigned each to its own core / set of cores that share
L3 cache and such. Having the NICs and maybe RAID controllers and /
or fibre channel cards etc on their own
On 11/13/2010 3:31 PM, LazyTrek wrote:
Do the long standing members not have problems with spam?
As you can see I use a list alias. However, in my experience the notion
that you can avoid spam by not frequenting mailing lists is quaint to
say the least. The spammers have had ways to find,
One thing to remember in this discussion about SSD longevity is that the
underlying value of interest is the total number of erase cycles, per
block, on the flash devices. Vendors quote lifetime as a number of
bytes, but this is calculated using an assumed write amplification
factor. That
On 4/28/2011 8:20 AM, Scott Ribe wrote:
I don't think you can simply say that I am writing so many Gb WAL files,
therefore according to the vendor's spec
Also, I fully expect the vendors lie about erase cycles as baldly as they lie
about MTBF, so I would divide by a very healthy skepticism
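To make the write-amplification point concrete, here's a back-of-envelope
sketch (all figures are made-up placeholders, not vendor data); note that
halving the assumed WAF doubles the quoted endurance:

```shell
# Back-of-envelope: rated lifetime writes derived from erase cycles
# and an assumed write amplification factor. Numbers are illustrative.
CAPACITY_GB=200          # drive capacity
ERASE_CYCLES=10000       # per-block P/E cycle rating
WAF=2                    # vendor-assumed write amplification factor

# Host-visible write endurance in TB: capacity * cycles / WAF / 1000
ENDURANCE_TB=$(( CAPACITY_GB * ERASE_CYCLES / WAF / 1000 ))
echo "${ENDURANCE_TB} TB"   # prints "1000 TB"
```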
On 5/4/2011 11:15 AM, Scott Ribe wrote:
Sigh... Step 2: paste link in ;-)
http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html
To be honest, like the article author, I'd be happy with 300+ days to
failure, IF the drives provide an accurate predictor of
No problem with that, for a first step. ***BUT*** the failures in this
article and many others I've read about are not in high-write db
workloads, so they're not write wear, they're just crappy electronics
failing.
As a (lapsed) electronics design engineer, I'm suspicious of the notion that
On 5/4/2011 6:02 PM, Greg Smith wrote:
On 05/04/2011 03:24 PM, David Boreham wrote:
So if someone says that SSDs have failed, I'll assume that they
suffered from Flash cell
wear-out unless there is compelling proof to the contrary.
I've been involved in four recovery situations similar
On 5/4/2011 9:06 PM, Scott Marlowe wrote:
Most of it is. But certain parts are fairly new, i.e. the
controllers. It is quite possible that all these various failing
drives share some long term ~ 1 year degradation issue like the 6Gb/s
SAS ports on the early sandybridge Intel CPUs. If that's
On 5/5/2011 2:36 AM, Florian Weimer wrote:
I'm a bit concerned with usage-dependent failures. Presumably, two SSDs
in a RAID-1 configuration wear down in the same way, and it would
be rather inconvenient if they failed at the same point. With hard
disks, this doesn't seem to happen;
On 5/4/2011 11:50 PM, Toby Corkindale wrote:
In what way has the SMART read failed?
(I get the relevant values out successfully myself, and have Munin
graph them.)
Mis-parse :) It was my _attempts_ to read SMART that failed.
Specifically, I was able to read a table of numbers from the drive,
On 5/5/2011 8:04 AM, Scott Ribe wrote:
Actually, any of us who really tried could probably come up with a dozen
examples--more if we've been around for a while. Original design cutting
corners on power regulation; final manufacturers cutting corners on specs;
component manufacturers cutting
On 7/4/2010 8:10 AM, Andre Lopes wrote:
I need to use forum software. Is there any open source forum script
using PostgreSQL?
We use jForum.
On 8/7/2010 4:24 AM, சிவகுமார் மா wrote:
4. A pet name
Is it possible to have a pet name which can be used in casual
conversation easily?
PG
About the shared buffers size configuration discussion:
Like a few others here, I've spent a sizable proportion of my career
dealing with this issue (not with PG, with other products I've developed
that had a similar in-memory page pool).
There are roughly six stages in understanding this
As far as I can see there is no pre-built pg_filedump binary for the
PGDG yum repository (8.3.11 for RHEL5). Before I embark on building it
from source I figured I'd ask here if I'm correct that there is no
binary hidden somewhere in the packages.
Thanks.
On 9/27/2010 6:51 AM, Devrim GÜNDÜZ wrote:
They are ready:
http://yum.pgrpms.org/8.3/redhat/rhel-5-x86_64/repoview/pg_filedump.html
http://yum.pgrpms.org/8.3/redhat/rhel-5.0-i386/repoview/pg_filedump.html
Thanks !
Is the zero_damaged_pages feature expected to work in 8.3.11 ?
I have a fair bit of evidence that it doesn't (you get nice messages
saying that the page is being zeroed, but the on-disk data does not
change).
I also see quite a few folk reporting similar findings in various form
and
On 9/27/2010 4:40 PM, Jeff Davis wrote:
It does zero the page in the buffer, but I don't think it marks it as
dirty. So, it never really makes it to disk as all-zeros.
Ah ha ! This is certainly consistent with the observed behavior.
zero_damaged_pages is not meant as a recovery tool. It's
On 9/27/2010 4:53 PM, Tom Lane wrote:
The reason it tells you that data will be destroyed is that that could
very well happen. If the system decides to put new data into what will
appear to it to be an empty page, then the damaged data on disk will be
overwritten, and then there's no hope of
On 9/27/2010 4:53 PM, Tom Lane wrote:
The reason it tells you that data will be destroyed is that that could
very well happen.
Re-parsing this, I think there was a mis-communication :
I'm not at all suggesting that the doc should _not_ say that data will
be corrupted.
I'm suggesting that in
On 10/11/2010 5:46 PM, Carlos Mennens wrote:
Just wondering how you guys feel about NoSQL and I just wanted to
share the following article...
http://www.linuxjournal.com/article/10770
Looking to read your feedback and / or opinions.
http://www.xtranormal.com/watch/6995033/
(warning: may not
On 11/7/2012 3:17 PM, Vick Khera wrote:
My most recent big box(es) are built using all Intel 3xx series
drives. Like you said, the 7xx series was way too expensive.
I have to raise my hand to say that for us 710 series drives are an
unbelievable bargain and we buy nothing else now for
On 11/8/2012 2:05 PM, Rodrigo Pereira da Silva wrote:
Hi Guys,
We are having a problem with our pgsql 9.1 on
Linux (Debian).
Suddenly, the database stopped working and the logs
show the statements below just before the problem. Any thoughts?
Just a word of
On 11/12/2012 1:52 PM, Gunnar Nick Bluth wrote:
Am 12.11.2012 11:03, schrieb Ivan Voras:
Is anyone running PostgreSQL on a clustered file system on Linux? By
clustered I actually mean shared, such that the same storage is
mounted by different servers at the same time (of course, only one
On 12/10/2012 1:26 PM, Mihai Popa wrote:
Second, where should I deploy it? The cloud or a dedicated box?
Amazon seems like the sensible choice; you can scale it up and down as
needed and backup is handled automatically.
I was thinking of an x-large RDS instance with 1 IOPS and 1 TB of
On 12/11/2012 8:28 AM, Mihai Popa wrote:
I guess Chris was right, I have to better understand the usage pattern
and do some testing of my own.
I was just hoping my hunch about Amazon being the better alternative
would be confirmed, but this does not
seem to be the case; most of you recommend
On 12/11/2012 2:03 PM, Mihai Popa wrote:
I actually looked at Linode, but Amazon looked more competitive...
Checking Linode's web site just now it looks like they have removed
physical machines as an option.
Try SoftLayer instead for physical machines delivered on-demand :
I'm not sure the last time I saw this discussion, but I was somewhat
curious: what would be your ideal Linux distribution for a nice solid
PostgreSQL installation? We've kinda bounced back and forth between
RHEL, CentOS, and Ubuntu LTS, so I was wondering what everyone else
thought.
We run
First I need to say that I'm asking this question on behalf of a
friend, who asked me what I thought on the subject -- I host all the
databases important to me and my livelihood, on physical machines I own
outright. That said, I'm curious as to the current thinking on a)
whether it is wise,
Thanks very much for your detailed response. A few answers below inline:
On 4/7/2013 9:38 AM, Tomas Vondra wrote:
As for the performance, AFAIK the EBS volumes always had, and probably
will have, a 32 MB/s limit. Thanks to caching, built into the EBS, the
performance may seem much better
On 4/8/2013 3:15 AM, Vincent Veyron wrote:
Could someone explain to me the point of using an AWS instance in the
case of the OP, whose site is apparently very busy, versus renting a
bare metal server in a datacenter?
I am the OP, but I can't provide a complete answer, since personally
(e.g.
On 3/12/2014 9:34 AM, Adrian Klaver wrote:
Columbia the country or the District?
There's also the river...
On 3/12/2014 9:49 AM, alexandros_e wrote:
It seems like spam to me. Where is the guy's name or credentials? If I
were requesting something, I would sign with my name, government email
and telephone.
Or it is a 20-day-early April 1 email.
While I have two friends who work at FusionIO, and have great confidence
in their products, we like to deploy more conventional SATA SSDs at
present in our servers. We have been running various versions of Intel's
enterprise and data center SSDs in production for several years now and
On 4/3/2014 2:00 PM, John R Pierce wrote:
an important thing in getting decent wear leveling life with SSDs is
to keep them under about 70% full.
This depends on the drive : drives with higher specified write endurance
already have significant overprovisioning, before the user sees the
It would be useful to know more details -- how much storage space you
need for example.
fwiw I considered all of these issues when we first deployed SSDs and
decided to not use RAID controllers.
There have not been any reasons to re-think that decision since.
However, it depends on your
On 4/4/2014 3:57 PM, Steve Crawford wrote:
Judicious archiving allows us to keep our total OS+data storage
requirements under 100GB. Usually. So we should be able to easily stay
in the $500/drive price range (200GB S3700) and still have plenty of
headroom for wear-leveling.
One option I'm
On 4/4/2014 5:29 PM, Lists wrote:
So, spend the money and get the enterprise class SSDs. They have come
down considerably in price over the last year or so. Although on paper
the Intel Enterprise SSDs tend to trail the performance numbers of the
leading consumer drives, they have wear
Hi Dmitriy, are you able to say a little about what's driving your quest
for async http-to-pg ?
I'm curious as to the motivations, and whether they match up with some
of my own reasons for wanting to use low-thread-count solutions.
Thanks.
On 10/31/2014 7:31 AM, Merlin Moncure wrote:
Can anyone share any experiences with running PostgreSQL on a
tablet ? (Surface Pro 3, ASUS Transformer)
The SSD in a modern tablet is considerably faster than a hard drive in a
high-end server from the era when PG was originally developed so