I've found that pgtune doesn't quite provide the benefit that I would
like. It still suggests large work_mem and maintenance_work_mem numbers,
even though, up until now, Postgres has had an issue with large numbers
of tuples, so it seems that smaller settings are better, in the 64MB
range. (based on feedb
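For context, the kind of conservative values I mean, as a postgresql.conf
sketch (numbers are illustrative, not a tuned recommendation):

# postgresql.conf -- sketch of the smaller settings discussed above
work_mem = 64MB
maintenance_work_mem = 64MB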
On Tue, Jun 14, 2016 at 2:03 PM, Jim Nasby wrote:
> On 4/19/16 11:01 PM, Tory M Blue wrote:
>>
>> Slon is also starting to not be viable as it takes some indexes over 7
>> hours to complete. So this u
We are starting some testing in AWS, with EC2, EBS backed setups.
What I found interesting today, was a single EBS 1TB volume, gave me
something like 108MB/s throughput, however a RAID10 (4 250GB EBS
volumes), gave me something like 31MB/s (test after test after test).
I'm wondering what you folk
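For reference, the kind of crude sequential-throughput test I'm describing,
as a sketch (the mount path and file size are examples, not my actual setup):

# write test, bypassing the page cache
dd if=/dev/zero of=/mnt/ebs/testfile bs=1M count=10240 oflag=direct
# read it back
dd if=/mnt/ebs/testfile of=/dev/null bs=1M iflag=direct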
Inline, Jim
On Sun, Apr 3, 2016 at 10:13 AM, Jim Nasby wrote:
> On 3/24/16 12:43 PM, Tory M Blue wrote:
>>
>> Slon is also starting to not be viable as it takes some indexes over 7
>> hours to complete. So this upgrade path seemed to really be nice.
>
>
> If you
Thanks to all that responded
I successfully upgraded 800GB DB with pg_upgrade in about 2 hours.
This would have taken 2 days to dump/restore.
Slon is also starting to not be viable as it takes some indexes over 7
hours to complete. So this upgrade path seemed to really be nice.
Not sure how I ca
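For anyone curious: a 2-hour upgrade of an 800GB cluster suggests link mode,
which hard-links data files instead of copying them. A minimal sketch, with
example paths rather than my actual layout:

pg_upgrade --link \
  --old-datadir /data/pg92 --new-datadir /data/pg94 \
  --old-bindir /usr/pgsql-9.2/bin --new-bindir /usr/pgsql-9.4/bin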
No hits on the intratubes on this.
Any idea? We are doing some massive deletes, so I was curious as to what
would cause this error. The DB is up and not overburdened, just some big
deletes and slon replication processes.
CentOS 6.6
Postgres 9.4.5
First time I've ever seen this alert/error, just curio
Howdy
Postgres 9.2 going to 9.4
CentOS 6.5
So in most of my environments I use slony, and thus use slony replication
for my upgrades (drop/add nodes, etc.).
But I've got a pretty big DB, just shy of a TB, that is on a single node. A
dump/restore would take over 48 hours because of index creations et
On Thu, Dec 10, 2015 at 12:00 PM, Joshua D. Drake
wrote:
> On 12/10/2015 10:35 AM, Tory M Blue wrote:
>
>
>> Is this valid regardless of the workload?
>>
>
> Yes.
>
>
>> Seems that I would be storing a
>> ton of data and writing it once an hour, so would
On Thu, Dec 10, 2015 at 9:20 AM, Joshua D. Drake
wrote:
> On 12/10/2015 01:12 AM, Tory M Blue wrote:
>
>> checkpoint_timeout = 5min
>> checkpoint_completion_target = 0.9
>
> The above is your problem. Make checkpoint_timeout = 1h. Also,
> considering
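As a postgresql.conf sketch of the suggested change (the timeout is the
value suggested above; the completion target is unchanged):

checkpoint_timeout = 1h
checkpoint_completion_target = 0.9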
9.3.4, CentOS, 256GB system
 total_checkpoints | minutes_between_checkpoints
-------------------+-----------------------------
            109943 |          0.0274886580556895
I've just bumped them to 150.
# - Checkpoints -
checkpoint_segments = 150
checkpoint_timeout = 5min
checkpoint_comp
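For reference, the checkpoint-spacing numbers above are the sort of thing
you get from pg_stat_bgwriter; a sketch of the usual query (this may not be
the exact one used here):

SELECT total_checkpoints,
       seconds_since_start / total_checkpoints / 60
           AS minutes_between_checkpoints
FROM (SELECT EXTRACT(EPOCH FROM (now() - pg_postmaster_start_time()))
                 AS seconds_since_start,
             checkpoints_timed + checkpoints_req AS total_checkpoints
        FROM pg_stat_bgwriter) AS sub;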
Just trying to figure something out.
9.3.4, CentOS 6.5
256GB RAM
maintenance_work_mem = 125GB
effective_cache_size = 65GB
I have two index builds running, started at the same time; they are not
small, and one will take 7 hours to complete.
I see almost zero disk access, very minor, not what I want to see whe
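For experimenting, maintenance_work_mem can also be set per session rather
than cluster-wide; a sketch (the value, index, and table names are examples):

-- cap the per-build sort memory for this session only
SET maintenance_work_mem = '1GB';
CREATE INDEX CONCURRENTLY idx_bigtable_col ON bigtable (col);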
On Wed, Aug 26, 2015 at 2:45 PM, Qingqing Zhou
wrote:
> On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue wrote:
> >
> > Right now the 100% cpu process which is this index is only using 3.5GB
> > and has been for the last 15 hours
> >
>
> If 100% cpu, you can do
On Wed, Aug 26, 2015 at 12:36 PM, Igor Neyman
wrote:
>
>
>
>
> From: Tory M Blue [mailto:tmb...@gmail.com]
> Sent: Wednesday, August 26, 2015 3:26 PM
> To: Igor Neyman
> Cc: pgsql-performance
> Subject: Re: [PERFORM] Index creation running now for 14 hours
On Wed, Aug 26, 2015 at 12:18 PM, Igor Neyman
wrote:
>
>
>
>
> From: pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] On Behalf Of Tory M Blue
> Sent: Wednesday, August 26, 2015 3:14 PM
> To: pgsql-performance
>
I'm running 9.3.4 with slon 2.2.3. I did a drop/add last night at 9pm; it
started this particular table's index creation at 10:16pm and it's still
running. A single core is at 100% (32-core box) and there is almost zero
I/O activity.
CentOS 6.6
16398 | clsdb | 25765 | 10 | postgres | slon.
Greetings and salutations.
I've got some weirdness.
Current:
Postgres 9.3.4
Slony 2.2.3
CentOS 6.5
Previously running Postgres 9.1.2 w/slony 2.1.3 on CentOS 6.2
I found that if I try to run a vacuum full on 1 table that I recently
reindexed (out of possibly 8 tables), I get this error:
# vacuum
log_temp_files (integer)
Controls logging of temporary file names and sizes. Temporary files can be
created for sorts, hashes, and temporary query results. A log entry is made
for each temporary file when it is deleted. A value of zero logs all
temporary file information
So I've set this to:
log_
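For reference, the "log everything" setting described above looks like this
in postgresql.conf (my actual value is cut off above):

log_temp_files = 0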
Thanks Tomas,
>
> On 4.11.2014 21:07, Tory M Blue wrote:
> > Well after fighting this all day and dealing with a really sluggish db
> > where even my slon processes were taking several seconds, I reduced my
> > shared_buffers back to 2GB from 10GB and my work_mem
On Tue, Nov 4, 2014 at 9:01 AM, Tory M Blue wrote:
> Not sure what is going on, but other than upgrading to 9.3.4 from 9.2.4,
> I'm seeing major slowness in basic queries and seeing a ton of bind and
> parse in my logs. These are standard lookups and should take microseconds.
Not sure what is going on, but other than upgrading to 9.3.4 from 9.2.4, I'm
seeing major slowness in basic queries and seeing a ton of bind and
parse in my logs. These are standard lookups and should take microseconds.
I'm logging all queries that take over a second and this seems to be
gettin
Greetings all,
I'm trying to wrap my head around updating my configuration files, which
have probably been fairly static since before 8.4.
I've got some beefy hardware but have some tables that are over 57GB raw
and end up at 140GB size after indexes are applied. One index creation took
7 hours t
Hi
I am going to add a new column to a table for modify_date that needs to be
updated every time the table is updated. Is it better to just update the
application code to set modify_date to the current time, or to create a
BEFORE UPDATE trigger on the table that will update the modify_date column
to curre
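For what it's worth, the trigger approach is a few lines of PL/pgSQL; a
sketch, where the function, trigger, and table names are made-up examples:

-- keep modify_date current on every UPDATE (names are examples)
CREATE OR REPLACE FUNCTION touch_modify_date() RETURNS trigger AS $$
BEGIN
    NEW.modify_date := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_modify_date
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_modify_date();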
So I recently had some corruption that forced a rebuild. At that time I
upgraded to my current release of 9.2.4 (stable in production for a time,
and I know it's not the latest and greatest).
So I reloaded data and started loading more; things were good, several
reboots, etc., no issues.
This morning the sys
It appears one of my bigger, but older, DBs cored or otherwise crashed this
morning, and when it came back up the DB shows that it can't start and is
possibly corrupted. I've read this was actually due to a kernel bug some
time back (or at least tied to the kernel bug).
I'm wondering if there was any other work ar
On Fri, Feb 14, 2014 at 5:55 PM, Michael Paquier
wrote:
> On Sat, Feb 15, 2014 at 6:06 AM, Tory M Blue wrote:
> >
> >
> > I'm working with slon and the index portion for at least 3 of my tables
> > takes hours to complete and thus with this instance of
I'm working with slon, and the index portion for at least 3 of my tables
takes hours to complete; thus, with this instance of slony being a wide
area replica, sessions time out and slon fails to complete.
So I'm looking at dumping the schema without index information, installing
that on the slon slav
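For reference, on 9.2+ pg_dump can split the schema so the indexes can be
applied after the data is in place; a sketch (the database name is an example):

pg_dump --schema-only --section=pre-data  mydb > tables.sql
pg_dump --schema-only --section=post-data mydb > indexes.sql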
On Wed, Apr 10, 2013 at 2:42 PM, Shaun Thomas wrote:
> On 04/10/2013 04:25 PM, Tory M Blue wrote:
>
>> Is there an easy/clean way to adapt my old config file to the new stuff?
>> I'm not sure what all has changed, so wondering if I just have to go
>> line by line and
Afternoon
So I just realized I've been reimporting my Postgres configs from one
version to the next, since they were initially customized for my setup.
Maybe from 7.x... and now on 9.2.4.
Is there an easy/clean way to adapt my old config file to the new stuff?
I'm not sure what all has chang
On Mon, Apr 1, 2013 at 5:55 PM, Bruce Momjian wrote:
> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
> > > Due to the security nature of the release, the source and binaries will
> > > only be publicly available on April 4 --- there are no pre-release
> > > versions availa
So it was announced that there would be a security patch for all versions,
released on the 4th. I see it's been announced/released on the website, but
the versions available show Feb dates.
Should the source be current? Or does it take a while for source and other
to be made available?
Figured if
On Thu, Feb 14, 2013 at 3:08 AM, Heikki Linnakangas wrote:
> On 14.02.2013 12:49, Tory M Blue wrote:
>
>> My postgres db ran out of space. I have 27028 files in the pg_xlog
>> directory. I'm unclear what happened; this has been running flawlessly for
>> years. I do hav
On Thu, Feb 14, 2013 at 3:01 AM, Ian Lawrence Barwick wrote:
> 2013/2/14 Tory M Blue
>
>> My postgres db ran out of space. I have 27028 files in the pg_xlog
>> directory. I'm unclear what happened; this has been running flawlessly for
>> years. I do have archiving
My postgres db ran out of space. I have 27028 files in the pg_xlog
directory. I'm unclear what happened; this has been running flawlessly for
years. I do have archiving turned on and run an archive command every 10
minutes.
I'm not sure how to go about cleaning this up. I got the DB back up, but
I've
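For anyone in the same spot: if the archive_command is failing, completed
segments pile up as .ready files and pg_xlog grows without bound. A quick
check (the data directory path is an example):

ls /var/lib/pgsql/data/pg_xlog/archive_status/ | grep -c '\.ready$'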
Well, I thought it was maybe just an issue going from 8.4.4 to 9.1.1, so I
upgraded to 8.4.9 and tried pg_upgrade again (this is 64-bit), and it's failing:
-bash-4.0$ /tmp/pg_upgrade --check --old-datadir "/data/db"
--new-datadir "/data1/db" --old-bindir "/ipix/pgsql/bin" --new-bindir
"/ipix/pgsql9/bin"
Performing
From my last report I had success, but it was successful only due to lots of
manual steps. I figured it may be safer to just create a new rpm,
installing to pgsql9-specific directories and a new data directory.
This allows pg_upgrade to complete successfully (so it says). However,
my new data directory
On Mon, Dec 5, 2011 at 11:08 AM, Bruce Momjian wrote:
>
> If you look in a 9.0+ tablespace directory, you will see that each
> cluster has its own subdirectory:
>
> test=> create tablespace tb1 location '/u/pg/tb1';
> CREATE TABLESPACE
> test=> \q
> $ lf /u/pg/tb1
>
On Mon, Dec 5, 2011 at 10:31 AM, Tory M Blue wrote:
> On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian wrote:
>>> But my initial response to all this is: umm, we have not really made a
>>> dump/restore unnecessary with the latest releases of Postgres then, as
>>> I would
On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian wrote:
>> But my initial response to all this is: umm, we have not really made a
>> dump/restore unnecessary with the latest releases of Postgres then, as
>> I would have to think that there is a high percentage of users who
>> use tablespaces.
>
> Yes,
On Mon, Dec 5, 2011 at 7:34 AM, Bruce Momjian wrote:
> Nicholson, Brad (Toronto, ON, CA) wrote:
>>
>> Based on the OP this does not seem like a messed up configuration. It
>> sounds like the OP used a fully supported core feature of Postgres
>> (tablespaces) and pg_upgrade failed as a result. I
On Sat, Dec 3, 2011 at 6:04 AM, Bruce Momjian wrote:
> Well, I am not totally clear how you are moving things around, but I do
> know pg_upgrade isn't happy to have the old and new cluster be very
> different.
>
> What I think is happening is that you didn't properly move the
> tablespace in the
So we are making progress on our performance issues; we are splitting
the data, changing the index values, etc. So far we're having some success,
but we also want to test out some of the options and changes in the 9
branch, but trying to dump and restore 750GB of data is not all that
fun, so I'm trying to
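For reference, the dump/restore route can at least be parallelized with the
custom format; a sketch (database and file names are examples):

pg_dump -Fc -f mydb.dump mydb
pg_restore -j 8 -d mydb mydb.dump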
On Wed, Nov 16, 2011 at 9:19 PM, Josh Berkus wrote:
> Tory,
>
> A seq scan across 83GB in 4 minutes is pretty good. That's over
> 300MB/s. Even if you assume that 1/3 of the table was already cached,
> that's still over 240MB/s. Good disk array.
>
> Either you need an index, or you need to not
On Wed, Nov 16, 2011 at 7:47 PM, Tomas Vondra wrote:
> On 17 November 2011, 4:16, Tory M Blue wrote:
>> On Wed, Nov 16, 2011 at 7:02 PM, Scott Marlowe
>> wrote:
>>> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue wrote:
>>>> On Wed, Nov 16, 2011 at 6:27
On Wed, Nov 16, 2011 at 7:02 PM, Scott Marlowe wrote:
> On Wed, Nov 16, 2011 at 7:42 PM, Tory M Blue wrote:
>> On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra wrote:
>>> On 17 November 2011, 2:57, Scott Marlowe wrote:
>>>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas
On Wed, Nov 16, 2011 at 6:27 PM, Tomas Vondra wrote:
> On 17 November 2011, 2:57, Scott Marlowe wrote:
>> On Wed, Nov 16, 2011 at 4:59 PM, Tomas Vondra wrote:
>>
>>> But you're right - you're not bound by I/O (although I don't know what
>>> those 15% are - iowait, util or what?). The COUNT(DI
Thanks all, I misspoke on our use of the index.
We do have an index on log_date and it is being used; here is the
explain analyze plan.
'Aggregate (cost=7266186.16..7266186.17 rows=1 width=8) (actual
time=127575.030..127575.030 rows=1 loops=1)'
' -> Bitmap Heap Scan on userstats (cost=13518
Linux F12 64bit
Postgres 8.4.4
16 proc / 32GB
8-disk 15K RPM SAS, RAID 5 (I know!)
shared_buffers = 6000MB
#temp_buffers = 8MB
max_prepared_transactions = 0
work_mem = 250MB
maintenance_work_mem = 1000MB
Working on some optimization, as well as finally getting off my
backside and moving us to 64-bit (32GB+ memory).
I was reading that at some point, it appears, on FreeBSD the Postgres
block size was upped to 16KB from 8KB. And on my Fedora systems I
believe the default build is 8KB.
When we were using
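For reference, the block size is a compile-time option; a sketch (note that
a cluster built this way needs a fresh initdb, since on-disk files are not
compatible across block sizes):

./configure --with-blocksize=16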
AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz
I'm wondering if there is a performance difference running Postgres on
Fedora on AMD vs Intel (the 2 listed above).
I have an 8-way Intel Xeon box and a 12-way AMD box and was thinking
about migrating to the new AMD b
On Sat, Apr 23, 2011 at 12:24 PM, Robert Haas wrote:
> One thing to watch is the size of the filesystem cache. Generally as the
> system comes under memory pressure you will see the cache shrink. Not sure
> what is happening on your system, but typically when it gets down to some
> minimal siz
On Fri, Apr 22, 2011 at 9:45 AM, Cédric Villemain
wrote:
>> CommitLimit: 4128760 kB
>> Committed_AS: 2380408 kB
>
> Are you sure it is a PAE kernel? You look limited to 4GB.
Figured out that the CommitLimit is actually the size of swap, so on one
server it's 4GB and on the other it's 5GB.
So s
On Fri, Apr 22, 2011 at 11:15 AM, David Rees wrote:
> On Thu, Apr 21, 2011 at 1:28 AM, Tory M Blue wrote:
>> this is a Fedora 12 system, 2.6.32.23-170. I've been reading and
>> appears this is yet another fedora bug, but so far I have not found
>> any concrete evidenc
On Fri, Apr 22, 2011 at 9:46 AM, Cédric Villemain
wrote:
> 2011/4/22 Cédric Villemain :
>> Are you sure it is a PAE kernel? You look limited to 4GB.
>>
>> I don't know atm if overcommit_ratio=0 has a special meaning, else I
>> would suggest to update it to something like 40% (the default), but
On Fri, Apr 22, 2011 at 9:34 AM, Kevin Grittner
wrote:
> Tory M Blue wrote:
>
>> I appreciate the totally no postgres responses with this.
>
> I didn't understand that. What do you mean?
>
> -Kevin
I meant that when starting to talk about kernel commit limits/
On Fri, Apr 22, 2011 at 4:03 AM, Cédric Villemain
wrote:
> 2011/4/21 Tory M Blue :
>> On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure wrote:
>>> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue wrote:
>>
>>>> Fedora 12
>>>> 32gig memory, 8 proc
>
On Thu, Apr 21, 2011 at 1:04 PM, Scott Marlowe wrote:
> On Thu, Apr 21, 2011 at 11:15 AM, Tory M Blue wrote:
>
>> While I don't mind the occasional slap of reality, this configuration
>> has run for 4+ years. It's possible that as many other components each
>>
On Thu, Apr 21, 2011 at 8:57 AM, Claudio Freire wrote:
> On Thu, Apr 21, 2011 at 5:50 PM, Tory M Blue wrote:
>> # - Checkpoints -
>> checkpoint_segments = 100
>> max_connections = 300
>> shared_buffers = 2500MB # min 128kB or max_connections*16kB
>
On Thu, Apr 21, 2011 at 5:53 AM, Claudio Freire wrote:
> On Thu, Apr 21, 2011 at 2:48 PM, Stephen Frost wrote:
>>
>> There's probably something else that's trying to grab all the memory and
>> then tries to use it and PG ends up getting nailed because the kernel
>> over-attributes memory to it.
On Thu, Apr 21, 2011 at 7:27 AM, Merlin Moncure wrote:
> On Thu, Apr 21, 2011 at 3:28 AM, Tory M Blue wrote:
>> Fedora 12
>> 32gig memory, 8 proc
>> postgres 8.4.4, slony 1.20
>> 5 gigs of swap (never hit it!)
>
> curious: using 32/64 bit postgres? what
Is there anyone who could help me understand why, all of a sudden, with
no noticeable change in data, no change in hardware, no change in OS,
I'm seeing postmaster getting killed by oom_killer?
The dmesg output shows that swap has not been touched (free and total are
the same), so this system is not running
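For anyone else chasing oom_killer: the usual mitigation on a dedicated
Postgres box is to disable memory overcommit; a sketch (the ratio value is
an assumption, not something settled in this thread):

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
# then: sysctl -p to apply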
On Tue, Sep 28, 2010 at 3:14 PM, Guy Rouillier wrote:
> On 9/28/2010 4:45 PM, Greg Smith wrote:
>>
>> Tory M Blue wrote:
>>>
>>> I'm doing an OS upgrade and have been sitting on 8.4.3 for some time. I
>>> was wondering if it's better for the shor
I'm doing an OS upgrade and have been sitting on 8.4.3 for some time. I
was wondering if it's better for the short term just to bring things
to 8.4.4 and let 9.0 bake a bit longer, or are people with large data
sets running 9.0 in production already?
Just looking for 9.0 feedback (understand it's s
On Fri, Feb 26, 2010 at 11:49 AM, Jorge Montero
wrote:
>
>
>>>> Tory M Blue 02/26/10 12:52 PM >>>
>>>
>>> This is too much. Since you have 300 connections, you will probably swap
>>> because of this setting, since each connection may use t
2010/2/25 Devrim GÜNDÜZ :
> On Thu, 2010-02-25 at 23:01 -0800, Tory M Blue wrote:
>
>> Checkpoint_timeout is the default and that looks like 5 mins (300
>> seconds). And is obviously why I have such a discrepancy between time
>> reached and requested.
>
> If you hav
2010/2/25 Devrim GÜNDÜZ :
> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:
>> shared_buffers = 1500MB
>
> Some people tend to increase this to 2.2GB (32-bit) or 4-6GB (64-bit),
> if needed. Please note that more shared_buffers will lead to more
> pressure on bgwriter,
On Fri, Feb 26, 2010 at 5:09 AM, Kevin Grittner
wrote:
> Tory M Blue wrote:
>
>> 2010-02-25 22:53:13 PST LOG: checkpoint starting: time
>> 2010-02-25 22:53:17 PST postgres postgres [local] LOG: unexpected
>> EOF on client connection
>> 2010-02-25 22:55:43 PST L
2010/2/25 Tory M Blue :
> 2010/2/25 Devrim GÜNDÜZ :
>> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:
>>> shared_buffers = 1500MB
>>
>> Some people tend to increase this to 2.2GB (32-bit) or 4-6GB (64-bit),
>> if needed. Please note that more shared_buff
2010/2/25 Devrim GÜNDÜZ :
> On Thu, 2010-02-25 at 22:12 -0800, Tory M Blue wrote:
>> shared_buffers = 1500MB
>
> Some people tend to increase this to 2.2GB (32-bit) or 4-6GB (64-bit),
> if needed. Please note that more shared_buffers will lead to more
> pressure on bgwriter,
On Thu, Feb 25, 2010 at 10:12 PM, Tory M Blue wrote:
> Okay ladies and gents and the rest of you :)
>
> It's time I dig into another issue, and that's a curious 5 second
> delay on connect, on occasion. Now, I believe the network to be sound
> and there are zero errors o
Okay ladies and gents and the rest of you :)
It's time I dig into another issue, and that's a curious 5 second
delay on connect, on occasion. Now, I believe the network to be sound
and there are zero errors on any servers, no retrans, no runts, nada
nada nada. However I will continue to run tests
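A crude way to reproduce and time the delay from a client box, for anyone
following along (host, user, and database names are examples):

time psql -h dbhost -U appuser -c 'SELECT 1' mydb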
On Thu, Feb 4, 2010 at 10:43 AM, Craig James wrote:
> Tory M Blue wrote:
>>
>> I have a column that is a bigint that needs to store integers up to 19
>> digits long. For the most part this works but we sometimes have
>> numbers that are greater than 9223372036854775807.
Greetings,
I have a column that is a bigint that needs to store integers up to 19
digits long. For the most part this works but we sometimes have
numbers that are greater than 9223372036854775807.
I was thinking of changing this to a real or double precision field,
but read in the docs that the v
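For what it's worth, numeric is the usual answer here, since real/double
precision are approximate and would silently lose low-order digits; a
sketch (table and column names are examples):

ALTER TABLE mytable ALTER COLUMN big_id TYPE numeric(20,0);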
On Mon, Jan 25, 2010 at 3:59 AM, Matthew Wakeling wrote:
> On Mon, 25 Jan 2010, Richard Huxton wrote:
>>
>> OK - so the first query processes 19,799 rows in 31,219 ms (about 1.5ms
>> per row)
>>
>> The second processes 2,606 rows in 3,813 ms (about 1.3ms per row).
>
> Agreed. One query is faster t
On Fri, Jan 22, 2010 at 11:06 AM, Tory M Blue wrote:
> On Fri, Jan 22, 2010 at 10:26 AM, Matthew Wakeling
> wrote:
>>
>> On Fri, 22 Jan 2010, Tory M Blue wrote:
>>>
>>> But the same SQL that returns maybe 500 rows is pretty fast, it's the
>>> return of 10K+ rows that seems to stall and is CPU bound.
On Fri, Jan 22, 2010 at 10:26 AM, Matthew Wakeling wrote:
>
> On Fri, 22 Jan 2010, Tory M Blue wrote:
>>
>> But the same SQL that returns maybe 500 rows is pretty fast, it's the return
>> of 10K+ rows that seems to stall and is CPU bound.
>
> Okay, so you ha
On Fri, Jan 22, 2010 at 1:42 AM, Richard Huxton wrote:
> On 21/01/10 22:15, Tory M Blue wrote:
>
>> · Data distribution = In the 98mill records, there are 7000 unique
>>
>> makeid's, and 21mill unique UID's. About 41mill of the records have
>> tag
On Thu, Jan 21, 2010 at 7:46 PM, Craig Ringer
wrote:
>
> > Any assistance would be appreciated; don't worry about slapping me
> > around, I need to figure this out. Otherwise I'm buying new hardware
> > where it may not be required.
>
> What is the reporting query that takes 26 hours? You didn't se
The issue we are seeing, besides the reports taking over 26 hours,
is that the query seems to be CPU bound. Meaning that the query consumes an
entire CPU and quite often is sitting at 65%-90% WAIT. Now this is not
iowait; the disks are fine, 5000-6000 tps, 700K reads etc with maybe 10
> Torsten Zühlsdorff wrote:
>
> >
> > I've upgraded all my databases to 8.4. The pain was not so big, the
> > new -j Parameter from pg_restore is fantastic. I really like the new
> > functions around Pl/PGSQL. All is stable and fast.
> >
> > Greetings from Germany,
> > Torsten
>
On Sat, Oct 17, 2
Any issues, has it baked long enough, is it time for us 8.3 folks to deal
with the pain and upgrade?
Any more updates regarding 8.4 and slon 1.2 as well, since I usually
build/upgrade both at the same time?
Thanks
Tory
More updates
If I run the concurrent reindex locally (psql session) it works fine, but
when run via a connection through PHP I get the error.
It can't be slon, since I can do this locally, but why would Postgres have an
issue with a remote connection?
the basic script:
$connectString = "host=serve
Hey all, it's been a bit, however I'm running into some issues with my
concurrent index builds.
I always get this error during a concurrent index build:
2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(46706) ERROR:
deadlock detected
2009-10-07 22:18:02 PDT admissionclsdb postgres 10.13.200.70(
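For context, a typical concurrent rebuild looks like the following sketch
(index and table names are examples). Worth noting: CREATE INDEX
CONCURRENTLY cannot run inside a transaction block, which matters if the
remote client wraps statements in one.

CREATE INDEX CONCURRENTLY idx_foo_new ON foo (bar);
DROP INDEX idx_foo_old;
ALTER INDEX idx_foo_new RENAME TO idx_foo_old;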
On Fri, Oct 17, 2008 at 11:00 PM, Віталій Тимчишин <[EMAIL PROTECTED]> wrote:
>
>
> 2008/10/17 Tory M Blue <[EMAIL PROTECTED]>
>>
>> The real issue is my index growth and my requirement for weekly
>> re-indexing (which blocks and therefore is more or less a
On Fri, Oct 17, 2008 at 10:35 AM, Kevin Grittner
<[EMAIL PROTECTED]> wrote:
> As already pointed out, that's a lot of free space. You don't use
> VACUUM FULL on this database, do you? That would keep the data
> relatively tight but seriously bloat indexes, which is consistent with
> your symptoms
On Fri, Oct 17, 2008 at 10:02 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 17, 2008 at 10:47 AM, Tory M Blue <[EMAIL PROTECTED]> wrote:
>> DETAIL: A total of 501440 page slots are in use (including overhead).
>> 501440 page slots are required to track
On Fri, Oct 17, 2008 at 9:30 AM, Kevin Grittner
<[EMAIL PROTECTED]> wrote:
>>>> "Tory M Blue" <[EMAIL PROTECTED]> wrote:
>
>> tell me if I should plan upgrades to 8.3.4.
>
> It's a good idea. It should be painless -- drop in and restart.
Good day,
So I've been running 8.3 for a few months now and things seem good.
I also note there are some bug fixes and you are up to 8.3.4 now, but
reading them I don't see that I'm affected by anything; please
tell me if I should plan upgrades to 8.3.4.
The real issue is my index growt
On Jan 27, 2008 4:08 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
>
> More segments means more disk space taken up with them and a longer crash
> recovery. Those are the downsides; if you can live with those there's no
> reason to run at <100 if that works for you. Fine-tuning here isn't
> really t
On Jan 24, 2008 10:49 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> 8.2.1 has a nasty bug related to statistics collection that causes
> performance issues exactly in the kind of heavy update situation you're
> in. That's actually why i asked for the exact 8.2 version. You should
> plan an upgrade
On Jan 23, 2008 4:31 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> Generally if you have a system doing many updates and inserts that slows
> for that long, it's because it hit a checkpoint. I'm not sure what your
> memory-related issues are but it's possible that might be from a backlog
> of sessi
I'm not sure what is going on, but I'm looking for some advice/knowledge.
I'm running multiple Postgres servers in a slon relationship. I have
hundreds of thousands of updates and inserts a day. But what I'm seeing
is that my server appears to "deallocate" memory (for lack of a better
term) and performanc