On Thu, Nov 15, 2012 at 2:44 PM, Merlin Moncure wrote:
>>> select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
>>> select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
>>> select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
>>> select(0, NULL, NULL, NULL, {0, 2000}) = 0 (Timeout)
>>> select
On Fri, Nov 16, 2012 at 8:21 AM, Merlin Moncure wrote:
> On Fri, Nov 16, 2012 at 9:52 AM, Vlad wrote:
>>
>>> *) failing that, LWLOCK_STATS macro can be compiled in to give us some
>>> information about the particular lock(s) we're binding on. Hopefully
>>> it's a lwlock -- this will make diagnos
On Sun, Nov 18, 2012 at 4:35 PM, Vick Khera wrote:
> On Sun, Nov 18, 2012 at 7:24 PM, Craig Ringer wrote:
>>
>> On 11/19/2012 12:57 AM, Vick Khera wrote:
>>
>>
>>
>> On Sun, Nov 18, 2012 at 2:29 AM, Tom Lane wrote:
>>>
>>> I'd generally recommend using "text" if you don't have any interest in
>>
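The recommendation above is cut off, but it reads like the usual text-versus-varchar(n) advice; a small sketch under that assumption (table and constraint names are made up for illustration):

  -- "text" carries no arbitrary length limit, and in PostgreSQL there is no
  -- performance penalty compared to varchar
  CREATE TABLE notes (
      id   serial PRIMARY KEY,
      body text NOT NULL
  );
  -- if a length rule is genuinely needed, it can live in a CHECK constraint:
  -- ALTER TABLE notes ADD CONSTRAINT body_len CHECK (length(body) <= 10000);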
On Fri, Nov 16, 2012 at 12:13 PM, Vlad wrote:
> ok, I've applied that patch and ran. The stall started around 13:50:45...50
> and lasted until the end
>
> https://dl.dropbox.com/u/109778/postgresql-2012-11-16_134904-stripped.log
That isn't as much log as I expected. But I guess only the tip of t
On Tue, Nov 20, 2012 at 8:03 AM, Merlin Moncure wrote:
> On Tue, Nov 20, 2012 at 9:02 AM, Shaun Thomas wrote:
>> On 11/16/2012 02:31 PM, Merlin Moncure wrote:
>>
>>> no single thing really stands out -- contention is all over the place.
>>> lwlock, pinbuffer, dynahash (especially). I am again
On Tue, Nov 20, 2012 at 2:26 PM, Shaun Thomas wrote:
> On 11/20/2012 04:08 PM, Jeff Janes wrote:
>
>> Shaun Thomas reports one that is (I assume) not read intensive, but
>> his diagnosis is that this is a kernel bug where a larger
>> shared_buffers for no good reason cause
On Wed, Nov 21, 2012 at 7:29 AM, Vlad Marchenko wrote:
> update on my problem: despite pgbouncer, the problem still occurs on my
> end.
As Merlin asked, how big is the pool? Maybe you are using a large
enough pool so as to defeat the purpose of restricting the number of
connections.
> Also,
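One quick sanity check on the pool (plain SQL, not specific to this thread's setup) is to count the server-side connections the pooler has actually opened while a stall is happening:

  -- total backends currently connected to the server
  SELECT count(*) FROM pg_stat_activity;
  -- broken down by database and user
  SELECT datname, usename, count(*) FROM pg_stat_activity GROUP BY datname, usename;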
On Thu, Nov 22, 2012 at 7:29 PM, wd wrote:
> Thanks for your reply, the logs are something like below. Postgres will
> restore every WAL log I put in the xlog directory, and then continues waiting
> for the next WAL log. The postgres version is 9.1.6.
>
> [2012-11-22 18:49:24.175 CST 25744 50ae0334
On Tue, Nov 20, 2012 at 9:05 AM, Merlin Moncure wrote:
> On Tue, Nov 20, 2012 at 10:50 AM, Jeff Janes wrote:
>>
>> I wouldn't expect so. Increasing shared_buffers should either fix
>> free list lock contention, or leave it unchanged, not make it worse.
>
> AIUI,
On Wed, Nov 21, 2012 at 9:05 AM, Vlad wrote:
> it's session mode and the pool size is 1200 (because I need to guarantee that in
> the worst case we have enough slots for all possible clients),
Wouldn't the clients prefer to wait 100ms to get a connection if that
means their query finishes in 100ms,
On Sat, Nov 24, 2012 at 6:00 AM, wd wrote:
> Yes, you are right, after setting the two commands, the recovery will stop at
> that time.
>
> But there is another question: how can this recovered Postgres be made
> readable and writable? According to the manual, Postgres should rename
> recovery.conf to
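As a side note, once recovery has actually ended (and recovery.conf has been renamed away), the server's state can be checked from SQL; a minimal sketch (pg_is_in_recovery() exists from 9.0 on):

  -- returns false once the server has finished recovery and accepts writes
  SELECT pg_is_in_recovery();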
On Tue, Nov 20, 2012 at 12:00 PM, Merlin Moncure wrote:
> On Tue, Nov 20, 2012 at 12:16 PM, Jeff Janes wrote:
>>
>> The freelist should never loop. It is written as a loop, but I think
>> there is currently no code path which ends up with valid buffers being
>> on
On Wed, Nov 21, 2012 at 8:14 AM, Vlad wrote:
>
>
> '-M prepared' produces normal results, while '-M simple' results in 40% sys
> cpu. '-M extended' is somewhere in between.
> I'm running it as 60 clients, 2 threads.
2 threads is pretty low for 60 clients. What happens if you increase
-j to eith
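Part of what '-M prepared' removes is the per-execution parse and plan step; a rough SQL-level illustration of the difference (pgbench does this at the protocol level, and the table and column names below are pgbench's own):

  -- simple protocol: the full statement is parsed and planned every time
  SELECT abalance FROM pgbench_accounts WHERE aid = 42;

  -- prepared: parse/plan once per session, then only bind parameters and execute
  PREPARE get_balance (int) AS
      SELECT abalance FROM pgbench_accounts WHERE aid = $1;
  EXECUTE get_balance(42);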
On Sat, Nov 24, 2012 at 3:44 PM, wd wrote:
>>
>> What entries are you getting in the log file?
>>
> Logs are something like this:
>
> [2012-11-24 21:51:33.374 CST 583 50b0d0e5.247 1 0]LOG: database system
> was shut down in recovery at 2012-11-24 21:51:32 CST
> [2012-11-24 21:51:33.375 CS
On Mon, Nov 26, 2012 at 12:23 PM, Kevin Grittner wrote:
> Jeff Janes wrote:
>
>> FATAL: requested recovery stop point is before consistent recovery point
>>
>> I don't understand why you are not getting this message.
>
> Is it before the point where pg_stop_b
On Tue, Nov 27, 2012 at 6:59 AM, Kevin Grittner wrote:
> wd wrote:
>
>> the time is between backup start and stop.
>
> That is the problem -- until the point where pg_stop_backup() was
> run, PostgreSQL can't be sure of having a consistent database. It is
> waiting for enough WAL to get it there.
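For readers landing mid-thread: the backup discussed involves the low-level base backup functions, and the consistency point Kevin describes is tied to pg_stop_backup(); a minimal sketch (the label is arbitrary):

  SELECT pg_start_backup('nightly');  -- take the file-level copy after this call
  -- ... copy the data directory ...
  SELECT pg_stop_backup();            -- recovery becomes consistent only after WAL
                                      -- up to this point has been replayed, so a
                                      -- recovery target earlier than this will fail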
On Tue, Nov 27, 2012 at 6:59 AM, Kevin Grittner wrote:
> wd wrote:
>
>> the time is between backup start and stop.
>
> That is the problem -- until the point where pg_stop_backup() was
> run, PostgreSQL can't be sure of having a consistent database.
In 9.2, it seems to be willing to give it a shot
On Thu, Nov 29, 2012 at 8:11 AM, Schnabel, Robert D. wrote:
> My question is whether or not the “ALTER TABLE gen1011 CLUSTER ON
> xgen1011_si_sn” actually clusters the table at that point or if it just
> tells it to use that index for clustering?
It just marks it to use that index if/when it is
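In other words, the marking and the physical rewrite are two separate steps; a sketch using the table and index names from the question:

  -- records xgen1011_si_sn as the clustering index; no rows are moved yet
  ALTER TABLE gen1011 CLUSTER ON xgen1011_si_sn;

  -- the physical reordering happens only when CLUSTER is actually run
  CLUSTER gen1011;
  -- or, for every table that has a recorded clustering index:
  CLUSTER;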
On Thu, Nov 29, 2012 at 9:33 AM, Shaun Thomas wrote:
> On 11/29/2012 11:28 AM, Mike Christensen wrote:
>
>> It's always kinda annoyed me that the CLUSTER command in Postgres
>> doesn't work like it does on Microsoft SQL.
>
>
> It's a natural side-effect of MVCC, unfortunately. Data goes wherever i
On Fri, Nov 30, 2012 at 5:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did, I am not sure what else
> to look for. I know the increase in rows will have some effect, but I just
> don't think the query should go from 4 minutes to
On Fri, Nov 30, 2012 at 12:22 PM, Henry Drexler wrote:
> On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
>>
>> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
>
>
> Thanks, here they are:
>
> for the approx 65 million row approx 50 min version:
>
>
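The plans themselves are cut from this excerpt; for reference, the requested form is the one below, shown here against a small self-contained placeholder query rather than the thread's 65-million-row one:

  -- BUFFERS adds shared/local/temp block hits, reads and writes to every plan node
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*) FROM generate_series(1, 1000000) AS g(i) WHERE i % 7 = 0;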
On Wed, Dec 5, 2012 at 5:17 AM, James Cowell wrote:
> I'm using pg_bulkload to load large amounts of CSV data into a postgres
> database hourly.
>
> This database is replicated to a second node.
pg_bulkload is fundamentally incompatible with PITR, streaming, and
forms of replication that depend o
On Wed, Dec 5, 2012 at 7:02 AM, Gauthier, Dave wrote:
> Actually, maybe it didn't work. What's wrong with this picture...
>
>
> sde=# alter default privileges for user "select" grant select on tables to
> "select";
Remove the 'for user "select"'.
What that does is make the default permissions ap
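A sketch of the corrected statement, keeping the role name "select" from the thread:

  -- without a FOR ROLE/USER clause, the defaults apply to objects later created
  -- by the role running this command
  ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO "select";

  -- with FOR USER "select", they would instead apply only to objects later
  -- created *by* the role "select", which is the surprise described above:
  -- ALTER DEFAULT PRIVILEGES FOR USER "select" GRANT SELECT ON TABLES TO "select";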
On Fri, Dec 7, 2012 at 1:58 PM, Ryan Kelly wrote:
> I have a very large query that also touches quite a bit of data. It
> generates a large temporary file (actually, several, because the total
> size is about 4.5GB). I'm wondering which part of the query's plan
> causes these files to be created.
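Two things usually answer that question: log_temp_files records every temporary file (and its size) in the server log, and EXPLAIN ANALYZE labels the node that spilled ("Sort Method: external merge Disk: ...", hash "Batches", and so on). A self-contained sketch:

  SET log_temp_files = 0;   -- 0 = log every temporary file, with its size
  EXPLAIN ANALYZE           -- the plan labels the node that went to disk
  SELECT * FROM generate_series(1, 5000000) AS g(i) ORDER BY i DESC;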
On Fri, Nov 30, 2012 at 6:44 AM, François Beausoleil wrote:
>
> How come no new restart points were achieved? I had 4008 WAL archives on my
> slave. I expected them to be removed as streaming replication progressed. Are
> restart points prevented while long queries are running?
They can be pre
On Sat, Dec 8, 2012 at 5:54 AM, Condor wrote:
> I am interested to know where my mistake is, or whether something is wrong
> with the server, which I doubt. Here is my current query with explain:
> (I changed names to XXX YYY ZZZ because the original names are written in CP1251
> and most people on the list can't read them)
>
On Sun, Dec 9, 2012 at 2:36 AM, Thomas Kellerer wrote:
> I recently upgraded from 9.2.1 to 9.2.2 and my unit tests for my application
> are failing when cleaning up my test database.
>
> I am using the statement:
>
>    drop owned by wbjunit cascade;
>
> at the end of a test suite to get rid of ev
On Mon, Dec 3, 2012 at 5:56 AM, Henry Drexler wrote:
> On Sun, Dec 2, 2012 at 12:44 AM, Jeff Janes wrote:
>>
>> Could you do it for the recursive
>> SQL (the one inside the function) like you had previously done for the
>> regular explain?
>>
>> Cheers,
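If running the inner SQL by hand is awkward, one alternative (not necessarily what was done in this thread) is the contrib module auto_explain, which can log plans for statements executed inside functions:

  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = 0;        -- log a plan for every statement
  SET auto_explain.log_analyze = on;            -- include actual rows and timings
  SET auto_explain.log_nested_statements = on;  -- include SQL run inside functions
  -- now call the function; the nested plans appear in the server log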
On Sun, Dec 9, 2012 at 10:59 PM, Condor wrote:
> On 2012-12-10 00:31, Jeff Janes wrote:
>>
>> On Sat, Dec 8, 2012 at 5:54 AM, Condor wrote:
>>
>>> create index clients_tbl_firstname_idx on clients_tbl using btree
>>> (firstname
>>> COLLATE "
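The excerpt cuts off inside the index definition. One common reason to attach an explicit COLLATE to an index is to let LIKE 'prefix%' searches use it when the database locale is not C; whether that matches this thread's goal can't be seen from the truncated text, but for reference the two usual forms are (names follow the excerpt):

  CREATE INDEX clients_tbl_firstname_c_idx
      ON clients_tbl (firstname COLLATE "C");
  -- or, equivalently for pattern matching:
  CREATE INDEX clients_tbl_firstname_pat_idx
      ON clients_tbl (firstname text_pattern_ops);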
On Wed, Dec 5, 2012 at 5:17 AM, James Cowell wrote:
> I'm using pg_bulkload to load large amounts of CSV data into a postgres
> database hourly.
>
> This database is replicated to a second node.
>
> Whenever a bulk load happens the indexes on the updated tables on the
> secondary node corrupt and
On Mon, Dec 10, 2012 at 12:26 PM, Mihai Popa wrote:
> Hi,
>
> I've recently inherited a project that involves importing a large set of
> Access mdb files into a Postgres or MySQL database.
> The process is to export the mdb's to comma-separated files, then import
> those into the final database.
>
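Whatever tool produces the CSV, the Postgres side of the load is normally COPY; a minimal sketch (table name and path are placeholders):

  -- server-side COPY; use psql's \copy instead if the file lives on the client
  COPY target_table FROM '/path/to/export.csv' WITH (FORMAT csv, HEADER true);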
On Tue, Dec 11, 2012 at 9:37 AM, Eng. AlSamman wrote:
> Hello everyone,
>
> I am trying to implement a high-availability cluster using only two nodes,
> without any shared disk storage.
>
> In my implementation, the primary database has continuous archiving set up
> to a directory residing on the