On Mar 12, 2013, at 8:09 PM, Joe Van Dyk wrote:
> On Mar 12, 2013, at 8:42 AM, Perry Smith wrote:
>
>>
>> The other thought is perhaps there is a "snapshot" type concept. I don't
>> see it in the list of SQL commands. A "snapshot" would do exactly what it
>> sounds like. It would take
To all who replied:
Thank you. I made a typo: I meant "transaction" instead of "truncate".
I had not seriously considered pg_dump / pg_restore because I assumed it would
be fairly slow but I will experiment with pg_restore and template techniques
this weekend and see which ones prove viable.
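For anyone curious, the template technique can be sketched roughly like this (the database names here are hypothetical, and CREATE DATABASE ... TEMPLATE requires that nothing else is connected to the template database):

```sql
-- One-time setup: build app_test_template with the trimmed production
-- subset (e.g. restored via pg_restore). Then, per test run:
CREATE DATABASE app_test TEMPLATE app_test_template;
-- ... run the test suite against app_test ...
DROP DATABASE app_test;
```

Cloning a template is a file-level copy, so it is typically much faster than replaying a dump with pg_restore for each test run.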
I kn
On Mar 12, 2013, at 8:42 AM, Perry Smith wrote:
I tried posting this from Google Groups but I did not see it come through
after an hour so this may be a duplicate message for some.
The current testing technique for things like Ruby on Rails has three
choices, but none of the choices will work
Erik Jones writes:
> What's the best way to determine the age of the current WAL? Not the current
> segment, but the whole thing. Put another way: is there a way to determine
> a timestamp for the oldest available transaction in the WAL?
Transaction commit and abort records carry timestamps,
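If it helps, those timestamps can be inspected directly (a sketch, assuming PostgreSQL 9.3+ where pg_xlogdump ships with the server; the segment file name below is just an example):

```shell
# Dump a WAL segment and keep only transaction commit/abort records,
# which carry timestamps (the segment name is an example)
pg_xlogdump $PGDATA/pg_xlog/000000010000000000000001 | grep -i -e commit -e abort
```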
On 12 March 2013 21:59, John R Pierce wrote:
> On 3/12/2013 2:31 PM, Gregg Jaskiewicz wrote:
>
>> I was basically under the impression that separating the WAL is a big
>> plus. On top of that, having a separate partition to hold some other data
>> will help too.
>> But it sounds, from what you said, like
On 3/12/2013 2:31 PM, Gregg Jaskiewicz wrote:
I was basically under the impression that separating the WAL is a big plus.
On top of that, having a separate partition to hold some other data will
help too.
But it sounds, from what you said, like having everything in a single
logical drive will work, because RAID
What's the best way to determine the age of the current WAL? Not the current
segment, but the whole thing. Put another way: is there a way to determine a
timestamp for the oldest available transaction in the WAL?
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
Ok,
So by that token (the more drives the better), I should have RAID 5 (or
whichever level will work) with all 6 drives in it?
I was thinking about splitting it up like this: I have 6 drives (and one
spare). Combine them into 3 separate logical drives in a mirrored
configuration (for some hardware redundan
2013-03-08 13:27:16 +0200 Emre Hasegeli :
PostgreSQL writes several log entries like the following during the problem,
which I never saw before 9.2.3:
LOG: process 4793 acquired ExclusiveLock on extension of relation
305605 of database 16396 after 2348.675 ms
I tried
* to downgrade to 9.2.2
* to disable a
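As an aside, the OIDs in that log line can be resolved to names with catalog queries (a sketch using the OIDs quoted above):

```sql
-- Which database is OID 16396?
SELECT datname FROM pg_database WHERE oid = 16396;
-- Then, connected to that database, which relation is OID 305605?
SELECT relname FROM pg_class WHERE oid = 305605;
```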
2013/3/13 Alexander Farber :
> Unfortunately doesn't work -
>
> On Tue, Mar 12, 2013 at 5:53 PM, Ian Lawrence Barwick
> wrote:
>> 2013/3/13 Alexander Farber :
>>>
>>> I have a list of 40 non-english words,
>>> each on a separate line and in UTF8 format,
>>> which I'd like to put in the "word"
Unfortunately doesn't work -
On Tue, Mar 12, 2013 at 5:53 PM, Ian Lawrence Barwick wrote:
> 2013/3/13 Alexander Farber :
>>
>> I have a list of 40 non-english words,
>> each on a separate line and in UTF8 format,
>> which I'd like to put in the "word" column
>> of the following table (also in
2013/3/13 Alexander Farber :
> Hello,
>
> I have a list of 40 non-english words,
> each on a separate line and in UTF8 format,
> which I'd like to put in the "word" column
> of the following table (also in UTF8 and 8.4.13):
>
> create table good_words (
> word varchar(64) primary key,
>
Hello,
I have a list of 40 non-English words,
each on a separate line and in UTF8 format,
which I'd like to put in the "word" column
of the following table (also in UTF8, on PostgreSQL 8.4.13):
create table good_words (
word varchar(64) primary key,
verified boolean not null default false
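For loading a one-word-per-line file into that table, psql's \copy is a simple approach (a sketch; the file path is a placeholder, and the client encoding must match the file):

```sql
-- In psql: make sure the client encoding matches the UTF8 file
SET client_encoding = 'UTF8';
-- \copy reads the file on the client side, one word per line
\copy good_words (word) FROM '/path/to/words.txt'
```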
On Mar 12, 2013, at 8:41 AM, Perry Smith wrote:
>
>
> One choice would be to create the database, use it, and then drop it for each
> test. I would create the database from a template that already has data
> taken from the production database (and probably trimmed down to a small
> subset o
On 03/12/2013 08:41 AM, Perry Smith wrote:
One choice would be to create the database, use it, and then drop it for each
test. I would create the database from a template that already has data taken
from the production database (and probably trimmed down to a small subset of
it). This requi
I tried posting this from Google Groups but I did not see it come through after
an hour so this may be a duplicate message for some.
The current testing technique for things like Ruby on Rails has three
choices, but none of the choices will work in my case.
The first choice is "truncate" whi
2013/3/13 Ian Lawrence Barwick :
> 2013/3/12 Gauthier, Dave :
>> Hi:
>>
>> v9.0.1 on linux.
>>
>> I have a table with a column that is a csv. Users will select records based
>> upon the existence of an element of the csv. There is an index on that
>> column but I'm thinking that it won't be of mu
An option would be to create a column of type tsvector. That way you could do
full-text searches using whole or partial words and get results that include
rows containing other forms of the word.
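A minimal sketch of that approach (the table and column names are hypothetical):

```sql
-- Hypothetical table "docs" with a text column "body"
ALTER TABLE docs ADD COLUMN body_tsv tsvector;
UPDATE docs SET body_tsv = to_tsvector('english', body);
CREATE INDEX docs_body_tsv_idx ON docs USING gin (body_tsv);

-- Matches rows containing forms of "run" (runs, running, ...);
-- 'run:*' would match on the prefix instead
SELECT * FROM docs WHERE body_tsv @@ to_tsquery('english', 'run');
```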
2013/3/12 Gauthier, Dave :
> Hi:
>
> v9.0.1 on linux.
>
> I have a table with a column that is a csv. Users will select records based
> upon the existence of an element of the csv. There is an index on that
> column but I'm thinking that it won't be of much use in this situation. Is
> there a wa
Hi:
v9.0.1 on linux.
I have a table with a column that holds a comma-separated list (CSV). Users
will select records based on the existence of an element within that list.
There is an index on that column, but I'm thinking it won't be of much use in
this situation. Is there a way to facilitate these queries?
Exampl
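One commonly suggested approach (a sketch; the table and column names are hypothetical) is to index the parsed array form of the column, so that element-containment queries can use a GIN index:

```sql
-- Hypothetical table: "tags" holds a comma-separated list
CREATE TABLE items (id int PRIMARY KEY, tags text);

-- Expression index on the parsed array (string_to_array is immutable)
CREATE INDEX items_tags_idx ON items USING gin (string_to_array(tags, ','));

-- Finds rows whose list contains the element 'red'
SELECT * FROM items WHERE string_to_array(tags, ',') @> ARRAY['red'];
```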
Hello, everybody
I have a problem and need some help.
My environment: one master and one slave (PostgreSQL 9.2.2).
My cluster is about 160GB, and I use pg_basebackup to synchronize them
(master and slave).
The syntax is below:
pg_basebackup -h productionaddress -p productionport -U productionuser
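For reference, a complete invocation might look like the sketch below (the connection parameters and target directory are placeholders carried over from the quoted command; -X stream, which streams WAL while the backup runs, is available from 9.2):

```shell
# Sketch only: connection parameters and data directory are placeholders.
# -P reports progress; -X stream streams WAL concurrently with the backup.
pg_basebackup -h productionaddress -p productionport -U productionuser \
  -D /var/lib/postgresql/9.2/standby -X stream -P
```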
Paul Jungwirth wrote:
>> Out of curiosity: any reason the ORDER BY should be in the subquery? It
>> seems like it ought to be in the UPDATE (if that's allowed).
>
> Hmm, it's not allowed. :-) It's still surprising that you can guarantee the
> order of a multi-row UPDATE by ordering a subquer
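The pattern under discussion can be sketched with a hypothetical table (the ORDER BY plus FOR UPDATE in the subquery controls the order in which rows are locked, which is the part that matters for avoiding deadlocks between concurrent updaters):

```sql
-- Hypothetical table: accounts(id int PRIMARY KEY, balance numeric)
UPDATE accounts AS a
SET    balance = a.balance + 1
FROM  (SELECT id
       FROM   accounts
       ORDER  BY id
       FOR UPDATE) AS locked
WHERE  a.id = locked.id;
```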