different parts of the dataset, so I wouldn't expect
them to hit the same page within a 3-minute period. But another big part
is about updating a big GIN index, and that would most likely benefit if
the interval could be pushed higher.
Can the log-volume be decreased by tuning in
ed thirty four') @@ to_tsquery('hun');
> does not return anything.
Try to_tsquery('hun:*')
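For example, a minimal sketch (the full test string is an assumption, since the original query is truncated):

    SELECT to_tsvector('one hundred thirty four') @@ to_tsquery('hun');
    -- false: 'hun' is not a complete lexeme in the vector
    SELECT to_tsvector('one hundred thirty four') @@ to_tsquery('hun:*');
    -- true: ':*' makes 'hun' match lexemes by prefix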
Jesper
Hi.
Is there any way I can force explicit use of transactions on
inserts/updates?
I have users on the system who may forget to code it that way, and it
would be nice to be able to force the database to just kick them out
if it didn't happen.
--
Jesper
pg to do just that? And reach a consistent state
after that.
Should I manually craft a backup_label file?
Thanks.
Jesper .. and yes, I'll make sure that the exit status from
pg_start_backup gets propagated
correctly back from now on..
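For reference, a minimal sketch of the sequence that creates and removes backup_label (8.x-era functions; the label text is an assumption):

    SELECT pg_start_backup('nightly');  -- writes backup_label into the data directory
    -- copy the data directory with an external tool here
    SELECT pg_stop_backup();            -- removes backup_label, archives the final WAL segment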
Hi List
What are your production experiences with the pg_lesslog module, which
compresses the WAL log?
I would love to use it, but it would be nice to hear some reports from
people having used it in production for some time...
Thanks.
--
Jesper
Hi.
I have got a corrupt db.. most likely due to an xfs bug..
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid page header in block
14174944 of relation base/16385/58318948
Can I somehow get pg_dump to "ignore" that block and dump everything else?
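One commonly suggested escape hatch, not quoted in this excerpt, so treat it as an assumption: zero_damaged_pages. Note that it permanently discards the contents of the bad pages, and for pg_dump itself the setting has to reach the backend (e.g. via postgresql.conf or PGOPTIONS):

    SET zero_damaged_pages = on;         -- superuser only; bad pages are read back as empty
    SELECT count(*) FROM damaged_table;  -- hypothetical table name; forces the bad block to be read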
had reached.
Like "show max_connections"
Jesper
you when it grows significantly.
--
Jesper
"you can make the choice based
on cost alone.. DAS is soo-much-cheaper". In the comparison, I get
an equal number of disks with the same characteristics, in the same
RAID configuration, with 1-2GB of battery-backed cache on each. So on
paper, I think the systems are directly comparable. In the real
world
drives.
The first is 8Gbit Fibre Channel, the last is 3Gbit DAS SAS. The
Fibre Channel version is about 20% more expensive per TB.
So of course it is a "fraction of the cost of a SAN", but it is a
fairly small one.
--
Jesper
n not.
Use Londiste or Slony.
Or just a dump/restore.. it is most likely less than 24 hours.. so if you
can justify that much down-time (read-only access), then that is the
easiest path.
--
Jesper
On 2010-05-21 00:04, Greg Smith wrote:
Jesper Krogh wrote:
> A battery-backed RAID controller is not that expensive (in the
> range of 1 or 2 SSD disks), and it is (more or less) a silver bullet
> for the task you describe.
Maybe even less; in order to get an SSD that's reli
controller is way easier to get right.
... if you had a huge dataset you were doing random reads into and
couldn't beef up your system with more memory (cheaply), SSDs might
be a good solution for that.
--
Jesper
> backup the updates, so
> on.
This is conceptually PITR.
--
Jesper
you have enabled PITR and tell the database that you do so:
http://www.postgresql.org/docs/8.4/static/continuous-archiving.html
Works excellently..
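A minimal sketch of what that enabling looks like in 8.4-era postgresql.conf (the archive path is an assumption):

    archive_mode = on                                  # requires a server restart
    archive_command = 'cp %p /path/to/walarchive/%f'   # %p = source path, %f = file name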
Jesper
find /path/to/walarchive/ -type f -ctime +1 | xargs rm -f  # remove archived WAL older than a day
The data catalog is quite big, ~200GB, so the initial backup of it
will take quite some time. If there has been activity in the database
over this amount of time, will it still be possible to bring the db to a
consistent state using the
job | 2008-04-16 07:19:24.832413+02 | 2008-04-17 01:40:05.242914+02
So it seems to be running. But shouldn't it hit the table a lot more?
How do I check whether it is anywhere near optimal?
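One way to sanity-check it is via the statistics views, a sketch (n_dead_tup needs 8.3+; the table name is taken from the row above):

    SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'job';  -- dead tuples piling up means autovacuum is falling behind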
--
Jesper
Tom Lane wrote:
> Jesper Krogh <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> Drop the constraints in the source database.
>
>> That would be my workaround for the problem. But isn't it somehow
>> desirable that pg_dumpall | psql "always works"?
Tom Lane wrote:
> Jesper Krogh <[EMAIL PROTECTED]> writes:
>> The tables are running a "home-made" time-travelling feature where a
>> constraint on the table implements the foreign keys on the table.
>
> You mean you have check constraints that do SELECTs on other tables?
a
constraint on the table implements the foreign keys on the table.
How can I instruct pg_dumpall to turn off these constraints during
dump/restore?
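For illustration, a hypothetical sketch of such a setup (all names invented): a CHECK constraint calls a function that queries another table. pg_dump can order real foreign keys around the data load, but it cannot see a dependency hidden inside a function, which is why the plain pg_dumpall | psql round-trip breaks.

    -- a cross-table dependency hidden inside a CHECK constraint
    CREATE FUNCTION person_exists(pid integer) RETURNS boolean AS
      'SELECT EXISTS (SELECT 1 FROM person WHERE person_id = $1)'
    LANGUAGE sql;

    CREATE TABLE job_history (
      person_id  integer CHECK (person_exists(person_id)),
      valid_from timestamptz,
      valid_to   timestamptz
    );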
Jesper
--
Jesper Krogh, [EMAIL PROTECTED]
Is it possible to migrate from 7.4.3 - i386 to 7.4.6 - x86-64 without a
dump and restore?
Jesper
--
./Jesper Krogh, [EMAIL PROTECTED]
Jabber ID: [EMAIL PROTECTED]