U at 100%.
I never saw any resolution to this thread - were the original tests on the
Opteron and OS X identical, or were they two different workloads?
Wes
threads on
various linux operating systems. The difference in our application by
simply relinking at run time (LD_PRELOAD) with libtcmalloc was astounding.
Wes
E had some advantages, but can't
find the information. Is there a performance hit with REINDEX during
creation because of locking issues?
2. I'm assuming REINDEX would avoid the time involved in recreating the
foreign key constraints?
3. With a REINDEX DATABASE, how can I mo
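For reference, a minimal sketch of the two approaches being compared (the table
and index names below are made up, not taken from this thread):

  -- rebuild every index on one table in place; constraint definitions are untouched
  REINDEX TABLE mytable;

  -- versus dropping and recreating a single index by hand
  DROP INDEX mytable_i_field;
  CREATE INDEX mytable_i_field ON mytable (field);

REINDEX only rebuilds index storage, so foreign key and primary key definitions
stay as they are; both forms lock out writes to the table while they run.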
On 1/24/08 12:48 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Wes <[EMAIL PROTECTED]> writes:
>> I'm running 8.1.4. Assume I have exclusive access to the DB.
>
> You really ought to update to 8.1.something-newer, but I digress.
I was planning on upgradi
're doing large operations like
> this.
I have checkpoint_segments set to 60, and no warnings showing up in the log.
Wes
Just a follow-up on this... The REINDEX took about 2 1/2 days. I didn't
gain much disk space back - a full backup takes just as long as before, but
the vacuum time dropped from 30 hours to 3 hours.
Wes
>> 1. Is there any advantage to doing the DROP/CREATE over just doing a REINDEX
d that is fixed in
8.2 or 8.3 (don't recall which I saw it in), but have never gotten
confirmation from anyone on that.
Wes
nce vacuum has to
sequentially read the entire database, I would have expected a linear
increase - about 1.5 hours now.
There are currently no deletes or modifies to the database - only inserts.
This is on PostgreSQL 7.4.5, RedHat ES 3.0.
Wes
allback.
> Again, VACUUM VERBOSE info would be informative (it's sufficient to look
> at your larger tables for this).
I'll set that up to run tonight and see if it gives any clues. Last night,
vacuum ran over 5 hours.
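For reference, the kind of per-table run being suggested here, with a
hypothetical table name:

  VACUUM VERBOSE mytable;

The INFO lines it prints (removable vs. nonremovable row versions, page counts
per table and index) are where the clues would show up.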
Wes
uld that have
anything to do with the non-linear behavior?
Wes
INFO: "blah": found 0 removable, 366326534 nonremovable row versions in
3241829 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
Wes
SM size: 1000 relations + 100 pages = 5920 kB shared
>> memory.
>
> Well, you don't have a problem with FSM being too small anyway ;-)
Nope... Preparation for when deletes start kicking in down the road. If I
can only do a vacuum once a week, I've got to have lots of spac
s on a separate volume.
I thought it was a 2.6 kernel, but it looks like it is 2.4.20. I need to
monitor the system when the vacuum is running to see if sar/top show
anything. I wonder if it's hitting the kswapd thrashing problem?
Wes
feet in releasing hardware monitoring compatible with 2.6
kernel. So, we're going to try a 2.4.29 kernel and hope that the problem is
fixed there. With any luck, by Friday I'll know if the kswapd problem is
fixed in 2.4.29 and if that solves the excessive vac
s time.
I guess the next step is to try reindexing a couple of the big indexes and
see if that helps.
Wes
On 3/30/05 12:09 PM, "Adam Siegel" <[EMAIL PROTECTED]> wrote:
> How can we physically recover "real" disk space for the rows that were
> deleted from the table?
vacuum full
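A minimal sketch of that, with a hypothetical table name. Old-style VACUUM FULL
compacts the table and gives the freed space back to the operating system, but
it takes an exclusive lock on the table for the duration:

  VACUUM FULL VERBOSE mytable;
  -- on these releases VACUUM FULL tends to bloat indexes, so a follow-up
  -- REINDEX TABLE mytable; is often worthwhile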
Wes
at vacuum processes indexes in index order, not physical disk
order. I guess we add a periodic reindex to our maintenance procedures...
Wes
indexes in order should be much better.
Wes
now contains 393961361 row
versions in 2435100 pages
INFO: index "message_recipients_i_message" now contains 393934394 row
versions in 1499853 pages
After reindex:
INFO: index "message_recipients_i_recip_date" now contains 401798357 row
versions in 1765613 pages
INFO: index "
-Usnet gateway such that the
originating email addresses are either removed or scrambled so that posting
to the mailing list doesn't result in your email address being plastered all
over Usenet? People that intentionally post to Usenet generally don't use a
replyable email add
System: Mac OS X 10.4.2
PostgreSQL: 8.1 b2
Running 'make check', I get the following failure:
rm -rf ./testtablespace
mkdir ./testtablespace
/bin/sh ./pg_regress --temp-install --top-builddir=../../..
--temp-port=55432 --schedule=./parallel_schedule --multibyte=SQL_ASCII
--load-language=plpgsql
=
On 10/2/05 7:48 AM, "William ZHANG" <[EMAIL PROTECTED]> wrote:
> Yes, the Makefiles cannot deal with spaces correctly.
> Seems we should avoid use the `complicated' path.
Such paths are normal on systems with a GUI interface. They are not out of
the ord
w. To resolve my immediate
problem, I can just temporarily rename my hard drive. I'm just reporting
this as a problem that should be fixed.
Wes
install.log has:
make -C doc install
make[4]: Nothing to be done for `install'.
make -C src install
/bin/sh ../config/mkinstalldirs "
illions of documents).
Is anyone aware of any such solutions for PostgreSQL, open source or
otherwise?
Thanks
Wes
We reindex when it starts taking 24 hours to
vacuum.
The above referenced posting is from May 1, 2006. Is pgsql 8.2 the first
release to have this code?
Wes
also saw kswapd going nuts with the 2.4.21
kernel, destroying system performance. The only solution we found was to
upgrade to a 2.6 kernel. The problem has not reoccurred since then. Our
database is currently about 67 gigabytes and is growing at about 1 million
records per day. We are using Postgresql
ead against the wall trying to find the problem.
Wes
r database, it took about 3 hours to
import 265 million rows of data. It then took another 16 hours to rebuild
all the indexes. I think the entire pg_dumpall/reload process took about 21
hours +/-. I wonder what it will be like with 1.5 billion rows...
Wes
than available RAM though. How can I check index sizes?
We already have pg_xlog on a dedicated mirrored disk. Would it help
significantly to give pgsql_tmp its own mirrored disk? PGDATA is on an 8
disk hardware RAID 5.
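One way to answer the index-size question from the system catalogs; a sketch
that should work on 7.4 and 8.x (relpages is only as current as the last
VACUUM/ANALYZE, and the math assumes the default 8 kB block size):

  SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
  FROM pg_class
  WHERE relkind = 'i'
  ORDER BY relpages DESC
  LIMIT 20;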
Wes
r than invent a new one.
Of course, in Oracle 'count(*)' is instantaneous. It doesn't have to count
the physical records one by one.
Wes
ever, my
experience has been that count(*) on Oracle is a whole lot faster than
PostgreSQL - what appeared instantaneous on Oracle took some time on
PostgreSQL. That was one of the first things I noticed when moving a
database application to PostgreSQL. I've sinc
analyze provides) in a very short period. When you've
got a relatively wide table that has hundreds of millions to over a billion
rows, and you need to report on how many rows are in the table, that can take a
long time.
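Where an estimate is good enough, the planner's statistics avoid the scan
entirely; a sketch with a hypothetical table name (the figure is only as fresh
as the last VACUUM/ANALYZE):

  SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
  WHERE relname = 'mytable';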
Wes
in a given day (or even week), that isn't accurate
enough.
Wes
arter is called. I guess
the best you can do is have a startup script that notifies you if the values
are wrong.
Wes
On 1/20/05 10:27 PM, "Jonel Rienton" <[EMAIL PROTECTED]> wrote:
> have you tried using /etc/sysctl.conf and saving the shmax value there?
Unfortunately, the -p parameter does not appear to be valid, nor does
'strings -a' show 'conf' in the
eplied to it). The previous statement had been that they had to be
in /etc/rc because startup scripts were now too late in the boot process. I
was just clarifying the reason.
Wes
s just
working space to more efficiently build the initial constraint, or does it
actually write this to the database?
Wes
try to create its own
index anyway.
Is there something I'm overlooking, or is this a bug?
Wes
les, the data disk was still the heaviest hit
(expected no or little access there), and it beat the living daylights out
of my swap - pageins/outs like crazy. The I/O on the index disks was
negligible compared to the data and swap disks. I won't try that again...
Wes
..
I'm not going to have to "initdb --locale=C", am I? I looked at index
classes, and that doesn't appear to be something I want to do, due to
performance. What kind of performance hit do you actually take by using an
index class?
Wes
Pg_controldata shows:
Maximum leng
oad the database (takes a weekend),
can I define two indexes on the same field, one using operator classes and
one not (and have them automatically used as appropriate)?
Because of the time involved, I'm trying to hold off on another reload until
we upgrade to 8.x.
Wes
goes wrong with the
load process.
I've never quite understood why a READ of a record with a foreign key
reference results in the referenced record being locked with more than a
shared lock.
Wes
w-level locking."
So, until 8.1 PostgreSQL had "something better than row-level locking" for
some things, but no row locking when needed? Or was it that row locking was
there, just no shared row locking?
Wes
ng our application, it takes a full weekend to do
the dump/load - the database is just under a billion rows.
Wes
nce on starting one. The database is just a hair under one billion
rows, and could take the entire weekend or more to rebuild.
Wes
the most offending indexes to separate drives probably isn't an option.
Wes
with 8.x. See:
<http://archives.postgresql.org/pgsql-committers/2003-11/msg00278.php>
What is the correct way to work around this restriction? Set
LD_LIBRARY_PATH?
This is on a variety of unix platforms.
Wes
y can't find the libraries if
they are not in the default library search path of the user.
Wes
this restriction?
>
> Use --with-includes and --with-libraries as needed.
That doesn't get the library paths into the binary. If the libraries are
not in the default system search path, the user is screwed. Is there a way
to solve this? I think the previous --with-openssl=path set the
xists(select 1 from
B where BField=addresses.address_key limit 1) ) and ( not exists(select 1
from C where CField=addresses.address_key limit 1) )
Of course, all fields above are indexed.
There are foreign key references in B and C to A. Is there some way to
safely leverage that?
SubPlan
  ->  Limit  (cost=0.00..0.81 rows=1 width=0)
        ->  Index Scan using messages_i_orig_mdate on messages
            (cost=0.00..35148.46 rows=43301 width=0)
              Index Cond: (originator = $0)
Which seems like it should be much more efficient.
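For comparison, a hedged sketch of the same "referenced by neither B nor C"
test written as anti-joins, reusing the placeholder names from the query above;
planners of this era often handle this form better than correlated NOT EXISTS
subqueries:

  SELECT a.address_key
  FROM addresses a
       LEFT JOIN B ON B.BField = a.address_key
       LEFT JOIN C ON C.CField = a.address_key
  WHERE B.BField IS NULL
    AND C.CField IS NULL;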
Wes
I'm pondering dumping the keys for A to a file, and B and C to another file,
sorting with uniqueness on the B/C file, then programmatically determining
which keys have been freed. I'm pretty sure this will be much faster, but I
had hoped to avoid an external process
calls instead of ECPG)?
Wes
s. However,
pgsqlNoticeReceiver is never called.
What am I missing? I need to abort the transaction on any errors in the
COPY.
Wes
ts for the next result from a prior PQsendQuery, PQsendQueryParams,
PQsendPrepare, or PQsendQueryPrepared call, and returns it."
(doesn't mention COPY).
I have it working now. Thanks.
Wes
t vendor throws such a clause into all their licensing agreements.
Sounds suspiciously like Microsoft...
I'm not a lawyer, but I believe things such as what vendors a government
entity is using is required by law to be public information.
Wes
in the archives with krb5 and
threads. Am I missing something here?
Wes
d .h files are for your OS.
>
> This configure attempt could be failing, because it can't locate the
> correct thread headers and/or libraries
Why would I not want to specify enable-thread-safety? I want to be able to
write threaded programs.
--enable-thread-safety works fine un
8 byte sequence detected
I'd prefer not to add to the overhead by pre-validating every string, since
PostgreSQL validates it already. Is there a way to get the server to
replace invalid characters with something like blank instead of generating a
fatal error?
Wes
g from UTF-8.
I thought about that, but I do want to allow UTF-8 to be stored. I just
want it to replace illegal characters with some valid character so that
invalid records will load. Even if I preprocess the data, I can do no more
than that.
Wes
> I don't believe it will discard anything on import if the database is
> SQL_ASCII encoded.
That might be worth a shot. I don't really understand the ramifications,
though, especially given Tom's warning. I guess as long as I don't care
abo
have Postgres do it again. But maybe
there's no choice.
Wes
A posting from December 2005 against 8.1.0 (bug #2114) seems to indicate
this should be fixed.
Am I missing something, or is there still a problem? Do I still need to
change '\.' to '\\.'?
Should I be using PQescapeString on strings being passed to COPY?
Wes
ne table per 6 hours?)
Thanks
Wes
ystem catalog monkey business - just plain old SQL.
Wes
to hang.
3. Create header_dummy using psql
4. Drop header_dummy
5. Run application - works.
I can repeat this with the 'detail' table. It is 100% reproducible.
What's going on?
Wes
oblem - or to be 100% safe should I not do the 'drop table'? I was
afraid I might have to move the 'create table' outside of the transactions.
Many thanks
Wes
<http://archives.postgresql.org/pgsql-hackers/2002-07/msg00969.php>
How can I safely build indexes in parallel?
At this point, I'm only trying to build two at a time. I will be building
indexes for tables with any where from a few rows to 100 million rows on a
daily basis - I need to maximize p
en_US.UTF8. Leading
substring searches result in a sequential search instead of an indexed
search. Do I still have to initdb to locale=C, or is there a better option
now?
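A sketch of the usual alternative to re-running initdb with locale C: keep the
en_US.UTF8 cluster and add a second index using the pattern operator class, so
anchored searches (LIKE 'abc%') can use an index while ordinary comparisons and
sorts keep using the plain one. Table and column names here are hypothetical:

  CREATE INDEX addresses_addr_idx      ON addresses (address);
  CREATE INDEX addresses_addr_like_idx ON addresses (address text_pattern_ops);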
Wes
ate two
indexes for each column - one with and one without the operator class. That
is also what was indicated in the original thread. Defining multiple
indexes on a given column isn't feasible, due to the database size (100
million rows per day).
Wes
story'.
Neither that file, nor any *.history file, is anywhere to be found. I cannot
find this documented anywhere. What is this file? Does it have to do
with timelines?
Wes
ke to eliminate that second 40 hours so I can get
it down to a normal weekend.
Wes
d RAID 1 on a third channel). There are two 2.4 GHz Xeon
processors.
Wes
, and
those added as a primary key constraint. Currently, pg_dump outputs the FK
constraints after the indexes are built, as the last steps. If I try to add
the FK constraints after loading the database definitions, but without any
indexes, I'm not sure what would
irtual.
Adding foreign key constraints, RSS is about 1.1 GB but virtual is slightly
over 2 GB.
Wes
a
database that has referential integrity enabled. Even if there were an
error that crept in to the old database, I don't care - just add the
constraint so I can get back online. Right now I'm looking at around 80-90
hours total to do a reload. What's it going to be a year or
backup. I'll be
switching to online backups once we get upgraded, but if a reload fails
there, I'll again have to fall back to the weekly source backup.
Wes
kind of hardware
problem we encountered last year is rare, but it does happen. I've seen
similar things many times over the years. RAID doesn't help you when a
controller mirrors garbage.
> You should look into slony. You can replicate from one version of pgsql
> to another, a
n out.
That looked like a good shot.
Wes
index (be it a primary key, or not), which
> prevents
> my plan from working :-(
That was a great idea - too bad it didn't pan out.
I don't suppose there's any (reasonable) way to directly insert into the
system tables to create the constraint? I could knock almost 2 days off of
much they'd charge for it...
Or if I get ambitious, dig into the code myself if I can figure out where to
start...
Wes
; checking in restore has been improved...
Yes, that is what I did. I'm in the process of testing an upgrade from
7.3.x to 8.1.4 - export from 7.3.x and import into 8.1.4. Unfortunately,
I'm sitting at about 90 hours when I've got about an 80 hour window on a
long weekend...
Wes
ecord that job 1 may
already have created in its transaction).
3. Not conflict with foreign key reference locks
SHARE does not appear to be appropriate - it would fail #2. Maybe "SHARE
UPDATE EXCLUSIVE"?
Wes
e row locks before were single owner and were such that a
> second lock request for the same row would wait for the first to be
> released. Now effectively you have two levels of locks at the row level,
> the weaker of which conflicts with the stronger but not with itself. The
> thing about MVCC is that readers do not have to get either lock if they
> aren't trying to prevent modifications.
Wes
confirmation. Is there any stronger lock that would not
block SELECT foreign key references? I didn't find any documentation on
what type of lock is grabbed when a foreign key is referenced during
SELECT (or other).
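For what it's worth, my understanding (worth checking against the 8.1 docs) is
that the RI triggers take the equivalent of a shared row lock on the referenced
row, which can also be taken explicitly; the table, column, and key value below
are placeholders:

  BEGIN;
  SELECT 1 FROM parent WHERE parent_key = 42 FOR SHARE;
  -- blocks UPDATE/DELETE of that parent row until commit,
  -- but does not block other FOR SHARE lockers
  COMMIT;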
Wes
't address the overall database integrity.
Back to the original question... How can I verify the complete integrity of
a database - especially a very large one where a reload or full index
rebuild could take on the order of days?
Wes
it, or whatever
doesn't really matter to the original question. The database can become
corrupt. How can I tell that a database is fully intact at any given point
in time? If I reload from a system backup before the known corruption, how
can I be sure that the o
On 9/5/04 9:04 AM, "Jan Wieck" <[EMAIL PROTECTED]> wrote:
> 24 hours to do what? The pg_dumpall, the restore or both?
There are more than 250 million rows. If I remember right, it's ballpark 25%
data reload, 75% index/foreign constraint rebuild. Pg_dumpall is somet
expect there are only a handful of huge databases running a heavy load -
the vast majority are probably no larger than tens (maybe hundreds) of
megabytes, or do not experience a heavy update load?
Wes
> on 100Mbit).
Mine are already set to 30, but in a recent re-index, I saw warnings in the
log, so I need to bump it up some more. I'm likely going to be doing a reload
this weekend, so will include this in the run.
Wes
FOR row IN SELECT * FROM thingy
LOOP
    RETURN NEXT row; [SELECT id FROM thingy WHERE parent_id = id]
END LOOP;
RETURN;
Any help much appreciated,
Wes
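If this is on 8.4 or later, a non-procedural sketch of the same idea with WITH
RECURSIVE, using the column names from the CREATE TABLE quoted below and a
made-up starting id:

  WITH RECURSIVE sub AS (
      SELECT id, parent FROM thingy WHERE id = 1
      UNION ALL
      SELECT t.id, t.parent
      FROM thingy t
           JOIN sub s ON t.parent = s.id
  )
  SELECT * FROM sub;

This returns the row with id 1 and all of its descendants; the WHERE clause in
the non-recursive branch picks the starting row.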
On 11/9/2011 7:34 PM, David Johnston wrote:
> On Nov 9, 2011, at 20:19, Wes Cravens wrote:
>
>> I have an adjacency list kind of table
>>
>> CREATE TABLE thingy (
>>id int,
>>parent int
>> );
>>
>> I'd like to be able to write a
On 11/9/2011 7:19 PM, Wes Cravens wrote:
> I have an adjacency list kind of table
>
> CREATE TABLE thingy (
> id int,
> parent int
> );
>
> I'd like to be able to write a procedural function that returns a row or
> rows from this table with an ap
On 11/10/2011 12:05 PM, David Johnston wrote:
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Wes Cravens
> Sent: Thursday, November 10, 2011 11:54 AM
> To: pgsql-general@postgresql.org
> Sub
ize).
We're going to try to test the 2.4.29 kernel tomorrow.
Wes
What
platform? Solaris, FreeBSD, Linux, Windows
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Michael Uvhagen
Sent: Friday, October 21, 2005 6:10 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] password
Hi. I was installing po
For what it may be worth, executing the same commands into MySQL
5.0.15-nt-max (Win XP Pro) the following it received:
mysql> create table test (i1 int);
Query OK, 0 rows affected (0.41 sec)
mysql> insert into test values (123913284723498723423);
ERROR 1264 (22003): Out of range value adjusted fo
set sql_mode='MYSQL323';
Query OK, 0 rows affected (0.00 sec)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Scott Marlowe
Sent: Tuesday, October 25, 2005 3:24 PM
To: Wes Williams
Cc: 'Jan'; pgsql-general@postgresql.org
Subject: Re: [GE
Type the following at the Windows command prompt (start, run, "cmd"):
convert c: /fs:ntfs /v
It will complain about locked files and perform the convert at the next
reboot, which you should do immediately.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Jos