I have been a long-time user of MySQL. I am switching to PostgreSQL because
the features included in 5.1 (as of this moment) are nothing to
write home about. The InnoDB stuff is highly advocated, but it has its
own set of issues, and when one looks at things like backup/restore
etc., it is clearly ta
On 14/08/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
>
> > Though other threads I have learned that multiple inserts or updates
> > can be sped up with:
> >
> > [QUOTE]
> > - BEGIN TRA
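The batching pattern being quoted boils down to wrapping the statements in a single transaction, so the server syncs to disk once at COMMIT instead of once per statement. A minimal sketch (the table is invented for illustration):

```sql
BEGIN;
INSERT INTO mytable (id, val) VALUES (1, 'a');  -- mytable is a made-up example
INSERT INTO mytable (id, val) VALUES (2, 'b');
INSERT INTO mytable (id, val) VALUES (3, 'c');
COMMIT;
```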
Thank you AM. Very useful note, must appreciate the info you shared.
About COPY, I have two simple questions:
1. Is there anything like an ALTER DATABASE command? I would like to
change the character set without having to recreate the DATABASE
again!
2. Also, when I do a mysqldump I seem to be e
> You're confusing CHECK constraints and FOREIGN KEY constraints. They're
> different things ;)
>
> CHECK constraints verify that data in a certain column matches a certain
> condition. I'm not sure they can reference columns in other tables,
> unless you wrap those checks in stored procedures mayb
> If you issue a BEGIN then nothing gets committed until you issue a COMMIT. If
> anything happens in the meantime then everything you've done since the BEGIN
> disappears.
>
There are some cases where I would like to bunch queries into a
transaction purely for speed purposes, but they're not in
> You could do this with savepoints which are a kind of sub-transaction inside a
> "bigger" transaction.
>
> e.g.:
> BEGIN TRANSACTION;
>
> SAVEPOINT sp1;
> UPDATE1;
> IF (failed) rollback to savepoint sp1;
>
> SAVEPOINT sp2;
> UPDATE2;
> IF (failed) rollback to savepoint sp2;
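In runnable SQL, the quoted sketch looks roughly like this (table and values are invented):

```sql
BEGIN;

SAVEPOINT sp1;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- on failure: ROLLBACK TO SAVEPOINT sp1;

SAVEPOINT sp2;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- on failure: ROLLBACK TO SAVEPOINT sp2;

COMMIT;
```

Rolling back to a savepoint undoes only the work done since that savepoint; everything before it survives and is committed when COMMIT runs.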
Thanks Thomas, thi
I have a table with ten columns. My queries basically use one column as
the first WHERE condition, so an index on that column is certain. But
the columns after that one vary depending on the end-user's choice (this
is a reporting application), and so does the sort order.
In MySQL world, I had sort_buffe
On 15/08/07, Chris Browne <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] ("Phoenix Kiula") writes:
> > I have a table with ten columns. My queries basically use one column as
> > the first WHERE condition, so an index on that column is certain. But
> > the columns
On 15/08/07, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Imagine a web application that processes text search queries from
> clients. If one types a text search query in a browser it then sends
> proper UTF-8 characters and application after all needed processing
> (escaping, checks, etc)
> you do a lot of queries like that and the id,s_id restriction isn't very
> selective you might look into tsearch2 which can index that type of query.
>
Thanks. Does tsearch2 come installed with 8.2.3? I am not techie
enough to do all the compiling stuff so I'm hoping it does! How can I
check?
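In 8.2, tsearch2 is a contrib module, so it may or may not be present in a given installation. One way to check, assuming the module would have been loaded into the current database, is to look for its functions:

```sql
-- Returns rows if tsearch2's functions are installed; zero rows otherwise
SELECT proname FROM pg_proc WHERE proname IN ('to_tsvector', 'to_tsquery');
```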
I'm grappling with a lot of reporting code for our app that relies on
queries such as:
SELECT COUNT(*) FROM TABLE WHERE (conditions)...
And I still do not find, from the discussions on this thread, any
truly viable solution for this. The one suggestion is to have a
separate counts table,
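For the record, a trigger-maintained counts table might be sketched like this (every name here is invented, and this only helps for unfiltered totals; a COUNT(*) with a WHERE clause still needs its own strategy):

```sql
CREATE TABLE row_counts (tablename text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts SELECT 'mytable', count(*) FROM mytable;

CREATE FUNCTION bump_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE tablename = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE tablename = TG_TABLE_NAME;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count
    AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE bump_count();
```

A `SELECT n FROM row_counts WHERE tablename = 'mytable'` then replaces the full-table COUNT(*), at the cost of extra contention on the counts row under heavy write load.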
On 15/08/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
>
> > I'm grappling with a lot of reporting code for our app that relies on
> > queries such as:
> >
> > SELECT COUNT(*) FROM TABL
In some examples posted to this forum, it seems to me that when people
execute queries in the psql window, they also see "90 ms taken"
(milliseconds), which denotes the time taken to execute the query.
Where can I set this option because I'm not seeing it in my psql
window on both Win XP and Linux.
On 15/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On 8/15/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> > On 15/08/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> > > "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > >
> > >
> Yes, optimization. :) You don't need an exact count to tell someone
> that there's more data and they can go to it.
In general, I agree. But my example of Amazon was only to illustrate
the point about two queries and why they may be needed. I seem to see
many more pages than you do, but in any
> I think you're looking for the \timing command?
> http://www.postgresql.org/docs/8.2/static/app-psql.html
> (under meta-commands, about halfway down the page)
Thanks everyone. "\timing" it is!
Happy camper.
---(end of broadcast)---
TIP 5: don't
On 15/08/07, Ivan Zolotukhin <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Actually I tried something like $str = @iconv("UTF-8", "UTF-8//IGNORE",
> $str); when preparing string for SQL query and it worked. There's
> probably a better way in PHP to achieve this: simply change default
> values in php.ini for t
Couple of questions with porting:
1. I have been playing around with my databases locally on Win XP so
as not to hurt our website traffic. Now I would like to move the
database to a Linux CentOS server. Can I use pg_dump on Windows and
pg_restore it on Linux? If so, any tips on what I should keep
On 16/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On 8/15/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> > Couple of questions with porting:
> >
> > 1. I have been playing around with my databases locally on Win XP so
> > as not to hurt our websit
> What, exactly, does that mean?
>
> That PostgreSQL should take things in invalid utf-8 format and just store
> them?
> Or that PostgreSQL should autoconvert from invalid utf-8 to valid
> utf-8, guessing the proper codes?
>
> Seriously, what do you want pgsql to do with these invalid inputs?
PG
> At least on a *nix system, collation is based on the value of the LC_ALL
> environment variable at dbinit time. There's nothing you can do about
> it in a live database. IMO that's a little awkward, and is what finally
> made me change the global from ISO-8859-1 to UTF-8 on my three Gentoo
> Linu
On 16/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> On 16/08/07, Ben <[EMAIL PROTECTED]> wrote:
> > On Thu, 16 Aug 2007, Phoenix Kiula wrote:
> >
> > > I am not advocating what others should do. But I know what I need my
> > > DB to do. I
On 16/08/07, Ben <[EMAIL PROTECTED]> wrote:
> On Thu, 16 Aug 2007, Phoenix Kiula wrote:
>
> > I am not advocating what others should do. But I know what I need my
> > DB to do. If I want it to store data that does not match puritanical
> > standards of textual stora
<[EMAIL PROTECTED]>
>
> On Wednesday 15. August 2007, Phoenix Kiula wrote:
> >
> >This is great info, thanks. Could you let me know how I could change
> >the global values of "LC_ALL"? I am on Linux too, just CentOS, but I
> >suppose it should be t
What is the fastest way to import the values of *only one* column into
an already existing table? Say the table looks like this:
id (primary key)
description
created_on
I want to import only a new column so the table looks like this:
id (primary key)
title
description
created_on
S
On 16/08/07, Rodrigo De León <[EMAIL PROTECTED]> wrote:
> On Aug 15, 11:46 pm, [EMAIL PROTECTED] ("Phoenix Kiula") wrote:
> > Appreciate any tips, because it would
> > be nasty to have to do this with millions of UPDATE statements!
>
> - Create an interim tabl
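The interim-table approach being quoted might look like this sketch (file path, table and column names all invented):

```sql
CREATE TABLE titles_interim (id bigint, title text);
COPY titles_interim FROM '/tmp/titles.csv' WITH CSV;  -- bulk-load the new values

ALTER TABLE mytable ADD COLUMN title text;

-- One set-based UPDATE instead of millions of single-row statements
UPDATE mytable m
SET    title = i.title
FROM   titles_interim i
WHERE  m.id = i.id;

DROP TABLE titles_interim;
```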
On 16/08/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
>
> --- Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>
> > On 16/08/07, Rodrigo De León <[EMAIL PROTECTED]> wrote:
> > > On Aug 15, 11:46 pm, [EMAIL PROTECTED] ("Phoenix Kiula") wrote:
>
Probably an optimistic question - can a user with access to two
databases create a function in one to access tables in the other? Or
triggers?
On 16/08/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
> Gregory Stark wrote:
>
> >"Rainer Bauer" <[EMAIL PROTECTED]> writes:
> >
> >> Anyway, what Phoenix is trying to say is that 2 queries are required: One
> >> to
> >> get the total count and one to get the tuples for the current page. I
> >> re
I am trying to force a column to have lowercase because Postgresql is
case-sensitive in queries. For the time being I've made an expression
index on lower(KEY). But I would like to have just lower case data and
then drop this expression index.
However, I see some inconsistent behavior from Postgres
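A possible one-time normalisation, plus a constraint to keep future data lower case, could look like this (names invented; note that if two existing rows differ only in case, the UPDATE will trip any unique constraint on the column):

```sql
UPDATE mytable SET t_code = lower(t_code) WHERE t_code <> lower(t_code);

ALTER TABLE mytable
    ADD CONSTRAINT t_code_is_lower CHECK (t_code = lower(t_code));
```

After that, a plain index on t_code can replace the lower(t_code) expression index.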
> I suspect you're not showing us the exact queries you're running. For
> one, you can't have a table named TABLE (without quotes) in PostgreSQL.
Of course. The data is a tad private, hence the simple table and
column names represented in uppercase.
> Perhaps something else you changed when cha
On 17/08/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > However, I see some inconsistent behavior from Postgresql. When I issue
> > an UPDATE command , it shows me a duplicate violation (which could be
> > correc
I'm loving the fact that while I am doing some one-time updates to the
DB, users can still SELECT away to glory. This is a major boon in
comparison to my experience with another major open-source database.
However, I am a little frustrated by the amount of time PGSQL takes to
complete tasks. Just t
On 17/08/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
> > However, I am a little frustrated by the amount of time PGSQL takes to
> > complete tasks. Just to accommodate these tasks, my conf file
On 17/08/07, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
> > I have dropped all indexes/indicises on my table, except for the
> > primary key. Still, when I run the query:
> > UPDATE mytable SE
On 17/08/07, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 17, 2007 at 10:22:55PM +0800, Phoenix Kiula wrote:
> > Wow, smartest advice of the day! Yes, a lot of our data in that column
> > has dots and numbers (800,000 compared to 6 million), so I wanted t
I'm noticing that some of my data has been imported as junk text:
For instance:
klciã«"
What would be the SQL to find data of this nature? My column can only
have alphanumeric data, and the only symbols allowed are "-" and "_",
so I tried this regexp query:
select id, t_code
from
On 18/08/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> [Please reply to the list so that others may benefit from and
> participate in the discussion.]
>
> >> If you're including - in a range as a character, doesn't it have to
> >> go first?
> >> Try this:
> >>
> >> WHERE t_code ~ $re$[^-A-Za-
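Spelled out in full, the hyphen-first trick from the quote might look like this (table name invented; the allowed set here is letters, digits, '-' and '_'):

```sql
-- '-' is listed first inside the brackets so it is literal, not a range marker
SELECT id, t_code
FROM   mytable
WHERE  t_code ~ '[^-A-Za-z0-9_]';  -- rows containing any disallowed character
```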
I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons, is quite a convoluted one).
So I read up on .pgpass. Where should this file be located?
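For reference: libpq looks for the file at ~/.pgpass on Unix (and %APPDATA%\postgresql\pgpass.conf on Windows), and ignores it unless its permissions are 0600. The format is one credential per line, with * as a wildcard:

```
# hostname:port:database:username:password
localhost:5432:mydb:myuser:s3cret
*:*:*:backup_user:anotherpass
```

A `chmod 600 ~/.pgpass` makes libpq accept the file.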
On 18/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > I am writing some simple batch scripts to login to the DB and do a
> > pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
> > to be asked for a password every
On 18/08/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 08/18/07 06:02, Phoenix Kiula wrote:
> [snip]
> >
> > Thanks for this. I am logged in as root. Put it there and it works. I
>
> Well, that's yo
[Sorry for the length of this post. It stretched as I provided as much
info as possible..]
So the rubber meets the road. We've put postgresql in a production
environment with some heavy simultaneous usage. It works well in
general, but often PG doesn't respond. How should I test what is going
wron
On 19/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> [Sorry for the length of this post. It stretched as I provided as much
> info as possible..]
>
> So the rubber meets the road. We've put postgresql in a production
> environment with some heavy simultaneous usage.
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
.snipped
> > I can merrily increase the "max_fsm_pages" directive, but the manual
> > also caveats that with "this can use more system V memory than
> > available on
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> There are ways to do this, but if you can't just use timeouts to expire
> from the cache, things can become pretty complicated pretty fast. But
> perhaps you can isolate some kinds of queries
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> > should we do one (VACUUM FULL) now given that we've overrun our
> > max_fsm_pages?
>
> Yes, but not unti
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
...snip
> There should be a line like this at the end of a "VACUUM VERBOSE" command:
> INFO: free space map contains 33 pages in 74 relations
> DETAIL: A total of 1184 page slots ar
Btw, related to one my earlier questions: where can I see how many
connections are being made to the DB, what was the maximum number
attempted at any given time, and so on? The connections related info.
Thanks!
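The built-in pg_stat_activity view covers the current state; PostgreSQL does not keep a historical high-water mark of connections, so that part has to be logged or polled externally. A sketch:

```sql
-- One row per open backend; group to see connections per database
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;

-- The configured ceiling
SHOW max_connections;
```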
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> No need to match. If you have 150 relations, 200 is a reasonable value.
> But once you get the proper output from the vacuum command, it tells you
> that as well (74 in my example above)
Found
On 19/08/07, Gavin M. Roy <[EMAIL PROTECTED]> wrote:
> We use PHP, but think of it as a universal PgSQL proxy.. If you connect to
> a connection you setup in pgBouncer via psql, it looks like a normal
> database. Nothing is different in your code but where you connect (for us,
> it's the same as
Hi,
On 23/08/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> And here are results of built-in Postgres test script:
>
Can you tell me how I can execute this script on my system? Where is
this script?
Thanks!
On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
> for quite some time now. We had it hitting a pg7.2 db back in the
> day, when hip kids rode around in rag-top roadsters and wore T-shirts
> with cigarettes rolled in thei
Hi,
We have big blobs of text (average 10,000 characters) in a database,
from which we would like to discover the most often repeated words or
phrases. Can tsearch be used for this kind of pattern search? I
suppose it's Text Mining 101 sort of stuff, nothing complex.
TIA!
On 25/08/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
> On Fri, 24 Aug 2007, Phoenix Kiula wrote:
>
> > Hi,
> >
> > We have big blobs of text (average 10,000 characters) in a database,
> > from which we would like to discover the most often repeated words or
We're moving from MySQL to PG, a move I am rather enjoying, but we're
currently running both databases. As we web-enable our financial
services in fifteen countries, I would like to recommend the team that
we move entirely to PG.
In doing research on big installations of the two databases, I read
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file "pg_subtrans/01F8" at offset 221184:
No space left on device.
What is this about and how do I solve this? A "df -h" on my system shows this:
FilesystemTyp
On 31/08/2007, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > I am getting this message when I start the DB:
> >
> >
> > psql: FATAL: could not access status of transaction 0
> > DETAIL: Could not write to file "pg_subtrans/
On 31/08/2007, Josh Tolley <[EMAIL PROTECTED]> wrote:
> On 8/31/07, Zoltan Boszormenyi <[EMAIL PROTECTED]> wrote:
> > Phoenix Kiula wrote:
> In addition to what others have already said, when things calm down
> you should consider implementing some sort of monitoring sys
On 01/09/07, Ashish Karalkar <[EMAIL PROTECTED]> wrote:
>
>
> Hello All,
> I want to export data from PostgreSQL tables to MS Excel.
> Is there any way?
Sure: write SQL in a program (PHP, Perl, JSP, ASP) to dump the tables
in HTML table-row format, then import that HTML page into Excel.
Hello,
I have a simple query as follows. It joins two very straightforward tables.
SELECT
trades.id,
trades.url,
trades.alias,
tradecount.t_count,
tradecount.u_count
FROM trades
LEFT JOIN tradecount ON trades.id = tradecount.id
WHERE trades.user_id = 'jondoe' and trades.status = 'Y'
OR
On 01/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
>
> On Sep 1, 2007, at 11:46, Phoenix Kiula wrote:
.
..snip
> > However, there's a nested loop in there as the EXPLAIN ANALYZE shows
> > below. What is causing this nested loop?
>
> It looks like it
A couple of questions about the "most_common_vals" stuff in pg_stats
for a high traffic table:
1. Can I tell the stats collector to collect only values of a column
where a certain regex is matched? It is currently collecting the 500
values where most of them are values that I don't want, so it's
p
On 03/09/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> most_common_vals will (and should) be empty if there aren't actually any
> common values, but aren't you getting a histogram? Exactly what
> performance do
On 03/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> As I understand it it's a sample of how the data is distributed.
> Probably it's based on statistical mathematics that specifies a minimum
> size for a representive sample of a given data
On 04/09/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> --- Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> > LOG: duration: 93473.282 ms statement: select t_info, dstats, id
> > from trades where t_alias = '17huv' and status = 'Y'
> >
On 04/09/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > On 04/09/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> >> Thats odd, I wonder why the EXPLAIN ANALYZE time id some much less that
> >>
I have a varchar ID field which captures a user account. If there is
no user id, then we just store the IP address of the user. Realizing
that we would like to index only those tuples where the user ID is not
an IP, I am trying to create a partial index as follows:
CREATE INDEX idx_trades_tid_part
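A partial index along those lines might be sketched as follows (the regex is a deliberately crude "digits and dots only" test, and all names are assumptions). One caveat from elsewhere in these threads: the planner only uses a partial index when the query's WHERE clause restates the index predicate.

```sql
CREATE INDEX idx_trades_tid_nonip ON trades (t_id)
WHERE t_id !~ '^[0-9.]+$';  -- skip values made up only of digits and dots

-- Queries must repeat the predicate for the index to be considered:
-- SELECT ... FROM trades WHERE t_id = 'jondoe' AND t_id !~ '^[0-9.]+$';
```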
Hello
We're trying to find the optimal config for a heavy-duty
production server, and the following two are falling in the same price
range from our supplier:
Option 1:
2 x 300GB SCSI (10k rpm) with SAS and RAID 1
Option 2:
4 x 300GB SATA2 (7200 rpm, server grade) with RAID 10
I am not
On 11/09/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> It depends what you want to do with your database.
>
> Do you have many reads (select) or a lot of writes (update,insert) ?
This one will be a hugely INSERT thing, very low on UPDATEs. The
INSERTS will have many TEXT fields as they are
On 11/09/2007, Tom Lane <[EMAIL PROTECTED]> wrote:
> Richard Huxton <[EMAIL PROTECTED]> writes:
> > The planner isn't smart enough to figure out which queries can use this
> > index by examining them, it just looks for (NOT paid) in the WHERE
> > clause and if it doesn't find it, ignores the index.
Thanks Greg.
> You're not going to get a particularly useful answer here without giving
> some specifics about the two disk controllers you're comparing, how much
> cache they have, and whether they include a battery backup.
>
Scenario 1, SATAII:
- Server: Asus RS120-E4/PA4 Dedicated Server
-
On 12/09/2007, Ron Johnson <[EMAIL PROTECTED]> wrote:
> How (on average) large are the records you need to insert, and how
> evenly spread across the 24 hour day do the inserts occur?
There will be around 15,000 inserts in a day. Each insert will have
several TEXT columns, so it is difficult to p
On 12/09/2007, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Wed, 12 Sep 2007, Phoenix Kiula wrote:
>
> > Scenario 1, SATAII:
> > - Server: Asus RS120-E4/PA4 Dedicated Server
> > - CPU: Single -- Intel Quad Core Xeon Processor x3210 Processor 2.13Ghz
> > - RAM: 4G
On 11/09/2007, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>
> The suggestion in this thread that a regex index will come into play
> only when the WHERE condition specifically mentions it was indeed the
> key for me.
Ok, I've hit a snag about this index. I think it'
On 13/09/2007, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Mike Charnoky wrote:
>
> > Alvaro: The cluster suggestion probably won't help in my case since data
> > in the table should already be naturally ordered by date.
>
> It's not helpful only for reordering, but also for getting rid of dead
> t
On 13/09/2007, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > Apart from creating a new table, indexing it, then renaming it to
> > original table -- is there an alternative to CLUSTER that doesn't
> > impose
We have a system that came with pg 8.1.9. When I try to uninstall
those RPMs, it works for all the rpms except for libs:
> rpm -ev postgresql-libs-8.1.9-1.el5
error: Failed dependencies:
libpq.so.4 is needed by (installed) apr-util-1.2.7-6.i386
I am not sure what this is about and h
On 14/09/2007, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> We have a system that came with pg 8.1.9. When I try to uninstall
> those RPMs, it works for all the rpms except for libs:
>
>
> > rpm -ev postgresql-libs-8.1.9-1.el5
> error: Failed dependencies:
>
Well first question: how can I check if autovacuum is working?
On 04/09/2007, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > Basically, what I am missing is some info on actually tweaking the
> > postgresql.conf to suit
The manual is vague. There are several threads about this, in language
that is ambiguous to me.
So a YES/NO question:
Q: To get rid of index bloat, is a VACUUM ANALYZE enough? Or must I
reindex/cluster indexes?
Thanks.
> Unfortunately, folks like Phoenix are looking for yes/no answers, and
> with many of these questions, the _correct_ answer is "it depends on
> your workload"
I wanted merely to simplify the advice that gets dispensed on this
list, often conflicting to novice ears like mine. So I appreciate your
On 18/09/2007, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi,
>
> > Can I add SHM with merely by managing the entry in sysctl.conf? My
> > current values:
> >
> > kernel.shmmax = 536870912
> > kernel.shmall = 536870912
> >
> > My "shared_buffers" in postgresql.conf is "2". From the website
>
Thanks for a very informative post! One question:
> I'm not sure how to find the current value, but a smaller fill factor
> on busy tables should lead to less fragmentation, thus more efficient
> indexes over time. Keep in mind that a smaller fill factor will also
> lead to larger indexes initia
On 19/09/2007, Gregory Williamson <[EMAIL PROTECTED]> wrote:
...
> Can't speak directly to PostgreSQL but in Informix the fill factor is
> useful for tweaking indexes. A very high fill factor is useful for tables
> that are static -- any inserts or changes to the index trigger a *lot* of
> movin
On 19/09/2007, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> --- Phoenix Kiula <[EMAIL PROTECTED]> wrote:
>
> > 2. Is this fill factor enough to have on the table, or should I also
> > do a fill factor for specific indexes? Or both the table and the
> > inde
On 20/09/2007, Chester <[EMAIL PROTECTED]> wrote:
> Hi
>
> I have a question regarding foreign keys; I just cannot get it to create
> them for me. I must be doing something wrong, but I have no idea what
> that might be :)
>
> I have a table "clients"
>
> clientID (primary)
> ticode
> Firstname
>
On 24/09/2007, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> my FSM is way bigger than I ever use (vacuum never reports shortage)
> and I still get bloat that needs to be purged out with a reindex on
> occasion.
Vivek,
I feel your pain. But I seem to have (mostly) solved my problem in three ways:
On 25/09/2007, Vivek Khera <[EMAIL PROTECTED]> wrote:
> Recommending I run vacuum intermixed with the data purge is a non-
> starter; the vacuum on these tables takes a couple of hours. I'd
> never finish purging my data with that kind of delay.
...
> I will investigate the fill-factor. That se
On 25/09/2007, Anoo Sivadasan Pillai <[EMAIL PROTECTED]> wrote:
>
>
>
>
> Hi,
>
> On further testing I found the same behaviour in Unique keys too, The
> following batch can reproduce the behaviour.
>
> CREATE TABLE master ( m1 INT primary key , m2 int unique ) ;
>
> INSERT INTO master VALUES (
Hi,
How can I remove characters that form a part of regular expressions? I
would like to remove all instances of the following characters:
[
]
\
+
Given that these all mean something in regexp, I am trying to prefix
them with a backslash, but it doesn't work. I tried the following:
update T
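One way to sidestep regex escaping entirely is translate(), which does literal per-character replacement: any character in the second argument with no counterpart in the third is simply deleted (table and column names invented):

```sql
-- E'[]\\+' is the four characters [ ] \ +  (E'\\' is a single backslash)
UPDATE mytable
SET    t_code = translate(t_code, E'[]\\+', '');
```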
On 25/09/2007, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > How can I remove characters that form a part of regular expressions?
>
> Why do you want to do that?
Because these values were inserted into the DB due to a faulty
application. So cleansing was called for.
I just ended up doing it with r
After I clustered the primary key index of a table with about 300,000
rows, my vacuum/analyze on that table is taking too long ... over 15
mins when originally it was 15 seconds! Nothing else has been changed
with this table. Is clustering not good for vacuums?
A vacuum analyze that used to take about 3 minutes on a table of about
4 million rows is now taking up to 25 minutes. I changed the
statistics on two index columns to 100 recently, to improve planner
estimates. Could this have something to do with the lack of speed?
On 01/10/2007, Martin Marques <[EMAIL PROTECTED]> wrote:
>
> 1) Is MD5's weakness true?
Yes, but not really an issue when it is used for password hashing. You
are very unlikely to repeat a password, and in any case you will have
the user ID to make it unique.
> 2) Is there any sha1() functions in Postgr
The Amazon Dynamo framework is going to replace RDBMS?
http://www.readwriteweb.com/archives/amazon_dynamo.php
Hello,
I have a PostgreSQL installation from a few years ago: 9.0.11.
During the vacuum it basically slows to a crawl. While googling
for this (it stops at "pg_classes" forever) I see Tom Lane suggested
upgrading.
So now I must. In doing so, can I follow these instructions?
https://www.digitaloce
Thank you John.
> you can upgrade to 9.0.18 painlessly. 9.1 or .2 or .3, not quite so
> painless.
What's the best way to get to 9.0.18, as a start? Is there a simple
single command I can use? I'm on CentOS 6, 64bit.
> have you tried a vacuum full of the whole cluster, with your applicatio
Thanks...comments below.
> assuming you installed 9.0 from the yum.postgresql.com respositories, then,
> `yum update postgresql90-server` and restart the postgresql-9.0 service
> should do nicely.
This worked. Took me to 9.0.17 for some reason. I'm OK with this.
But the "vacuum full" was a
Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
One of my large tables (101 GB on disk, about 1.1 billion rows) used
to take too long to vacuum. Not sure if it's an index corruption
issue. But I tried VACUUM FULL ANALYZE as recommended in another
thread yesterday, which took 5 hour
On Sun, Aug 3, 2014 at 9:56 AM, John R Pierce wrote:
> On 8/2/2014 6:20 PM, Phoenix Kiula wrote:
>>
>> PS: