Dear Scott,
When I pg_dump -t bigtablename on the Slony slave, the dump file is
212G in size.
I am unable to perform the same test on the master until I get a
maintenance window, which may not be for a few weeks, as it does
impact our production system when we dump from the master (the web
Hi Aleksey,
I once had the same problem. In my case it was because most of my tables
use the text datatype.
When I changed the field type to character varying(1000), the database size
was reduced significantly.
Unfortunately, I haven't investigated further, but it looks like it comes down
to how Postgres stores data.
Regards,
On 03/12/12 12:06 AM, Nur Hidayat wrote:
I once had the same problem. In my case it was because most of my tables
use the text datatype.
When I changed the field type to character varying(1000), the database size
was reduced significantly.
Unfortunately, I haven't investigated further, but it looks like how
Yes, I am aware of that, but that's the fact I'm facing.
Right now I'm happy enough that my system runs well without eating up my drive :)
I'll investigate more later when time is available :)
Cheers,
Nur Hidayat
.::.
Sent from my BlackBerry®
powered by The ESQ Way 165
-Original Message-
FYI, after I changed the text field into character varying, I vacuumed the whole
database, resulting in a much smaller database size.
Cheers,
NH
-Original Message-
From: Nur Hidayat hidayat...@gmail.com
Date: Mon, 12 Mar 2012 08:18:09
Selena Deckelmann wrote:
On Thursday, March 8, 2012 at 11:40 AM, Stefan Keller wrote:
I have a student who is interested in participating in the Google
Summer of Code (GSoC) 2012.
Now I have the burden of looking for a cool project... Any ideas?
Also those who are on this thread, we are
Tom,
We made most of our text and varchar columns citext data types so that we
could do case-insensitive searches. Is this going to negate most of the
index searches? It appeared to our DBA that it would be easier to use the
citext data type than to have to use ILIKE instead.
Michael Gould
Intermodal
On Sat, 2012-03-10 at 14:08 -0800, PgSQL wrote:
If the OP can identify and download the relevant .rpm file
Have you checked the CentOS bugzilla to see if the yum failure has been
reported? You should raise a bug report if it isn't there: if nobody
reports a bug, it's unlikely to get fixed.
I
Thanks Martin.
Sure! I downloaded these packages into the same folder, and I used:
rpm -Uvh *.rpm
to install all of them.
P.S.:
My English is poor; my language is Spanish.
Thanks
2012/3/12, Martin Gregorie-2 [via PostgreSQL]
ml-node+s1045698n5557665...@n5.nabble.com:
On Sat, 2012-03-10 at
On 12 March 2012 09:20, Nur Hidayat hidayat...@gmail.com wrote:
FYI, after I changed the text field into character varying, I vacuumed the whole
database, resulting in a much smaller database size.
What I think happened in your case is that, because of the
data-type change, every row in the table
Where can I check in which version the default setting for fsync was
changed?
thanks
--
Ivan Sergio Borgonovo
http://www.webthatworks.it
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
Hi,
While writing a client frontend, I started with MySQL and migrated to Postgres. Now I've
found out that there is a difference in the way joined queries are returned:
The query
SELECT
Alexander Reichstadt l...@mac.com writes:
Is there a way or some setting on postgres server to tell postgres to use the
fieldnames exactly as provided in the select?
You didn't say exactly which API you're using, but in libpq you could
look at the PQftable and PQftablecol column property
PGSQLKit for Cocoa.
Am 12.03.2012 um 18:09 schrieb Tom Lane:
Alexander Reichstadt l...@mac.com writes:
Is there a way or some setting on postgres server to tell postgres to use
the fieldnames exactly as provided in the select?
You didn't say exactly which API you're using, but in libpq
Hello All,
I've looked through the docs, but I'm unable to find complete answers to my
questions, so thanks in advance if you can lend any expertise.
Here's the situation I'm in (always a good opener, right? :) ):
We've got a postgres database with *a lot* of data in one table. On the
order of
Hi,
the following statement worked on MySQL but gives me an error on Postgres:
column addresses.address1 must appear in the GROUP BY clause or be used in an
aggregate function
I guess I am doing something wrong. I read the answers on the web, but none of
them seem to meet my needs:
SELECT
We made most of our text and varchar columns citext data types so that we
could do case-insensitive searches. Is this going to negate most of the
index searches? It appeared to our DBA that it would be easier to use the
citext data type than to have to use ILIKE instead.
In the same vein...
Does
However, given the size of this table, I have no idea how long something
like this might take. In general I've had a tough time getting feedback
from postgres on the progress of a query, how long something might take,
etc.
You can always do this, which would result in minimal hassle.
Tim,
It is my understanding that, since the citext extension is available,
it gives you what you're asking for, and at least at this point it isn't
going to be part of the core.
Michael Gould
Intermodal Software Solutions, LLC
904-226-0978
Original Message
Subject: Re:
Hi,
You can use one of the window functions:
http://www.postgresql.org/docs/9.1/static/tutorial-window.html
http://www.postgresql.org/docs/9.1/static/functions-window.html
this could be rank() in a subquery or first_value(value any), but there could
be performance issues
another solution could be
So the MySQL way of handling GROUP BY seems to be non-standard.
What works for Postgres is the DISTINCT ON (fieldname) approach.
Thanks
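A sketch of the DISTINCT ON approach, with table and column names assumed since the original query isn't shown in full:

```sql
-- DISTINCT ON keeps the first row of each group as ordered by the ORDER BY,
-- which must start with the DISTINCT ON expression.
SELECT DISTINCT ON (companies.id)
       companies.id, companies.name, addresses.address1
FROM companies
JOIN addresses ON addresses.company_id = companies.id
ORDER BY companies.id, addresses.id;
```

Note that DISTINCT ON is a PostgreSQL extension, not standard SQL.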
Am 12.03.2012 um 20:35 schrieb Alexander Reichstadt:
Hi,
the following statement worked on mysql but gives me an error on postgres:
column addresses.address1
Instead of the joins you can use a subquery to get the first address.
Or you can do the joins without the group by and use row_number()
over(partition by companies.id) on the select list to label each company
address with a number starting at 1. You can just keep rows that have
row_number =
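A sketch of that row_number() approach, with table and column names assumed for illustration:

```sql
-- Label each company's addresses 1, 2, 3, ... and keep only the first.
SELECT id, name, address1
FROM (
    SELECT companies.id, companies.name, addresses.address1,
           row_number() OVER (PARTITION BY companies.id
                              ORDER BY addresses.id) AS rn
    FROM companies
    JOIN addresses ON addresses.company_id = companies.id
) sub
WHERE rn = 1;
```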
Thanks, I just posted my response to my own question for the archives. I take
it also that GROUP BY is faster than DISTINCT ON. If it is a substantial
performance gain, I have to work on this some more. A subquery, I would expect,
would be too much of a drag, so for all keystroke-updated list-tables
On Mon, Mar 12, 2012 at 1:35 PM, Alexander Reichstadt l...@mac.com wrote:
Hi,
the following statement worked on mysql but gives me an error on postgres:
column addresses.address1 must appear in the GROUP BY clause or be used in
an aggregate function
I guess I am doing something wrong. I
It is my understanding that, since the citext extension is available,
it gives you what you're asking for, and at least at this point it isn't
going to be part of the core.
For me it's more of a workaround than a solution, but yes, probably good
enough. Collation is more subtle than case
2012/3/12 Tim Uckun timuc...@gmail.com:
It is my understanding that, since the citext extension is available,
it gives you what you're asking for, and at least at this point it isn't
going to be part of the core.
For me it's more of a workaround than a solution but yes probably good
enough.
On 03/12/12 1:25 PM, Tim Uckun wrote:
create a new bigint field.
copy all the IDs to it.
index it in the background.
at a frequency of your choosing, sync the id field to the new field to keep it up to date.
at a time of your choosing, set the default for the new field to be
serial, starting at max(id).
drop
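The steps above might look roughly like this in SQL (table and column names are assumptions, and the batching and sync-trigger details are omitted):

```sql
-- 1. create a new bigint field
ALTER TABLE big_table ADD COLUMN new_id bigint;
-- 2. copy all the IDs to it (in practice, in batches to limit lock time)
UPDATE big_table SET new_id = id;
-- 3. index it in the background
CREATE INDEX CONCURRENTLY big_table_new_id_idx ON big_table (new_id);
-- 4. periodically sync rows added since the copy
UPDATE big_table SET new_id = id WHERE new_id IS NULL;
-- 5. at cutover, give the new column its own sequence starting past max(id):
--    CREATE SEQUENCE big_table_new_id_seq OWNED BY big_table.new_id;
--    SELECT setval('big_table_new_id_seq', (SELECT max(id) FROM big_table));
--    ALTER TABLE big_table ALTER COLUMN new_id
--        SET DEFAULT nextval('big_table_new_id_seq');
```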
I guess I lack the knowledge to integrate your answer into my query. Actually,
I'd prefer to always see the first address entered unless there is a
where-clause added. Not sure how this works out then and haven't tested it. But
given the initial query extended by DISTINCT ON, it would be like so:
If I didn't change the data type to character varying, vacuuming the database
didn't reduce the database size.
-Original Message-
From: Alban Hertroys haram...@gmail.com
Date: Mon, 12 Mar 2012 16:43:49
To: hidayat...@gmail.com
On Mon, Mar 12, 2012 at 3:19 PM, Alexander Reichstadt l...@mac.com wrote:
But where would I insert the max(address) piece?
Just put max() or min() around any field in the select list that's not
in the group by clause
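Applied to the earlier GROUP BY error, that advice would look something like this (names assumed from the error message):

```sql
-- Postgres requires non-grouped columns to be aggregated; wrapping them in
-- min()/max() picks one deterministic value per group.
SELECT companies.id, min(addresses.address1) AS address1
FROM companies
JOIN addresses ON addresses.company_id = companies.id
GROUP BY companies.id;
```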
On 03/12/12 2:28 PM, Nur Hidayat wrote:
If I didn't change the data type to character varying, vacuuming the database
didn't reduce the database size.
Changing the data type required every tuple to get rewritten. A
VACUUM FULL or a CLUSTER likely would have done the same or better.
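In other words, the space could likely have been reclaimed without the type change (the table name is a placeholder):

```sql
-- VACUUM FULL rewrites the whole table, returning freed space to the OS.
VACUUM FULL my_table;
-- CLUSTER also rewrites the table, ordered by an index, with the same
-- effect on bloat:
CLUSTER my_table USING my_table_pkey;
```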
I have a table that is around 20 GB, so I need to optimize as well as possible
the matching with another table on keywords across multiple fields. I have
around 10 fields that contain keywords or phrases delimited with the 'or'
operator |. So it would be in the form of 'a | b | and jack | cd'.
Tim,
Commando. I like it. Thanks a ton for that suggestion. I'd still like to
hear if anyone has a good way to estimate the performance of these
operations, but I'll explore what it would mean to do exactly that.
John: thankfully this is a table without any FKs in it, although it is indexed
to
Hi all
2011/7/12 Chris Travers chris.trav...@gmail.com:
I am not convinced that VoltDB is a magic bullet either. I don't
I have the chance to help prepare an interview with Mike
Stonebraker, to be published at www.odbms.org.
I'd really like to know if he is up to date on how Postgres performs
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jim Ostler
Sent: Monday, March 12, 2012 6:57 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Matching on keyword or phrases within a field that is
delimited with an or operator |
I have
Trying to update a varchar numeric-string column
by converting it to int, adding a numeric value, and inserting it back
as a varchar.
Having trouble with the cast.
On 03/12/2012 06:28 PM, Bret Stern wrote:
trying to update a varchar numeric string column
by converting it to int, adding a numeric value and insert it back
as a varchar
Having trouble with cast
Possibly having trouble with two casts: one from string to int, one
from int to string? You
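A minimal sketch of those two casts, assuming a table t with a varchar column n that holds numeric strings:

```sql
-- Cast to integer, add the value, cast back to varchar.
UPDATE t SET n = ((n::integer) + 5)::varchar;
-- Rows with non-numeric strings would make the first cast fail, so filter
-- them out if the column isn't guaranteed clean:
-- UPDATE t SET n = ((n::integer) + 5)::varchar WHERE n ~ '^[0-9]+$';
```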
On Mon, 2012-03-12 at 15:57 -0700, Jim Ostler wrote:
I have a table that is around 20 GB, so I need to optimize as best as
possible the matching with another table on keywords across multiple
fields. I have around 10 fields that have keywords or phrases
delimited with the or operator |. So it
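One possible approach (not necessarily the one in the truncated reply; all table, column, and index names here are assumptions) is to split the '|'-delimited fields into arrays and index them:

```sql
-- Split 'a | b | cd' into {a,b,cd}, trimming whitespace around '|'.
ALTER TABLE keywords ADD COLUMN terms text[];
UPDATE keywords SET terms = regexp_split_to_array(field1, '\s*\|\s*');
CREATE INDEX keywords_terms_gin ON keywords USING gin (terms);
-- The GIN index then supports fast overlap/containment matches:
-- SELECT ... FROM other_table o JOIN keywords k ON ARRAY[o.term] && k.terms;
```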
Bret Stern wrote:
trying to update a varchar numeric string column
by converting it to int, adding a numeric value and insert it back
as a varchar
Having trouble with cast
I assume you are doing an update as opposed to an insert. You use
both above (and both numeric and int as well).
On Mon, 2012-03-12 at 17:39 -0700, Bosco Rama wrote:
Bret Stern wrote:
trying to update a varchar numeric string column
by converting it to int, adding a numeric value and insert it back
as a varchar
Having trouble with cast
I assume you are doing an update as opposed to an
On 03/12/12 5:01 PM, Carson Gross wrote:
We are also considering sharding the table and maybe the right thing
is to simply fix it when we do the sharding.
postgres generally calls that partitioning... Sharding usually means
splitting data across multiple servers.
--
john r pierce
Hi all,
When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
what happens with indices? Are they updated only once after the operation, or
are they updated once per row? Note that I'm not replacing the table's data:
I'm appending to what's already there. I suspect
Hi,
On 13 March 2012 15:11, François Beausoleil franc...@teksol.info wrote:
When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
what happens with indices? Are they updated only once after the operation, or
are they updated once per row? Note that I'm not replacing
2012/3/12 François Beausoleil franc...@teksol.info:
Hi all,
When using COPY FROM STDIN to stream thousands of rows (20k and more hourly),
what happens with indices? Are they updated only once after the operation, or
are they updated once per row? Note that I'm not replacing the table's
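For what it's worth, Postgres maintains each index row by row as COPY inserts, so a common pattern for large appends is to drop and rebuild indexes around the load. Whether that pays off at ~20k rows per hour depends on the workload; the names below are placeholders:

```sql
DROP INDEX IF EXISTS events_created_at_idx;
COPY events (id, payload, created_at) FROM STDIN;
CREATE INDEX events_created_at_idx ON events (created_at);
```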