On 21/01/13 08:04, Tim Uckun wrote:
This is the query I am running
update cars.imports i
set make_id = md.make_id
from cars.models md where i.model_id = md.id;
Here is the EXPLAIN ANALYSE output:
Looks like it's the actual update that's taking all the time.
This query takes fifty seconds on a
Hello, Greetings! I tried all the options below. It takes approximately 1
hour 30 minutes to restore a 9GB database. This much time is not
affordable, as the execution of the test cases takes only 10% of this whole time and
waiting 1 hour 30 minutes after every test case execution is a lot
On Mon, Jan 21, 2013 at 3:01 PM, bhanu udaya udayabhanu1...@hotmail.com wrote:
Hello,
Greetings !
I tried all the options below. It takes approximately 1 hour 30
minutes to restore a 9GB database. This much time is not affordable
as the execution of the test cases takes only 10% of
Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.
SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;
Takes about 300 ms
CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models md
ON i.model_id = md.id;
On 21/01/13 10:30, Tim Uckun wrote:
Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.
SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;
Takes about 300 ms
CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models
bhanu udaya wrote:
I tried all the options below. It takes approximately 1 hour 30 minutes
to restore a 9GB
database. This much time is not affordable as the execution of the test
cases takes only 10% of this
whole time, and waiting 1 hour 30 minutes after every test case execution is
Hi Bhanu,
Yes, the approach below is faster.
I don't know if that helps, but have you tried creating a template database
and doing DROP DATABASE xxx; CREATE DATABASE xxx TEMPLATE mytemplate;
instead of restoring a dump every time?
Maybe that is faster.
If you are trying to take
On Mon, Jan 21, 2013 at 3:39 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
bhanu udaya wrote:
I tried all the options below. It takes approximately 1 hour 30
minutes to restore a 9GB
database. This much time is not affordable as the execution of the test
cases takes only 10% of
As I mentioned in my original post, I don't want to use citext or lower().
I tested on Windows, but as I mentioned in one of my first posts, collation
and case sensitivity are separate things.
With this, we are back at the beginning of the circle, so I'll leave it
there.
Maybe I'll check back in
Marcel van Pinxteren, 21.01.2013 13:22:
As I mentioned in my original post, I don't want to use citext or lower().
Why not for the unique index/constraint?
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
I'm wondering why varchar_ops is not the default operator class for all
indexed varchar fields.
Is the impact too heavy?
Thanks for the clarification,
Edson
On 01/20/2013 11:17 PM, bhanu udaya wrote:
I am trying to restore a 9.5GB database (1GB dump file) which has 500
schemas with 1 lakh (100,000) rows in each schema. I could take the data dump using
pg_dump, and it takes around 40 minutes. I tried to use pg_restore to
restore this dump, but it takes hours to
To make my question more concrete:
if I'd like to round-robin 6 PostgreSQL connections
from my Perl script - how should I change my code:
eval {
my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
AutoCommit => 1,
PrintWarn => 1,
I would like to add a private key to make
my dbh's different throughout my script:
eval {
my $dbh = DBI->connect_cached(DSN, DBUSER, DBPASS, {
AutoCommit => 1,
MY_PRIVATE_KEY => __FILE__.__LINE__, ### <-- HERE
Hello, Thanks a lot for all your replies. I tried all the settings suggested; it did
not work. pg_restore is very slow. It does not finish in less than 1 1/2 hours.
Can you please let me know the procedure for the template approach. Will it restore the data
also? Please update. I need the database (metadata +
bhanu udaya wrote:
Can you please let me know the procedure for the template approach. Will it restore the
data also?
It will create a complete copy of an existing database.
The procedure is
CREATE DATABASE newdb TEMPLATE olddb;
Nobody may be connected to olddb for this to work.
Yours,
Laurenz Albe
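For future googlers, Laurenz's recipe might be fleshed out along these lines (database names are placeholders, not the OP's; on 9.2 pg_stat_activity exposes pid, older releases call it procpid):

```sql
-- Disconnect everyone from the source database, then clone it.
-- 'olddb' and 'newdb' are illustrative names only.
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE datname = 'olddb' AND pid <> pg_backend_pid();
DROP DATABASE IF EXISTS newdb;
CREATE DATABASE newdb TEMPLATE olddb;
```

CREATE DATABASE ... TEMPLATE copies the source files directly, so no SQL replay is needed, which is why it beats pg_restore.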
On 21 January 2013 16:10, bhanu udaya udayabhanu1...@hotmail.com wrote:
Can you please let me know the procedure for the template approach.
As they say, Google is your friend.
The basic principle is this: You create a read-only (template) version of
your sample database and use that as a template for the
What is the behavior if a column data type is timestamptz but there is
only the date portion available? There must be a default time; can that be
defined?
Rich
On 01/21/2013 07:26 AM, Rich Shepard wrote:
What is the behavior if a column data type is timestamptz but there is
only the date portion available? There must be a default time; can that be
defined?
Easy enough to test:
test=# create table ts_test(ts_fld timestamp with time zone);
CREATE
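The snippet is cut off here; presumably the test continues along these lines (the result is hedged, since it depends on the session's TimeZone setting):

```sql
-- Inserting a bare date into a timestamptz column fills in local midnight.
INSERT INTO ts_test VALUES ('2013-01-21');
SELECT ts_fld FROM ts_test;
-- e.g.  2013-01-21 00:00:00-08  with TimeZone = 'US/Pacific'
```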
Tom Lane wrote:
I tested not only with string literals, but also comparing
table columns of the respective types.
I came up with the following table of semantics used for
comparisons:
| CHAR(n)=CHAR(n) | VARCHAR(n)=VARCHAR(n) | CHAR(n)=VARCHAR(n) |
On Mon, 21 Jan 2013, Adrian Klaver wrote:
Easy enough to test:
Thanks again, Adrian.
Rich
On Monday, January 21, 2013 08:40:05 PM bhanu udaya wrote:
Hello, Thanks a lot for all your replies. I tried all settings suggested; it
did not work. pg_restore is very slow. It does not finish in less than 1 1/2
hours. Can you please let me know the procedure for the template approach. Will it
restore the
Alejandro Carrillo wrote:
doesn't this function work to know if a row is surely dead?
http://doxygen.postgresql.org/tqual_8c_source.html#l01236
Sure, as long as you call it after HeapTupleSatisfiesMVCC(), as the
comment specifies. Also note that not all deleted or updated tuples
will be
Hello, Thanks all for the great help. The template is the best option seen so far.
It completes a little more quickly than pg_restore, but it still takes 40 minutes.
I am using Windows with 4GB RAM. Thanks and Regards, Radha Krishna
From: laurenz.a...@wien.gv.at
To: udayabhanu1...@hotmail.com;
On Mon, Jan 21, 2013 at 5:22 AM, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
As I mentioned in my original post, I don't want to use citext or lower().
I tested on Windows, but as I mentioned in one of my first posts, collation
and case sensitivity are separate things.
Wait, is
To be honest, the reason I don't want to use citext and lower(), is me
being lazy. If I have to use these features, there is more work for me
converting from SQL Server to Postgresql. I have to make more changes to my
database, and more to my software.
But, developers are generally lazy, so you
On 21 January 2013 17:25, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
The other reason, is that I assume that lower() adds overhead, so makes
things slower than they need to be.
Whether that is true, and if that is a compelling reason, I don't know.
Case insensitive collation
Hello All,
Can we achieve this template or pg_restore approach in less than 20 minutes? Any
more considerations? Kindly reply. Thanks and Regards, Radha Krishna From:
udayabhanu1...@hotmail.com
To: laurenz.a...@wien.gv.at; chris.trav...@gmail.com
CC: mag...@hagander.net; franc...@teksol.info;
On 01/21/2013 08:46 AM, bhanu udaya wrote:
Hello All,
Can we achieve this template or pg_restore approach in less than 20 minutes?
Any more considerations? Kindly reply.
Seems to me this is where Point in Time Recovery (PITR) might be helpful.
Richard Huxton wrote:
The only differences I can think of are WAL logging (transaction
log) and index updates (the temp table has no indexes).
What about foreign keys? Are there any tables which reference the
updated column in a foreign key declaration? Do they have indexes
on that column?
Edson Richter edsonrich...@hotmail.com writes:
I'm wondering why varchar_ops is not the default operator class for all
indexed varchar fields.
varchar has no operators of its own; it just relies on the operators for
type text. Therefore text_ops is the formally correct choice. The
varchar_ops
Em 21/01/2013 17:18, Tom Lane escreveu:
Edson Richter edsonrich...@hotmail.com writes:
I'm wondering why varchar_ops is not the default operator class for all
indexed varchar fields.
varchar has no operators of its own; it just relies on the operators for
type text. Therefore text_ops is the
Adrian Klaver adrian.kla...@gmail.com writes:
On 01/21/2013 07:26 AM, Rich Shepard wrote:
What is the behavior if a column data type is timestamptz but there is
only the date portion available? There must be a default time; can that be
defined?
Easy enough to test:
test=# create table
On Mon, 21 Jan 2013, Tom Lane wrote:
Note that that default is local midnight according to your current
timezone setting (from which we may guess that Adrian lives on the US west
coast, or somewhere in that general longitude).
Yep. About 3 hours north of me.
Not sure you can change the
On 01/21/2013 11:27 AM, Tom Lane wrote:
Adrian Klaver adrian.kla...@gmail.com writes:
On 01/21/2013 07:26 AM, Rich Shepard wrote:
What is the behavior if a column data type is timestamptz but there is
only the date portion available? There must be a default time; can that be
defined?
Easy
Edson Richter edsonrich...@hotmail.com writes:
I see. So, what is the overhead of having text_ops in opclass?
Can I define it as default for all my indexes when textual type of any kind?
Why are you intent on defining anything? IMO, best practice is to let
the database choose the opclass,
Just to close this up and give some guidance to future googlers...
There are no foreign keys. The table is empty when I start. I copy the
data into it from a text file.
Removing indexes didn't help much (made a very slight difference).
running a query CREATE TEMP TABLE tt AS SELECT using a
Em 21/01/2013 18:03, Tom Lane escreveu:
Edson Richter edsonrich...@hotmail.com writes:
I see. So, what is the overhead of having text_ops in opclass?
Can I define it as default for all my indexes when textual type of any kind?
Why are you intent on defining anything? IMO, best practice is to
Edson Richter edsonrich...@hotmail.com writes:
Thanks, but I've found that some queries using the LIKE operator use a table
scan instead of the index unless it is defined with varchar_ops in the index...
You mean varchar_pattern_ops? That's an entirely different animal.
regards,
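For reference, a sketch of what varchar_pattern_ops enables (table and column names here are made up): in a non-C locale, a plain index cannot serve a left-anchored LIKE, while a pattern-ops index can.

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE items (name varchar(100));
CREATE INDEX items_name_idx ON items (name);  -- ordinary =, <, > comparisons
CREATE INDEX items_name_pat_idx
    ON items (name varchar_pattern_ops);      -- serves LIKE 'abc%'
-- With the second index, the planner can use an index scan for:
-- SELECT * FROM items WHERE name LIKE 'abc%';
```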
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future googlers...
Careful, future googlers.
Conclusion. Updates on postgres are slow
Nope.
(given the default
postgresql.conf). I presume this is due to MVCC or the WAL or
something and there are probably
Adrian Klaver wrote:
On 01/21/2013 08:46 AM, bhanu udaya wrote:
Can we achieve this template or pg_Restore in less than 20
minutes time.
Seems to me this is where Point in Time Recovery(PITR) might be
helpful.
Maybe, if the source is on a different set of drives, to reduce
contention for
On Mon, Jan 21, 2013 at 1:23 PM, Richard Huxton d...@archonet.com wrote:
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future googlers...
Careful, future googlers.
Conclusion. Updates on postgres are slow
Nope.
(given the default
Nope.
If you have any suggestions I am all ears. For the purposes of this
discussion we can narrow the problem down to this update statement.
Update imports set make_id = null.
There are 98K records in the table. There is no index on the make_id
field. Standard untouched postgresql.conf from the
On Mon, Jan 21, 2013 at 9:25 AM, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
To be honest, the reason I don't want to use citext and lower(), is me being
lazy. If I have to use these features, there is more work for me converting
from SQL Server to Postgresql. I have to make more
I'd be curious to see results of the same update on a standard HDD
vs the SSD, and maybe on a more typical database deployment hardware
vs a macbook air.
I haven't tried it on any other machine yet. CREATE TEMP TABLE tt as
SELECT ... takes eight seconds so presumably the disk is not the
Richard Huxton wrote:
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future
googlers...
Careful, future googlers.
+1
Conclusion. Updates on postgres are slow
Nope.
Agreed.
(given the default postgresql.conf). I presume this is due to
MVCC or the
Tim Uckun wrote:
If you have any suggestions I am all ears. For the purposes of this
discussion we can narrow the problem down to this update statement.
Update imports set make_id = null.
Well, that simplifies things.
First off, what does it say for rows affected? (Hint, if you really
are
On Mon, Jan 21, 2013 at 1:48 PM, Tim Uckun timuc...@gmail.com wrote:
I'd be curious to see results of the same update on a standard HDD
vs the SSD, and maybe on a more typical database deployment hardware
vs a macbook air.
I haven't tried it on any other machine yet. CREATE TEMP TABLE tt
Scott Marlowe wrote:
Honestly as a lazy DBA I have to say it'd be pretty easy to write a
script to convert any unique text index into a unique text index with
an upper() in it. As another poster added, collation ain't free
either. I'd say you should test it to see. My experience tells me
that
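The conversion Scott describes might be sketched like this (index and table names are hypothetical; if the original index backs a UNIQUE constraint, drop it with ALTER TABLE ... DROP CONSTRAINT instead):

```sql
-- Replace a plain unique index with a case-insensitive functional one.
DROP INDEX users_email_key;
CREATE UNIQUE INDEX users_email_upper_key ON users (upper(email));
-- Lookups must then also apply upper() for the index to be used:
-- SELECT * FROM users WHERE upper(email) = upper('Foo@Example.com');
```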
Kevin Grittner wrote:
First off, what does it say for rows affected? (Hint, if you really
are using a default configuration and it doesn't say 0 rows
affected, please show us the actual query used.)
Never mind that bit -- I got myself confused. Sorry for the noise.
-Kevin
Em 21/01/2013 18:36, Tom Lane escreveu:
Edson Richter edsonrich...@hotmail.com writes:
Thanks, but I've found that some queries using the LIKE operator use a table
scan instead of the index unless it is defined with varchar_ops in the index...
You mean varchar_pattern_ops? That's an entirely different
Marcel van Pinxteren wrote on 21.01.2013 17:25:
The other reason, is that I assume that lower() adds overhead
It won't add any noticeable overhead for the unique index.
On Mon, Jan 21, 2013 at 1:45 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Jan 21, 2013 at 9:25 AM, Marcel van Pinxteren
marcel.van.pinxte...@gmail.com wrote:
To be honest, the reason I don't want to use citext and lower(), is me being
lazy. If I have to use these features, there is
On Monday, January 21, 2013 at 06:53, Adrian Klaver wrote:
On 01/21/2013 11:27 AM, Tom Lane wrote:
Note that that default is local midnight according to your current
timezone setting (from which we may guess that Adrian lives on the US
west coast, or somewhere in that general longitude).
First off, what does it say for rows affected? (Hint, if you really
are using a default configuration and it doesn't say 0 rows
affected, please show us the actual query used.)
update imports set make_id = null
Query returned successfully: 98834 rows affected, 49673 ms execution time.
Oh I forgot
SELECT version();
PostgreSQL 9.2.2 on x86_64-apple-darwin12.2.1, compiled by Apple
clang version 4.1 (tags/Apple/clang-421.11.65) (based on LLVM 3.1svn),
64-bit
SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');
On 01/21/2013 02:48 PM, Gavan Schneider wrote:
On Monday, January 21, 2013 at 06:53, Adrian Klaver wrote:
On 01/21/2013 11:27 AM, Tom Lane wrote:
Note that that default is local midnight according to your current
timezone setting (from which we may guess that Adrian lives on the US
west
On 01/21/2013 03:45 PM, Tim Uckun wrote:
Oh I forgot
...
shared_buffers;1600kB;configuration file
You *reduced* shared buffers to 1.6MB? IIRC the typical default is 32MB
and the most common adjustment is to *increase* shared buffers. Most of
my servers are set to 2GB.
Try bumping that up
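The truncated advice presumably ends with a postgresql.conf change along these lines (the value is illustrative, not a recommendation for this machine; changing shared_buffers requires a server restart):

```
# postgresql.conf -- illustrative value only
shared_buffers = 256MB
```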
On Tuesday, January 22, 2013 at 09:48, I wrote:
(and I did report it as a bug back then)
Didn't pick this up on my pre-post re-read: the bug report was
_NOT_ against PostgreSQL. It was some very early incarnations of
OSX iCal, etc. which showed this behaviour.
Apologies for the
On 01/21/2013 04:15 PM, Steve Crawford wrote:
On 01/21/2013 03:45 PM, Tim Uckun wrote:
Oh I forgot
...
Me, too. I forgot to ask for the table definition. If there are
variable-length fields like text or varchar, what is the typical
size of the data?
Also, what is the physical size of the
On 01/21/2013 03:53 PM, Steve Crawford wrote:
On 01/21/2013 02:48 PM, Gavan Schneider wrote:
On Monday, January 21, 2013 at 06:53, Adrian Klaver wrote:
On 01/21/2013 11:27 AM, Tom Lane wrote:
Note that that default is local midnight according to your current
timezone setting (from which
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
On Tue, Jan 22, 2013 at 1:23 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
select * from pg_stat_user_tables where relname='yourtable';
Messy output
Adrian Klaver wrote:
I see where my confusion lies. There are two proposals at work in the above:
Taking another tangent I would much prefer the default time to be
12:00:00 for the conversion of a date to timestamp(+/-timezone)
Propose: '2013-12-25'::timestamp == 2013-12-25 12:00:00
On 01/21/2013 05:06 PM, Kevin Grittner wrote:
Adrian Klaver wrote:
I see where my confusion lies. There are two proposals at work in the above:
Taking another tangent I would much prefer the default time to be
12:00:00 for the conversion of a date to timestamp(+/-timezone)
Propose:
Kevin Grittner wrote:
update imports set make_id = 0
Query returned successfully: 98834 rows affected, 45860 ms execution time.
For difficult problems, there is nothing like a self-contained test
case, that someone else can run to see the issue. Here's a starting
point:
create extension if
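Kevin's test case is cut off here; based on details earlier in the thread (98834 rows, a few integer columns plus one hstore field), it presumably starts something like this guessed reconstruction, not his actual script:

```sql
-- Guessed self-contained reproduction of the slow update.
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE imports (id serial PRIMARY KEY, make_id int, data hstore);
INSERT INTO imports (make_id, data)
SELECT NULL, 'color=>red, doors=>4'::hstore
  FROM generate_series(1, 98834);
UPDATE imports SET make_id = 0;  -- the statement being timed
```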
Adrian Klaver wrote:
If I was following Gavan correctly, he wanted to have a single
timestamp field to store calendar dates and datetimes. In other
words to cover both date only situations like birthdays and
datetime situations like an appointment.
If that is actually true, it sounds like
Hi,
I have an application that creates temp tables to speed up the fetching of
the data
Everything was working fine until a couple of days ago, the database is
starting to display the following error more and more
cache lookup failed for relation 1852615815
I noticed that this error
On Monday, January 21, 2013 at 12:06, Kevin Grittner wrote:
Adrian Klaver wrote: [Actually Gavan Schneider wrote this, don't blame Adrian :]
I see where my confusion lies. There are two proposals at work in the above:
Taking another tangent I would much prefer the default time
to be
Pascal Tufenkji ptufen...@usj.edu.lb writes:
Everything was working fine until a couple of days ago, the database is
starting to display the following error more and more
cache lookup failed for relation 1852615815
Hm, what PG version is that? Is 1852615815 anywhere near the range of
existing
On 01/21/2013 07:40 PM, Gavan Schneider wrote:
On Monday, January 21, 2013 at 12:06, Kevin Grittner wrote:
Well, the big problem here is in trying to use either version of
timestamp when what you really want is a date. It will be much
easier to get the right semantics if you use the date
Adrian Klaver adrian.kla...@gmail.com writes:
If I have learned anything about dealing with dates and times, it is that
they are a set of exceptions bound together by a few rules. Every time you
think you have the little rascals cornered, one gets away.
Yeah, that's for sure. Anyway, I think we
On Monday, January 21, 2013 at 10:53, Steve Crawford wrote:
On 01/21/2013 02:48 PM, Gavan Schneider wrote:
Taking another tangent I would much prefer the default time to
be 12:00:00 for the conversion of a date to timestamp(+/-timezone).
Propose: '2013-12-25'::timestamp == 2013-12-25
On Monday, January 21, 2013 at 15:33, Tom Lane wrote:
I think it is also arguably contrary to the SQL standard...
17) If TD is the datetime data type TIMESTAMP WITHOUT TIME ZONE,
then let TSP be the timestamp precision of TD.
b) If SD is a date, then the primary datetime fields hour,
minute,
On Monday, January 21, 2013, Tim Uckun wrote:
First off, what does it say for rows affected? (Hint, if you really
are using a default configuration and it doesn't say 0 rows
affected, please show us the actual query used.)
update imports set make_id = null
Query returned successfully:
On Monday, January 21, 2013, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
one hstore field can easily be equivalent to 50 text fields with an index
on each one.
I'm pretty sure that that is your bottleneck.
what does \di+
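The question is cut off at \di+; that psql meta-command lists indexes with their sizes, which can also be had in plain SQL, e.g.:

```sql
-- Roughly what \di+ reports, restricted to the table from this thread.
SELECT c.relname AS index,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
  FROM pg_class c
  JOIN pg_index i ON i.indexrelid = c.oid
 WHERE i.indrelid = 'imports'::regclass;
```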
Monday, January 21, 2013, 8:56:38 PM, you wrote:
Except for days that are 23-hours long, or 25, or other (it's a big
world with all sorts of timezone rules).
The day's length may change but I don't believe there is
anywhere that allows for the local time of day to equal or be
greater than