On Sun, Jun 13, 2010 at 17:58, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Sun, Jun 13, 2010 at 17:42, Tom Lane t...@sss.pgh.pa.us wrote:
... (We presumably want
timezone to default to the system timezone setting, but I wonder how
we should make that work
On Sun, Jun 13, 2010 at 21:19, David Jarvis thanga...@gmail.com wrote:
Hi,
It's not immediately obvious what the default value of timezone
will be.
The system's locale, like now(); documentation can clarify.
By named parameter, I meant default value. You could construct a timestamp
Can any one please help me in tuning the query?
explain
select *
from (select * from crmentity where deleted = 0 and createdtime between
(now() - interval '6 month') and now() ) as crmentity
inner join (select * from activity where activitytype = 'Emails' and
date_start between (now() - interval
Does it make sense to allow minutes when hours isn't specified? Or
For time, 00 seems a reasonable default for all values; clearly document the
defaults. Also, having a default makes the code simpler than level n plus
level n-1. (Not to mention explaining it.) ;-)
SELECT to_timestamp( minutes
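A user-level sketch of the defaulting behaviour under discussion (the function name, parameter names, and defaults here are illustrative assumptions, not the API being proposed in the thread; it assumes a PostgreSQL version supporting parameter defaults and named parameter references in SQL functions):

```sql
-- Illustrative sketch only: not the proposed API.
CREATE OR REPLACE FUNCTION construct_timestamp(
    year    int,
    month   int  DEFAULT 1,
    day     int  DEFAULT 1,
    hours   int  DEFAULT 0,
    minutes int  DEFAULT 0,
    seconds int  DEFAULT 0,
    tz      text DEFAULT current_setting('TimeZone')  -- system zone, like now()
) RETURNS timestamptz LANGUAGE sql AS $$
    SELECT (year || '-' || month || '-' || day || ' ' ||
            hours || ':' || minutes || ':' || seconds)::timestamp
           AT TIME ZONE tz;
$$;

-- Everything after "month" falls back to its documented default:
SELECT construct_timestamp(2010, 6);  -- midnight on 2010-06-01, session zone
```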
Hi all,
I have two databases trying to perform an update query at the same time
on the same table in a third database using dblink.
I'm getting a deadlock exception:
ERROR: deadlock detected
DETAIL: Process 27305 waits for ShareLock on transaction 55575; blocked
by process 27304.
Process
On 14/06/10 12:50, Elias Ghanem wrote:
SELECT INTO DUMMY DBLINK_EXEC('CONNECTION_STRING TO DB3',
'UPDATE IN_FICHE_PRODUIT SET VALIDE = 1');
If there's more than one value in that table, an explicit ORDER BY might
help (otherwise you could get the situation where query A will update
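A hedged sketch of that idea applied to the dblink'd bulk update above: lock the rows in a deterministic order first, then update. (It assumes IN_FICHE_PRODUIT has a primary-key column, here called id, which is an assumption.)

```sql
-- Lock all target rows in a fixed order before updating them, so two
-- concurrent sessions cannot acquire row locks in opposite orders.
BEGIN;
SELECT 1 FROM in_fiche_produit ORDER BY id FOR UPDATE;
UPDATE in_fiche_produit SET valide = 1;
COMMIT;
```

The explicit SELECT ... ORDER BY ... FOR UPDATE is used because, as noted above, an ORDER BY buried in a subquery feeding an UPDATE is not a guaranteed lock-acquisition order.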
On 06/14/2010 05:41 AM, AI Rumman wrote:
Can any one please help me in tuning the query?
explain
select *
from (select * from crmentity where deleted = 0 and createdtime between
(now() - interval '6 month') and now() ) as crmentity
inner join (select * from activity where activitytype =
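For a query shaped like the one above, a usual first tuning step is to check that both inputs to the join are indexable. The names below come from the fragment; whether these particular indexes help depends on the EXPLAIN ANALYZE output and the data distribution.

```sql
-- Partial index covering only the non-deleted rows the query touches.
CREATE INDEX crmentity_createdtime_live_idx
    ON crmentity (createdtime)
    WHERE deleted = 0;

-- Composite index matching the activity-side filter.
CREATE INDEX activity_type_datestart_idx
    ON activity (activitytype, date_start);

-- Then re-run the original query under EXPLAIN ANALYZE to see
-- whether the plan actually changed.
```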
Magnus Hagander mag...@hagander.net writes:
On Sun, Jun 13, 2010 at 21:19, David Jarvis thanga...@gmail.com wrote:
I prefer to_timestamp and to_date over the more verbose construct_timestamp.
Yeah, I agree with that.
Those names are already taken. It will cause confusion (of both people
and
On 06/14/2010 06:50 AM, Elias Ghanem wrote:
Hi all,
I have two databases trying to perform an update query at the same time
on the same table in a third database using dblink.
I'm getting a deadlock exception:
ERROR: deadlock detected
DETAIL: Process 27305 waits for ShareLock on transaction
AI Rumman rumman...@gmail.com wrote:
[It takes a long time to return 2.6 million rows.]
Any more ideas, please?
I don't recall you telling us exactly what the environment and
connection type is in which you're trying to return this large
result set. Any specific suggestions would depend on
Hi,
Actually I guess the problem is related to the way PG acquires
locks on the rows that will be updated.
Suppose the update query will affect 5 rows: A, B, C, D and E.
Apparently the following scenario is happening:
1- Transaction 1 locks row A
2- Transaction 2 locks row B
3-
It's a standard (indeed, required) best practice of concurrent database
programming across any brand of database to ensure that multi-row
transactions always acquire the locks they use in a predictable order based
on row identities, e.g. for the classic banking debit-credit pair, doing
something
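The debit-credit pair mentioned above can be sketched like this (table name, column names, and ids are illustrative):

```sql
-- Always lock the two account rows in ascending id order, no matter
-- which account is debited; concurrent transfers then queue on the
-- row locks instead of deadlocking.
BEGIN;
SELECT balance FROM accounts
 WHERE id IN (42, 97)          -- the two accounts involved
 ORDER BY id
 FOR UPDATE;
UPDATE accounts SET balance = balance - 100.00 WHERE id = 42;
UPDATE accounts SET balance = balance + 100.00 WHERE id = 97;
COMMIT;
```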
I am facing sar-related issues on Red Hat Enterprise Linux 5.4 (64-bit) too (60G RAM,
no swap space, Xeon processor).
sar -o /var/dbt2_data/PG/Output/driver/dbt2-sys1/sar_raw.out 60 204
|___ sadc 60 205 -z /var/dbt2_data/PG/Output/driver/dbt2-sys1/sar_raw.out
It generates the following sar data, i.e.
On Thu, Jun 10, 2010 at 5:36 AM, AI Rumman rumman...@gmail.com wrote:
I found only AccessShareLock in pg_locks during the query.
And the query does not return data though I have been waiting for 10 minutes.
Do you have any idea?
I have queries that run for hours. As long as it's using CPU / IO
On Mon, Jun 14, 2010 at 11:58 AM, Dave Crooke dcro...@gmail.com wrote:
If you're doing straight SQL bulk updates, then as someone suggested, you
could use an ORDER BY on a subquery, but I don't know if that is a guarantee,
if you're not actually displaying the results then the DB may be
Hi Bob,
Thanks a lot. Here's my best attempt to answer your questions:
The VM is setup with a virtual disk image dynamically expanding to fill
an allocation of 300GB on a fast, local hard drive (avg read speed =
778MB/s ).
WAL files can have their own disk, but how significantly would this
Alvaro Herrera wrote:
Excerpts from Tom Lane's message of jue jun 10 11:46:25 -0400 2010:
Yes, the folks at commandprompt need to be told about this. Loudly.
It's a serious packaging error.
Just notified Lacey, the packager (not so loudly, though); she's working
on new packages, and
On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox hungry...@gmail.com wrote:
maintenance_work_mem=4GB
work_mem=4GB
shared_buffers=4GB
effective_cache_size=4GB
wal_buffers=1GB
It's pretty easy to drive your system into swap with such a large
value for work_mem - you'd better monitor that carefully.
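One way to act on that warning, sketched with illustrative values: since every sort or hash node in every backend may allocate up to work_mem, keep the server-wide setting modest and raise it only in the session that runs the heavy query.

```sql
-- postgresql.conf: a modest server-wide default
--   work_mem = 64MB

-- In the one session that runs the big batch query:
SET work_mem = '1GB';
-- ... run the heavy query here ...
RESET work_mem;
```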
Tom
I always prefer to choose apps based on business needs, then the OS based on
the needs for the app.
Cynically, I often feel that the best answer to "we have a policy that says
we're only allowed to use operating system X" is to ignore the policy
the kind of people ignorant enough to be
Hi Dave,
I am definitely able to switch OS if it will get the most out of
Postgres. So it is definitely a case of choosing the OS based on the needs
of the app, provided it is well justified.
Currently, we are running Ubuntu Server 64-bit in a VirtualBox VM.
Cheers,
Tom
Dave Crooke wrote:
Tom
Thanks a lot Dave,
That's exactly the kind of answer I can use to justify the OS switch.
Motivation for the previous setup was based on the fact that we will use
the same machine for other projects that will use SQL Server and most of
our experience lies within the MS domain. However, these
Tom Wilcox wrote:
default_statistics_target=1
wal_buffers=1GB
max_connections=3
effective_cache_size=15GB
maintenance_work_mem=5GB
shared_buffers=7000MB
work_mem=5GB
That value for default_statistics_target means that every single query
you ever run will take a seriously long time to
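Recovering from that setting is straightforward, sketched here with illustrative values and a hypothetical table name; note that a new statistics target only takes effect after re-analyzing.

```sql
-- Restore the sample size (100 was the shipped default around 8.4/9.0):
SET default_statistics_target = 100;

-- Or raise it only for a column with a skewed distribution:
ALTER TABLE big_table ALTER COLUMN created SET STATISTICS 500;

-- Rebuild the statistics so the planner can see them:
ANALYZE;
```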
Just curious if this would apply to PostgreSQL:
http://queue.acm.org/detail.cfm?id=1814327
Now that I've read it, it seems like a no-brainer. So, how does PostgreSQL
deal with the different latencies involved in accessing data on disk for
searches / sorts
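PostgreSQL does not use cache-oblivious algorithms; instead the planner models those latency differences with cost constants that can be tuned to the storage. The values below are examples, not recommendations.

```sql
-- Relative cost of a random page fetch vs. a sequential one; lower it
-- when seeks are cheap (data mostly cached, or fast storage).
SET random_page_cost = 2.0;        -- shipped default is 4.0
SET seq_page_cost = 1.0;           -- the baseline
-- How much of the database the OS page cache likely holds:
SET effective_cache_size = '8GB';
```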
On Jun 14, 2010, at 11:53 AM, Tom Wilcox wrote:
max_connections=3
effective_cache_size=15GB
maintenance_work_mem=5GB
shared_buffers=7000MB
work_mem=5GB
maintenance_work_mem doesn't need to be so high, it certainly has no effect on
your queries below. It would affect vacuum,
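Since it only affects maintenance commands, maintenance_work_mem can be raised per session just for those operations rather than globally (the index name, table name, and size here are illustrative):

```sql
SET maintenance_work_mem = '2GB';   -- this session only
CREATE INDEX CONCURRENTLY activity_date_start_idx
    ON activity (date_start);
RESET maintenance_work_mem;
```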
On Jun 14, 2010, at 7:06 PM, Greg Smith wrote:
I really cannot imagine taking a system as powerful as you're using here
and crippling it by running through a VM. You should be running Ubuntu
directly on the hardware, ext3 filesystem without LVM, split off RAID-1
drive pairs dedicated to
Scott Carey sc...@richrelevance.com writes:
Great points. There is one other option that is decent for the WAL:
If splitting out a volume is not acceptable for the OS and WAL -- absolutely
split those two out into their own partitions. It is most important to make
sure that WAL and data