Dear all,
I have a simple query, shown below:
select count(*) from page_content where (content like '%Militant%'
OR content like '%jihad%' OR content like '%Mujahid%' OR
content like '%fedayeen%' OR content like '%insurgent%' OR
content like '%terrorist%' OR
content like '%cadre%');
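One common rewrite of such an OR-chain of LIKEs, assuming the matching is meant to be case-insensitive, is a single regular-expression test (the alternation below is only a sketch, not a tuned pattern):

```sql
-- case-insensitive regex match replacing the chain of LIKE ... OR LIKE ...
SELECT count(*)
FROM page_content
WHERE content ~* 'militant|jihad|mujahid|fedayeen|insurgent|terrorist|cadre';
```

This keeps the predicate in one place, though like the original it still forces a sequential scan unless a text-search index is added.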
There's a generic trigger that sends a signal to a process whenever
changes are made (via the listen/notify mechanism), but when an FK cascade
fires it causes a mass of notifications to be sent out, and I
want to avoid that.
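Later PostgreSQL releases (9.2 and up) added pg_trigger_depth(), which makes exactly this distinction possible; a sketch, assuming a hypothetical notify_change() trigger function:

```sql
CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
BEGIN
    -- depth 1 = fired directly by a client statement;
    -- deeper levels come from cascaded actions such as FK triggers
    IF pg_trigger_depth() = 1 THEN
        NOTIFY table_changed;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

On the versions discussed in this thread that function does not exist, so the cascade-generated notifies would have to be tolerated or the trigger disabled around bulk deletes.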
2011/3/18 David Johnston pol...@yahoo.com:
Don't know if this would work
On Fri, 18 Mar 2011 12:30:50 +0530
Adarsh Sharma adarsh.sha...@orkash.com wrote:
Dear all,
I have a simple query mentioned below :
select count(*) from page_content where (content like
'%Militant%' OR content like '%jihad%' OR content like
'%Mujahid%' OR content like '%fedayeen%' OR
Autovacuum is generally more effective, as it can run when it needs to
rather than having to wait until the end of the day. If you delete big chunks
several times a day, autovacuum can keep up. Also, it's enabled by
default in 8.4 and up, so the end user would have to actively turn it
off in this instance.
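For reference, the rule autovacuum uses to decide when to vacuum a table can be sketched as follows (the values shown are the stock 8.4 defaults):

```sql
-- autovacuum vacuums a table once its dead-row count exceeds:
--   autovacuum_vacuum_threshold
--     + autovacuum_vacuum_scale_factor * reltuples
-- 8.4 defaults: threshold = 50, scale_factor = 0.2
SHOW autovacuum_vacuum_threshold;
SHOW autovacuum_vacuum_scale_factor;
```

So a table with a million live rows is vacuumed after roughly 200,050 dead rows accumulate, which is why large periodic deletes are picked up without any manual scheduling.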
On
So we are in the process of converting our databases from SQL_ASCII to
UTF8. If a particular row won't import because of the encoding issue we
get an error like:
pg_restore: [archiver (db)] Error from TOC entry 5317; 0 1266711 TABLE
DATA logs postgres
pg_restore: [archiver (db)] COPY failed:
Hi All
I am using postgres-8.4.0 with the psqlodbc-08.03.0400-1 and unixODBC-2.2.14-000.01
driver to connect to the database. One of the queries executed from my application
has been stuck for an indefinite amount of time, causing my application to hang,
so I cored the application. The
core file
Our application supports pg 8.4 onwards. The vacuums are run automatically by
the application rather than requiring end users to enable autovacuum; the
vacuums being performed are regular.
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Thursday, March
What are the general guidelines under which autovacuum will trigger? I was
unaware it was turned on by default for the newer versions. Would it be
worthwhile to leave the manual vacuuming on? Currently it runs immediately
after large sections of the tables are deleted. Or would it be
Oh, sorry for that.
My client code is Npgsql. I pulled those bytes from a debugging session
directly from the network stream. I wanted to know what bytes npgsql was
receiving.
This is the method which reads the data:
public static String ReadString(Stream network_stream)
{
On Fri, Mar 18, 2011 at 4:31 AM, tamanna madaan
tamanna.ma...@globallogic.com wrote:
Hi All
I am using postgres-8.4.0 and psqlodbc-08.03.0400-1 and
unixODBC-2.2.14-000.01 driver to connect
to the database. One of the queries I executed from my application have got
stuck for an
indefinite
This helped; the entire script is now down from 14.9 min to 10.9 min.
Thanks.
On 17.3.2011 19:29, Davenport, Julie wrote:
I have not yet had time to try Tomas' suggestion of bumping up the
work_mem first (trying to figure out how to do that from within a
coldfusion script).
My TO_CHAR function is now an hour off thanks to Daylight Savings Time.
The dates are correct (I'm in EST: TZ=-04) but my function now returns TIME
an hour early.
(prior to DST we were TZ=-05).
TIMESTAMPTZ data (v_dt): 2011-03-17 18:21:50-04
FUNCTION SNIPPET: to_char(v_dt, 'mm/dd/ hh:mi AM')
I found a work around... Not sure why this is the behavior
select extract ( HOUR FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone
'utc' ) at time zone 'utc' ) gives what I expect would be the correct
answer
BUT..
select extract ( EPOCH FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone
'utc'
On 03/17/2011 05:05 PM, bubba postgres wrote:
Is this the correct behavior? It seems like if I specify the UTC
offset it should be 0, not 16. It seems to be the opposite behavior
from extract epoch.
select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as
defhour, extract ( HOUR FROM
Thanks for your reply, Merlin.
If I understand you correctly, you mean that I should check for waiting
queries in the pg_stat_activity view
while my application is hung in the SOCK_wait_for_ready function call, right?
Yes, I am opening multiple sessions with the ODBC driver and my
On 03/18/2011 07:59 AM, jonathansfl wrote:
My TO_CHAR function is now an hour off thanks to Daylight Savings Time.
The dates are correct (I'm in EST: TZ=-04) but my function now returns TIME
an hour early.
(prior to DST we were TZ=-05).
TIMESTAMPTZ data (v_dt): 2011-03-17 18:21:50-04
FUNCTION
On 18.3.2011 16:42, Davenport, Julie wrote:
This helped, is now down from 14.9 min to 10.9 min to run the entire script.
Thanks.
Still, it's way slower than on 8.0 :-(
regards
Tomas
I was setting TIME ZONE to 'EST' in my formatting function, and I've now
learned that EST is NOT the same as 'America/New_York', as EST is not
DST-sensitive.
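A minimal illustration of the difference, using the corrected timestamp from this thread:

```sql
-- 'EST' is a fixed UTC-5 abbreviation; 'America/New_York' follows DST
SELECT timestamptz '2011-03-17 10:21:50-04' AT TIME ZONE 'EST';
-- 2011-03-17 09:21:50  (always UTC-5)
SELECT timestamptz '2011-03-17 10:21:50-04' AT TIME ZONE 'America/New_York';
-- 2011-03-17 10:21:50  (EDT, UTC-4, in effect on this date)
```

Using the zone name rather than the abbreviation lets PostgreSQL apply the correct offset on either side of a DST transition.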
I mistyped, the 2011-03-17 18:21:50-04 should have been 2011-03-17
10:21:50-04
Thank you all!!
-Original Message-
From: Steve
On 03/18/2011 10:17 AM, bubba postgres wrote:
Thank you for your thorough reply. It will take some time to digest
your advice, but yes, I am specifically trying to avoid all TZ issues
by using UTC everywhere all the time. My assumption was that Timestamp
without timezone meant UTC, guess not.
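A sketch of the semantics the thread is circling around (assuming a session where TimeZone can be set freely):

```sql
-- a plain timestamp carries no zone; AT TIME ZONE attaches one,
-- converting it into a timestamptz anchored at that instant
SELECT timestamp '2010-01-01 00:00:00' AT TIME ZONE 'utc';
-- a timestamptz is then displayed in the session's TimeZone setting,
-- so extracting fields from it varies with that setting
SET TIME ZONE 'utc';
SELECT extract(hour FROM (timestamp '2010-01-01 00:00:00'
                          AT TIME ZONE 'utc'));  -- 0 in a UTC session
```

In other words, "timestamp without time zone" is not implicitly UTC; it is zone-less until AT TIME ZONE (or the session zone) gives it one.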
On Fri, Mar 18, 2011 at 11:39 AM, tamanna madaan
tamanna.ma...@globallogic.com wrote:
Thanks for your reply Merlin .
If I am getting you correctly, you mean to say that I should check for
waiting queries in pg_stat_activity table
while my application is hung at SOCK_wait_for_ready function
On Fri, Mar 18, 2011 at 10:42 AM, Davenport, Julie jdavenp...@ctcd.edu wrote:
This helped, is now down from 14.9 min to 10.9 min to run the entire script.
Thanks.
can you try disabling nestloop and see what happens? In the session,
before running the query, issue:
set enable_nestloop = off;
We're considering using postgres as a way to host database services for many,
many independent applications. One obvious way to do this is with schemas,
roles, and proper permissions, but that still leaves open the possibility for
some poorly written application to leave open transactions and
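A sketch of a watchdog query for spotting such open transactions, using the pg_stat_activity column names of the 8.4/9.0 era discussed here:

```sql
-- sessions sitting idle inside a transaction, oldest first
SELECT procpid, usename, now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_age DESC;
```

A cron job running this (and optionally killing offenders with pg_terminate_backend) is one common way to contain a badly behaved tenant without a cluster per application.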
On 18/03/2011 19:17, Ben Chobot wrote:
if we're talking an extra 50MB of memory per cluster, that will start to add up.
Consider this: each such cluster will have:
a) its own database files on the drives (WAL, data - increasing IO)
b) its own postgresql processes (many of them) running in
On Mar 18, 2011, at 11:47 AM, Ivan Voras wrote:
On 18/03/2011 19:17, Ben Chobot wrote:
if we're talking an extra 50MB of memory per cluster, that will start to add
up.
Consider this: each such cluster will have:
a) its own database files on the drives (WAL, data - increasing IO)
Oh,
Hi !
Is there a way to use PL/pgSQL's %TYPE copying to get an array of a certain type?
For example, if I have a type sample%TYPE,
how can I declare a variable that is an array of sample%TYPE?
I can't get it to work; is there a way to do it?
Best Regards
Dan S
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Ben Chobot
Sent: Friday, March 18, 2011 3:10 PM
To: Ivan Voras
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] multi-tenant vs. multi-cluster
On Mar 18,
Hi all -
I am trying to install pgAgent on Solaris. When I do
ccmake /opt/postgres/apps/pgAgent-3.0.1-Source, I get the following
error.
CMake Error at cmake/FindWX.cmake:271 (MESSAGE):
The selected wxWidgets configuration (version: 2.6, debug: no, static:
On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto, ON, CA) wrote:
b) its own postgresql processes (many of them) running in memory
I believe this is entirely a function of client connections.
With a single instance, you can use connection pooling to reduce the overall
number of
On Fri, Mar 18, 2011 at 2:20 PM, Dan S strd...@gmail.com wrote:
Hi !
Is there a way to use plpgsql copy type to get an array of a certain type ?
For example if I have a type sample%TYPE
How can I declare a variable that is an array of sample%TYPE
I can't get it to work, is there a way to do
We are excited to announce Surge 2011, the Scalability and Performance
Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses
on case studies that demonstrate successes (and failures) in Web
applications and Internet architectures. This year, we're adding Hack Day on
September
-Original Message-
From: Ben Chobot [mailto:be...@silentmedia.com]
Sent: Friday, March 18, 2011 3:45 PM
To: Nicholson, Brad (Toronto, ON, CA)
Cc: pgsql-general General
Subject: Re: [GENERAL] multi-tenant vs. multi-cluster
On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto,
On Fri, Mar 18, 2011 at 2:44 PM, Ben Chobot be...@silentmedia.com wrote:
On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto, ON, CA) wrote:
b) its own postgresql processes (many of them) running in memory
I believe this is entirely a function of client connections.
With a single
2011/3/18 Dan S strd...@gmail.com:
Hi !
Is there a way to use plpgsql copy type to get an array of a certain type ?
For example if I have a type sample%TYPE
How can I declare a variable that is an array of sample%TYPE
I can't get it to work, is there a way to do it ?
No, it's not supported
On Fri, Mar 18, 2011 at 1:44 PM, Ben Chobot be...@silentmedia.com wrote:
OK, so are there any good ways to keep a bad/clueless user from gumming up a
whole cluster? Something like statement_timeout, but for transactions, seems
like it would be ideal.
Newer versions of postgresql aren't quite
On Friday, March 18, 2011 12:36:07 pm akp geek wrote:
hi all -
I am trying to install the pgagent on solaris. when I do the
ccmake /opt/postgres/apps/pgAgent-3.0.1-Source, I am getting the following
error.
CMake Error at cmake/FindWX.cmake:271 (MESSAGE):
The selected
Thank you for the clues. I am downloading the package and will install it and
update you.
Regards
On Fri, Mar 18, 2011 at 4:45 PM, Adrian Klaver adrian.kla...@gmail.comwrote:
On Friday, March 18, 2011 12:36:07 pm akp geek wrote:
hi all -
I am trying to install the pgagent on
Well, I had hoped to define the type used for this column in only one
place.
Now I need an array of the same type to do some procedural calculations.
I figured the best way was to get an array of this type by copying the
type used to fill the array.
Anyway, I will declare the array the usual
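Since %TYPE arrays aren't supported, the usual fallback is simply to name the element type at the declaration site; a sketch (requires 9.0+ for the DO block, and numeric is a hypothetical stand-in for the column's real type):

```sql
DO $$
DECLARE
    -- arr sample%TYPE[];   -- rejected: %TYPE cannot be combined with []
    arr numeric[];          -- fallback: name the element type directly
BEGIN
    arr := ARRAY[1.0, 2.5];
    RAISE NOTICE 'arr = %', arr;
END;
$$ LANGUAGE plpgsql;
```

The single-point-of-definition goal is then lost, which is exactly the trade-off being lamented above.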
Geoffrey Myers li...@serioustechnology.com writes:
So we are in the process of converting our databases from SQL_ASCII to
UTF8. If a particular row won't import because of the encoding issue we
get an error like:
pg_restore: [archiver (db)] Error from TOC entry 5317; 0 1266711 TABLE
DATA
Francisco Figueiredo Jr. franci...@npgsql.org writes:
My database has encoding set to UTF-8, although my lc_collate is pt.BR.UTF-8;
may this lc setting have caused some trouble?
Hmmm ... actually, it strikes me that this may be a downcasing problem.
PG will try to feed an unquoted identifier
Hi,
I'm playing around with arrays of composite types but don't really get
the intuitive syntax.
Given the test snippet below, why do the following insert attempts fail?
The literal constant would be the most intuitive syntax. The attempt
below also closely follows the documentation, AFAIK:
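In case it helps, the two forms that generally work for an array-of-composite column, shown against a hypothetical type and table (the original test snippet is truncated here):

```sql
CREATE TYPE pair AS (a int, b text);
CREATE TABLE t (ps pair[]);
-- string-literal form: each composite value is itself quoted
INSERT INTO t VALUES ('{"(1,foo)","(2,bar)"}');
-- constructor form: usually easier to get right
INSERT INTO t VALUES (ARRAY[ROW(1,'foo')::pair, ROW(2,'bar')::pair]);
```

Most literal-syntax failures come from missing the inner quoting layer that the string form requires.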
Hm,
I'm using OS X 10.6.6 and I compiled PG myself from source. Is there any
configure option or library I can use to get the correct behavior? Is
there any runtime setting I can make to change this tolower() behavior,
maybe skip the call?
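For context, the downcasing in question is PostgreSQL's case-folding of unquoted identifiers:

```sql
CREATE TABLE "MyTable" (id int);
SELECT * FROM MyTable;    -- error: relation "mytable" does not exist
SELECT * FROM "MyTable";  -- works: quoting preserves the case
```

Unquoted names are folded with a tolower()-style call, which is why a locale whose tolower() misbehaves can break identifier lookup.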
Thanks in advance.
--
Sent from my Android phone