prakash wrote:
Hi All, I am using postgres 8.2. I want to use Write-Ahead Log (WAL)
functionality to take a backup. I know one way to activate WAL archiving is by
updating the postgresql.conf file with archive_command, but how do I activate
it from the server command line? Can we activate it at runtime?
You
On Mon, Nov 10, 2008 at 12:54 PM, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
> I've got a table for which "CLUSTER tablename USING index" makes an order of
> magnitude difference.
>
> Are there ways to determine how "unclustered" this table becomes over time,
> so I can schedule downtime to recluster?
I've got a table for which "CLUSTER tablename USING index" makes an
order of magnitude difference.
Are there ways to determine how "unclustered" this table becomes over
time, so I can schedule downtime to recluster? I could use the pg_stat
tables, but this seems awkward.
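For instance, the correlation column in pg_stats is one rough signal: a value
near +/-1.0 means the table's physical order still tracks that column, while a
value near 0 suggests it is time to re-CLUSTER. A sketch, with the table and
column names made up:
ANALYZE events;   -- refresh statistics before reading pg_stats
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'events' AND attname = 'created_at';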
-Bryce
Mario Splivalo wrote:
> Richard Huxton wrote:
>> Mario Splivalo wrote:
>>> I have this issue:
>>>
>>> postgres=# select E'\xc5\x53\x94\x96\x83\x29';
>>> ERROR: invalid byte sequence for encoding "UTF8": 0xc553
>>
>> I think you want to be using octal escapes. That's text you're
>> generating above
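A sketch of that octal route, assuming the target really is bytea (each hex
byte becomes a \nnn octal escape, so 0xc5 = \305, 0x53 = \123, and so on, with
backslashes doubled inside a plain string literal):
-- same six bytes as the hex string above, written as octal bytea escapes
SELECT '\\305\\123\\224\\226\\203\\051'::bytea;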
Tom Lane wrote:
I'm using the above-mentioned string to store data into a bytea column. I
did a pg_dump of the database on postgres 8.2, and then tried to restore
it on postgres 8.3, and I got this error. The actual line that produces the
error is like this:
INSERT INTO vpn_payins_bitfield (vpn_id,
Mario Splivalo <[EMAIL PROTECTED]> writes:
> I have this issue:
> postgres=# select E'\xc5\x53\x94\x96\x83\x29';
> ERROR: invalid byte sequence for encoding "UTF8": 0xc553
This is expected since the string is not valid as text.
> I'm using the above-mentioned string to store data into a bytea column
What about replacing the table with
SELECT * FROM my_table ORDER BY num
i.e. something like
SELECT cat, LAST(id), LAST(num) FROM (SELECT * FROM my_table ORDER BY num)
AS foo GROUP BY cat;
Hope it works, just guessing it might help :-)
regards
Tomas
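BTW, PostgreSQL's DISTINCT ON extension can do the same group-wise pick
without any custom FIRST/LAST aggregates - a sketch against the same my_table:
-- one row per cat: the row carrying the highest num
SELECT DISTINCT ON (cat) id, cat, num
FROM my_table
ORDER BY cat, num DESC;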
> SELECT cat, MAX(num) FROM my_table GROUP BY cat;
Thanks! That's perfect, because now I don't need the FIRST/LAST
aggregate functions!
Mike
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Fernando Hevia
Sent: Monday, November 10, 2008 10:30 AM
To: Johnson, Michael L.; pgsql-sql@postgresql.org
Subject: RE: [SQL] Subsorting GROUP BY data
Hello All:
I am hoping to get some guidance from the local gurus before I go off and
write this code.
Consider this scenario:
I receive a file containing inventory info; to keep it simple, clothes items.
Within the file there is info for different locations, so an inventory item
can
Richard Huxton <[EMAIL PROTECTED]> writes:
> Mario Splivalo wrote:
>> That's true, but I'd still like to use hexadecimal notation.
You could use decode():
regression=# select decode('c5a4', 'hex');
  decode
----------
 \305\244
(1 row)
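And encode() goes the other way, so the hex text round-trips:
regression=# select encode(decode('c5a4', 'hex'), 'hex');
 encode
--------
 c5a4
(1 row)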
regards, tom lane
Mario Splivalo <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Exactly what version of pg_dump are you using? What I get from pg_dump
>> doesn't look like that. Bytea fields with -D look more like this:
>>
>> INSERT INTO foo (f1) VALUES ('\\305S\\224\\226\\203)');
> Yes, I mistakenly used pg8.
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Johnson,
> Michael L.
> Sent: Monday, November 10, 2008 12:57
> To: pgsql-sql@postgresql.org
> Subject: [SQL] Subsorting GROUP BY data
>
> Given the following table:
>
> ID | Cat | Num
If it is to group the items by the cat field and then select the ID where the
num is the highest in the group,
you could maybe try:
SELECT a.ID, b.Cat, b.Num
FROM my_table a
JOIN (SELECT cat, MAX(num) AS maximo FROM my_table GROUP BY cat) b
ON a.Cat = b.Cat
AND a.Num = b.maximo;
It'll probably give what you want.
Richard Huxton wrote:
> Do you have an index on (id,dt_modified) for manage_followup? Can you
> provide an EXPLAIN ANALYSE for this?
> Hi Richard,
>
> Firstly, thank you very much for your swift reply. To answer your
> question,
> I had not been using an index on dt_modified. I have added it now
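Presumably something along these lines (the index name here is invented, as is
the sample issue id):
CREATE INDEX idx_followup_id_modified
    ON manage_followup (id, dt_modified);
-- then re-check the plan:
EXPLAIN ANALYSE
SELECT * FROM manage_followup
WHERE id = 1234
ORDER BY dt_modified DESC;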
Given the following table:
ID | Cat | Num
---|-----|----
Z  | A   | 0
Y  | A   | 1
X  | A   | 2
W  | B   | 0
V  | B   | 1
U  | B   | 2
T  | C   | 0
S  | C   | 1
R  | C   | 2
I want to do this: Group the items by the cat field. Then select the ID where
the num is the highest in the group.
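Presumably the desired output is then the highest-num row from each group:
ID | Cat | Num
---|-----|----
X  | A   | 2
U  | B   | 2
R  | C   | 2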
On Mon, 10 Nov 2008, at 15:34:10 +0100, Mario Splivalo wrote the following:
> I have a table, like this:
>
> CREATE TABLE t1
> (
> u1 character varying
> )
>
> And some data inside:
>
> INSERT INTO t1 (u1) VALUES ('62d6b434-7dfd-4b3b-b1bf-87f6c20c10dd');
> INSERT INTO t1 (u1) VALUES ('e3fee596-164b-4995-9e0d-7b2a79e83752');
I have a table, like this:
CREATE TABLE t1
(
u1 character varying
)
And some data inside:
INSERT INTO t1 (u1) VALUES ('62d6b434-7dfd-4b3b-b1bf-87f6c20c10dd');
INSERT INTO t1 (u1) VALUES ('e3fee596-164b-4995-9e0d-7b2a79e83752');
INSERT INTO t1 (u1) VALUES ('37a42ec8-9000-44bc-bb06-13b5d4373a45');
I have this issue:
postgres=# select E'\xc5\x53\x94\x96\x83\x29';
ERROR: invalid byte sequence for encoding "UTF8": 0xc553
HINT: This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
postgres=# show client_encoding;
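The decode() route suggested later in the thread sidesteps the text-encoding
check entirely, since the bytes never pass through a text literal:
-- same six bytes, built directly as bytea from their hex form
SELECT decode('c55394968329', 'hex');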
Cheers for your help, guys. Filtering and then joining has substantially
reduced the run time.
Much obliged,
Sebastian
On Mon, Nov 10, 2008 at 12:32 PM, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Sebastian Ritter wrote:
> > Could it have something
> > to do with the fact that it is a subquery
Sebastian Ritter wrote:
> Could it have something
> to do with the fact that it is a subquery and thus the planner can not
> deduce filtering conditions from the outer query against it? My apologies
> if that made no sense.
Could make a difference.
> In summary, what I'm trying to understand is t
Cheers for this Richard. The more I think about it, the more I believe the
join is being made against ALL issues and followups first and then filtered
by my where clause conditions afterwards. This would incur a scan against all
15,000 issues and 95,000 followups. Set theory tells me that I should not
us
Hi Helio,
Sorry about the parenthesis - bad copy/pasting skills! To further discuss
your suggestion: wouldn't adding n_issue = i.id as a where-clause filter cause
the sub-query to become correlated and thus much less efficient? I may be
wrong, or may have misunderstood your suggestion.
Thanks fo
Hi Sebastian,
- First of all, I think there is an open parenthesis missing in the query
V2.
Maybe in the V2 version you could restrict the results in the INNER query a
bit more if you use a restriction clause like "WHERE n_issue = i.id" in
it. It will certainly lower the number of rows returned b
Hi All, I am using postgres 8.2. I want to use Write-Ahead Log (WAL)
functionality to take a backup. I know one way to activate WAL archiving is by
updating the postgresql.conf file with archive_command, but how do I activate
it from the server command line? Can we activate it at runtime?
Thanks in advance.
-
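For what it's worth: archive_command is a reload-time (sighup) setting, so on
8.2 it can be switched on without a restart - set it in postgresql.conf and
reload. A sketch, with the archive path invented:
# postgresql.conf (on 8.2, a non-empty archive_command enables archiving)
archive_command = 'cp "%p" /mnt/server/archivedir/"%f"'
Then reload the configuration without restarting, either from the shell:
pg_ctl reload -D /path/to/data
or from SQL:
SELECT pg_reload_conf();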
Bryce,
- I think that the difference between the two queries has to do with the
way PostgreSQL executes them.
In the first, the DBMS does:
1) Creates a temporary table with "m" x "n" rows, where "m" and
"n" are the numbers of rows in the tables being joined.
2) Takes only
Sebastian Ritter wrote:
> A lot of the reports our technical officers submit to us include a listing
> of all actioned issues for a given day along with the last modified followup
> of each said issue. With the number of rows in our database increasing at a
> high rate, these queries are starting t
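The fix reported later in the thread (filter first, join afterwards) might
look roughly like this - manage_followup, n_issue and dt_modified come from
the thread, while the issues table name and the day filter are invented:
SELECT i.*, f.dt_modified
FROM issues i
JOIN (SELECT DISTINCT ON (n_issue) n_issue, dt_modified
      FROM manage_followup
      WHERE dt_modified >= '2008-11-10'   -- hypothetical "given day" filter
      ORDER BY n_issue, dt_modified DESC) f
  ON f.n_issue = i.id;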
Hi all,
I was hoping to receive some advice on a slow-running query in our business'
Issue Tracking System. To shed some light on the below-mentioned queries,
here is a brief summary of how users interact with the system. The two main
components in the system are Issues and Followups. An Issue i