20/01/2003 12:38:20 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>"Moritz Lennert" <[EMAIL PROTECTED]> writes:
>> One question I asked myself is whether the use of char(2) is the best
>> option.
>
>It may not be, particularly if you are running in a non-English locale.
>Comparisons will invoke the standard library routine strcoll(), which is
>depressingly slow in some locales.
Michiel Lange wrote:
Would the same work for pg_user and pg_group?
See:
http://www.brasileiro.net:8080/postgres/cookbook/view-one-recipe.adp?recipe_id=11378
With these groups:
regression=# select * from pg_group;
 groname | grosysid | grolist
---------+----------+---------
 grp1    |
Would the same work for pg_user and pg_group?
It would be handy at times to easily check whether or not someone is a member
of a group...
and since pg_group stores the user numbers, one might need to do a
few lookups:
would it be hard to put such a thing in a view, or is that not-smart
thinking?
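One way to do it, as a sketch: join pg_user to pg_group through the grolist
array. The view name group_members is made up here, and = ANY (array) needs
7.4 or later; on 7.3 you would have to index into grolist by hand.

-- Hypothetical view: one row per (group, member) pair.
create view group_members as
select g.groname, u.usename
from pg_group g, pg_user u
where u.usesysid = any (g.grolist);

-- Is someuser a member of grp1?
select 1 from group_members
where groname = 'grp1' and usename = 'someuser';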
Guy Fraser wrote:
This is what I want to do:
select attribute,array_list(values,1,sizeof(values)) as value from av_list;
Turn :
attr6 | {val3,val7,val4,val5}
Into :
attr6 | val3
attr6 | val7
attr6 | val4
attr6 | val5
You didn't mention the version of PostgreSQL. If you're using < 7.3, go
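Since the names in the query above (array_list, sizeof) don't exist as
built-ins, two hedged sketches of the expansion. On 8.4 and later, unnest()
does it directly ("values" must be quoted, since VALUES is a reserved word):

select attribute, unnest("values") as value from av_list;

On older releases, a set-returning PL/pgSQL function works (array_lower and
array_upper assume 7.4 or later):

create or replace function array_list(text[]) returns setof text as '
declare
    i integer;
begin
    for i in array_lower($1, 1) .. array_upper($1, 1) loop
        return next $1[i];  -- emit one element per output row
    end loop;
    return;
end;
' language 'plpgsql';

select attribute, array_list("values") as value from av_list;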
Moritz,
> I'm sorry, I didn't realize this, it is certainly closer to what I
> need.
> Next time I'll look better (google brought up references to this
> list, so
> I subscribed here).
Hey, there are 18 active lists now ... we don't expect anyone to get
the right one right off!
-Josh Berkus
---
Moritz Lennert wrote:
Hello,
I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
MB RAM.
Some queries I launch take quite a long time, and I'm wondering whether
this is normal, or whether I can get better performance somehow.
As an example, I have a field which is in char(2)
"Moritz Lennert" <[EMAIL PROTECTED]> writes:
> One question I asked myself is whether the use of char(2) is the best
> option.
It may not be, particularly if you are running in a non-English locale.
Comparisons will invoke the standard library routine strcoll(), which is
depressingly slow in some locales.
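The locale cost shows up in sorting and comparison. On 9.1 and later you can
sidestep strcoll() per query with COLLATE "C" (plain byte-wise comparison);
on the 7.x servers of this thread the only equivalent was initdb --locale=C.
A sketch using the table from this thread:

select type_diplome from rec81 order by type_diplome;               -- goes through strcoll()
select type_diplome from rec81 order by type_diplome collate "C";  -- byte-wise comparison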
> Moritz,
>
> There is a performance mailing list at:
>
> [EMAIL PROTECTED]
>
> --Josh
I'm sorry, I didn't realize this, it is certainly closer to what I need.
Next time I'll look better (google brought up references to this list, so
I subscribed here).
Thanks for the hint,
Moritz
Hi
I am trying to write a function to step through an array and output each value
as a set {list}, I think.
This is what I want to do:
select attribute,array_list(values,1,sizeof(values)) as value from av_list;
Turn :
attr6 | {val3,val7,val4,val5}
Into :
attr6 | val3
attr6 | val7
attr6 |
On Mon, 20 Jan 2003 18:10:24 +0100 (CET), "Moritz Lennert"
<[EMAIL PROTECTED]> wrote:
>I'll try that, although I haven't changed any of the tuples since import
>of the data (this is a static table...)
Then I must have miscalculated something :-( What does VACUUM VERBOSE
ANALYZE say?
>> From wha
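The command being asked for, using the table name from earlier in the thread;
the VERBOSE output reports pages and tuples, and a tuple count far above the
number of live rows points at dead-tuple bloat:

vacuum verbose analyze rec81;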
Moritz,
There is a performance mailing list at:
[EMAIL PROTECTED]
--Josh
> On Mon, 20 Jan 2003 16:42:12 +0100 (CET), "Moritz Lennert"
> <[EMAIL PROTECTED]> wrote:
>>explain analyze select commune_residence, type_diplome from rec81 where
>>type_diplome = '11';
>>NOTICE: QUERY PLAN:
>>
>>Seq Scan on rec81 (cost=0.00..120316.30 rows=177698 width=15) (actual
>>time=23.03.
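The usual first step for a seq scan like this is an index on the filtered
column (the index name below is made up); whether the planner then uses it
depends on selectivity, and at roughly 177,000 of 2.2 million rows it can
legitimately go either way:

create index rec81_type_diplome_idx on rec81 (type_diplome);
analyze rec81;
explain analyze select commune_residence, type_diplome
from rec81 where type_diplome = '11';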
> On Mon, 20 Jan 2003 12:40:34 +0100 (CET), "Moritz Lennert"
> <[EMAIL PROTECTED]> wrote:
>>I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
>>MB RAM.
>>Some queries I launch take quite a long time, and I'm wondering whether
>>this is normal, or whether I can get better performan
On Mon, 20 Jan 2003 12:40:34 +0100 (CET), "Moritz Lennert"
<[EMAIL PROTECTED]> wrote:
>I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
>MB RAM.
>Some queries I launch take quite a long time, and I'm wondering whether
>this is normal, or whether I can get better performance s
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> > Limit (cost=22669.68..22669.68 rows=95 width=372)
> >   -> Sort (cost=22669.68..22669.68 rows=96 width=372)
> >     -> Index Scan using agentresults2_modified_user,
> >        agentresults2_modified_user, agentresults2_modified_user on
> agentresult
Hello,
I have a table with some 2.2 million rows on a Pentium4, 1.8GHz with 512
MB RAM.
Some queries I launch take quite a long time, and I'm wondering whether
this is normal, or whether I can get better performance somehow.
As an example, I have a field which is in char(2), with 23 different
poss
>
> there can be many such tables that have a foreign key id referencing
> the master table test_master column "id". what we want is when some ids become
^^^
It should be profile_master, right?
> redundant we have to merge two ids; we want that throughout the sy
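A minimal sketch of the merge, with made-up names (child_table stands for
each referencing table; 200 is the redundant id, 100 the surviving one):

begin;
update child_table set id = 100 where id = 200;  -- repoint the children
delete from test_master where id = 200;          -- drop the redundant master row
commit;

If the foreign keys were declared ON UPDATE CASCADE, a single update of the
master id would ripple through every child automatically.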
> mv camper.dump20020116 camper_bak/
>
> The error I get is ::
>
> mv: camper.dump20020116: Value too large for defined data type
>
It may not be a PostgreSQL problem. "Value too large for defined data type"
is EOVERFLOW, which usually means the file has grown past 2 GB and your mv
(or the filesystem) was built without large-file support. Check the
documentation or mailing list for your OS/filesystem.
regards,
bhuvaneswaran
---
Pedro Igor wrote:
What is this function?
CREATE OR REPLACE FUNCTION plpgsql_call_handler() RETURNS language_handler AS
'plpgsql_call_handler' LANGUAGE 'c' VOLATILE;
I think it is just the PL/pgSQL call handler, which, depending on your
version, is created after database creation or after "createlang plpgsql ".
Regards,
Tomasz M
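For reference, the complete pair as old-style dumps emitted it; on 9.1 and
later this is simply "create extension plpgsql":

create function plpgsql_call_handler() returns language_handler
    as '$libdir/plpgsql' language c;
create trusted language plpgsql handler plpgsql_call_handler;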