Hi everybody,
I want to enter a .CSV file using the COPY command and plpgsql.
It enters latitude, longitude and some data. In the CSV data there is no
field (such as "user_name" or current_time) that allows distinguishing future
queries for different users (ex: select x,y from table where user_name=z;
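One common approach (a sketch only; the staging and target table names here are hypothetical) is to COPY into a staging table that matches the file layout, then insert into the real table while adding the distinguishing columns:

```sql
-- Hypothetical staging table matching the CSV layout
CREATE TEMP TABLE staging (lat numeric, lon numeric, data text);

-- Load the raw CSV (the file has no user_name column)
COPY staging FROM '/path/to/file.csv' WITH CSV;

-- Copy into the real table, tagging each row with the user and load time
INSERT INTO points (user_name, entered_at, lat, lon, data)
SELECT 'some_user', current_timestamp, lat, lon, data
FROM staging;
```

A plpgsql function can wrap these statements and take the user name as a parameter.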
A. Kretschmer wrote:
> Again: an index can't help! Because of MVCC: 'select count(*)' without
> WHERE-condition forces an seq. table-scan.
That has very little to do with MVCC.
If I understand correctly, MVCC is about the availability of records in
the current transaction. For that it makes no difference whether a seq
scan or an index scan is performed.
Mike Charnoky wrote:
> With respect to the ALTER TABLE SET STATISTICS... how do I determine a
> good value to use? This wasn't really clear in the pg docs. Also, do I
> need to run ANALYZE on the table after I change the statistics?
>
> Here are the EXPLAINs from the queries:
>
> db=# explain s
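For reference, the statements in question look like this (a sketch; 100 is just a commonly tried target value, and the table/column names are taken from later in the thread):

```sql
-- Raise the statistics target for one column (the 8.x default is 10)
ALTER TABLE events ALTER COLUMN evtime SET STATISTICS 100;

-- Re-analyze so the planner sees the new, finer-grained histogram;
-- without this the changed target has no effect
ANALYZE events;
```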
rihad wrote:
> Richard Broersma Jr wrote:
> UPDATE Foo foo
> SET ...
> FROM LEFT JOIN Bar bar USING(common_field)
> WHERE blah='blah' AND bar.common_field IS NULL;
>
> ERROR: syntax error at or near "JOIN"
>
>
> I know I'm misusing UPDATE ... FROM because I don't really want Bar's
> values to g
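In PostgreSQL's UPDATE ... FROM syntax the target table is not repeated in the FROM clause, so a LEFT JOIN back to the target isn't possible directly. The anti-join ("rows with no match in Bar") can be phrased with NOT EXISTS instead (a sketch using the names from the fragment above; some_col is a hypothetical column):

```sql
UPDATE foo
SET    some_col = 'new value'
WHERE  blah = 'blah'
AND    NOT EXISTS (SELECT 1
                   FROM   bar
                   WHERE  bar.common_field = foo.common_field);
```

This avoids pulling any of Bar's values into the SET list, which seems to be the intent.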
On Monday 01 October 2007 05:20:52 pere roca wrote:
> Hi everybody,
> I want to enter a .CSV file using COPY comand and plpgsql. It enters
> lat,lon and some data. In the CSV data there is no field (such as
> "user_name" or current_time) that allow distinguish future queries for
> different use
Why is Fun_ABC1 created while Fun_ABC12 raises the following error when
run through psql? (I could create both functions from the PgAdmin III
query tool.)
ERROR: invalid byte sequence for encoding "UTF8": 0x93
HINT: This error can also happen if the byte sequence does not match
the enco
"Alban Hertroys" <[EMAIL PROTECTED]> writes:
> Mike Charnoky wrote:
>> With respect to the ALTER TABLE SET STATISTICS... how do I determine a
>> good value to use? This wasn't really clear in the pg docs. Also, do I
>> need to run ANALYZE on the table after I change the statistics?
>>
>> Here a
Alban Hertroys wrote:
> A. Kretschmer wrote:
>> Again: an index can't help! Because of MVCC: 'select count(*)' without
>> WHERE-condition forces an seq. table-scan.
>
> That has very little to do with MVCC.
>
> [...] For that it makes no difference whether a seq
> scan or an index scan is perform
Anoo Sivadasan Pillai wrote:
Why the Fun_ABC1 is created and Fun_ABC12 is raising the following
error, while run through psql, ( I Could create both the functions from
PgAdmin III query )
ERROR: invalid byte sequence for encoding "UTF8": 0x93
HINT: This error can also happen if the byte seq
On Sun, Sep 30, 2007 at 11:36:23PM +0200, Terry Yapt wrote:
> Hello all,
>
> I would like to change a pg database to another server.
>
> The source environment is: postgresql Windows v.8.2.4 (windows xp
> workstation).
> The target environment is: postgresql Windows v.8.2.5 (windows 2003 Server)
Aside from the messy nomenclature, is anyone able to spot why
the "sum" column from the first query is not returning 7, as
the second query suggests that it should? I know that this is
probably simple, and that it's probably going to jump out at
me the minute I hit "Send", but if I don't hit sen
Anoo Sivadasan Pillai wrote:
> Why the Fun_ABC1 is created and Fun_ABC12 is raising the
> following error, while run through psql, ( I Could create
> both the functions from PgAdmin III query )
>
> ERROR: invalid byte sequence for encoding "UTF8": 0x93
Because the characters you entered into
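Byte 0x93 is the left curly "smart quote" in Windows-1252, typically pasted in from a Windows editor, which is why the server rejects it as UTF8. One way around it (a sketch, assuming the script really is in Windows-1252; the file name is hypothetical):

```sql
-- Tell the server the client's real encoding so it converts the
-- bytes instead of rejecting them, then restore the old setting
SET client_encoding TO 'WIN1252';
\i create_functions.sql
SET client_encoding TO 'UTF8';
```

Alternatively, replace the curly quotes in the script with plain ASCII quotes.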
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
> I don't care if it's part of the SQL standard or not. I don't care if
> oracle does it or not. You're losing mysql converts as they go
> through the tutorial and get to this point. Or worse, they just "grant
> all" because it's easie
Naz Gassiep wrote:
> Aside from the messy nomenclature, is anyone able to spot why
> the "sum" column from the first query is not returning 7, as
> the second query suggests that it should? I know that this is
> probably simple, and that It's probably going to jump out at
> me the minute I hit "S
Albe Laurenz wrote:
> Anoo Sivadasan Pillai wrote:
>> Why the Fun_ABC1 is created and Fun_ABC12 is raising the
>> following error, while run through psql, ( I Could create
>> both the functions from PgAdmin III query )
>>
>> ERROR: invalid byte sequence for encoding "UTF8": 0x93
>
> Because th
Sequential scans of an empty table are going to be faster than an index scan,
so the database uses the sequential scan. Put some data in the tables (some
thousands or millions of records) and then see if it uses an index scan.
Ben
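This is easy to see for yourself (a sketch; t is a throwaway table):

```sql
CREATE TABLE t (id integer PRIMARY KEY);

-- With no rows, the planner will likely pick a seq scan: reading the
-- whole (empty) table is cheaper than touching the index
EXPLAIN SELECT * FROM t WHERE id = 42;

INSERT INTO t SELECT generate_series(1, 100000);
ANALYZE t;

-- With real data and fresh statistics, an index scan is expected
EXPLAIN SELECT * FROM t WHERE id = 42;
```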
"Jan Theodore Galkowski" <[EMAIL PROTECTED]> wrote in message
ne
Albe Laurenz wrote:
> Alban Hertroys wrote:
>> A. Kretschmer wrote:
>>> Again: an index can't help! Because of MVCC: 'select count(*)' without
>>> WHERE-condition forces an seq. table-scan.
>> That has very little to do with MVCC.
>>
>> [...] For that it makes no difference whether a seq
>> scan
> Is this set to be fixed in any particular release?
Depending on what you're doing, this may be overkill, but: I have child
tables that not only need FK constraints, but also triggers and the
functions called by the triggers. So instead of writing this over and over
again, I eventually wrote a si
We are at the moment planning on passing some passwords that are plain
texted in our DB to some encrypted form as now they will be used for
processes that require better security measures.
We started looking at md5() but found that it's easy to crack and one of
the systems uses CHAP authentica
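If the contrib modules are an option, pgcrypto's crypt() with a random salt is a better fit for passwords than a bare md5() call (a sketch; the users table and column names are hypothetical, and note that CHAP needs the plaintext, so hashed storage may conflict with it):

```sql
-- Requires the pgcrypto contrib module installed in the database.
-- One-time migration: replace plaintext with salted Blowfish hashes
UPDATE users SET password = crypt(password, gen_salt('bf'));

-- Verify a login attempt by re-hashing with the stored value as salt
SELECT 1 FROM users
WHERE  username = 'alice'
AND    password = crypt('entered-password', password);
```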
- one table for the years' names/ids/etc. (1970, 1; 1971, 2; 1973, 3;)
If you _do_ need this table (because you want to constrain your
statistical data to only contain a specific set of years, or
because you
need a quick list of available years to select from): Make the year
pri
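If the lookup table is kept, the year value itself can serve as the key, so the statistics rows reference it directly with no surrogate id (a sketch with hypothetical table and column names):

```sql
-- The year itself is the key; no separate id column needed
CREATE TABLE years (
    year integer PRIMARY KEY
);

-- Statistical data constrained to the known set of years
CREATE TABLE statistics (
    country text    NOT NULL,
    year    integer NOT NULL REFERENCES years (year),
    value   numeric,
    PRIMARY KEY (country, year)
);
```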
Hi,
\d my.table is showing me the primary key as:
"unitstat_pkey" PRIMARY KEY, btree (id)
But looking into the table structure produced by
pg_dump -s -n my -t table db
I'm getting the primary key shown as:
ADD CONSTRAINT unistat_pkey PRIMARY KEY (id);
That has been the name of it, but I a
BTW, You didn't actually use type text for your year column, did
you? No
quotes needed then. Otherwise you'd have to make sure your year values
are all the same length or sorting gets... interesting.
Yep, my comment just before concerns especially this paragraph, I
guess. With not only ye
I find it far easier to maintain normalized tables that produce
non-normalized ones (for things like data warehousing) than it is to
maintain non-normalized tables and try to produce normalized data
from that.
Ok, I do understand that.
So, instead of the earlier mentioned database design, I
Martin Marques wrote:
We are at the moment planning on passing some passwords that are plain
texted in our DB to some encrypted form as now they will be used for
processes that require better security measures.
We started looking at md5() but found that it's easy to crack and one of
the syste
Stefan Schwarzer wrote:
>
>
>> BTW, You didn't actually use type text for your year column, did you? No
>> quotes needed then. Otherwise you'd have to make sure your year values
>> are all the same length or sorting gets... interesting.
>
> Yep, my comment just before concerns especially this pa
I altered the table in question, with "set statistics 100" on the
timestamp column, then ran analyze. This seemed to help somewhat. Now,
queries don't seem to hang, but it still takes a long time to do the count:
* "where evtime between '2007-09-26' and '2007-09-27'"
took 37 minutes to run (r
An entirely different question is whether it is a good idea to write a
range as a value that the database cannot interpret correctly
(referring
to the '1970-75' notation). You cannot group records by value this way
if you need to (for example) combine data from '1970' with data from
'1970-75'
"Mike Charnoky" <[EMAIL PROTECTED]> writes:
> I altered the table in question, with "set statistics 100" on the
> timestamp column, then ran analyze. This seemed to help somewhat. Now,
> queries don't seem to hang, but it still takes a long time to do the count:
> * "where evtime between '2007-
On 9/27/07, Jan Theodore Galkowski <[EMAIL PROTECTED]> wrote:
> I fear this has been asked many times about PostgreSQL, and I have read
> the docs about how indexes are supposed to be defined and used, but I
> don't understand why the engine and optimizer is doing what it does in
> the simplest of
Stefan Schwarzer wrote:
>
>> An entirely different question is whether it is a good idea to write a
>> range as a value that the database cannot interpret correctly (referring
>> to the '1970-75' notation). You cannot group records by value this way
>> if you need to (for example) combine data fro
Hi,
I noticed a small bug/problem when restoring a database that uses inheritance.
Let's say you have a table B that inherits from table A.
Current state:
Table B has a primary key with sequence b_seq. Table A also has a
primary key with sequence a_seq.
Now we create a backup and restore the da
"Sebastjan Trepca" <[EMAIL PROTECTED]> writes:
> Current state:
> Table B has a primary key with sequence b_seq. Table A also has a
> primary key with sequence a_seq.
In view of the fact that primary keys aren't inherited, and do not
"have sequences", this description is uselessly imprecise. Ple
Webb Sprague wrote:
> > > Is it
> > > possible to have FK that spans into child tables?
> >
> > This is a well known (and documented, see [1]) deficiency. It's due to
> > the current implementation of indices, which are bound to exactly one
> > table, meaning they do return a position within
Gregory Stark wrote:
"Nico Sabbi" <[EMAIL PROTECTED]> writes:
nb1=# select r.a from (select row(tab1.*)::tab1 as r from tab1)x;
ERROR: missing FROM-clause entry for table "r"
LINE 1: select r.a from (select row(tab1.*)::tab1 as r from tab1)x;
^
I tried many variations (in
This is strange... count(*) operations over a period of one day's worth
of data now take either ~1-2 minutes or ~40 minutes to run. It seems that the
first time the data is queried it takes about 40 minutes. If I try the
query again, it finishes in 1-2 minutes!
Again, nothing else is happening on this
I don't care if it's part of the SQL standard or not. I don't care if
oracle does it or not. You're losing mysql converts as they go
through the tutorial and get to this point. Or worse, they just "grant
all" because it's easier, thus causing security holes. User
friendliness matters.
There's l
Hi All,
I want to implement the data cube operator in PostgreSQL. I searched a few
forums and found that I can only have an interface to PostgreSQL using the
EFEU package, which allows basic cube operations.
Is there any other way of extending postgresql to include cubes?
thanks
Gowrishankar
---
Hi,
> /var/run/ might be on a temporary file system. So you need to adjust
> your init script to create that directory if it doesn't exist.
That is what I tried now and it works for now.
I never had installed the Debian's PostgreSQL packages and I once
manually installed those init.d-script. As
Is it possible to somehow query the system catalog to find out which
clusters/databases have a certain schema?
When running the query:
SELECT * FROM pg_catalog.pg_namespace WHERE nspname = 'myschema';
It always only finds data for the current session, not all clusters,
even when connected as pos
I am now importing the dump file of wikipedia into my postgresql using
maintains/importDump.php. It fails on 'ERROR: invalid byte sequence
for encoding UTF-8'. Is there any way to let pgsql just ignore the
invalid characters (I mean, drop the invalid ones), so that the
script will keep going wit
Hello,
I need to import between 100 millions to one billion records in a
table. Each record is composed of two char(16) fields. Input format
is a huge csv file. I am running on a linux box with 4gb of ram.
First I create the table. Second I 'copy from' the csv file. Third I
create the index on the
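That load-then-index order is the right one: building the index once over the loaded data is much cheaper than maintaining it row by row during COPY. A sketch (table, file path, and memory value are hypothetical; tune the memory setting to the machine's 4 GB of RAM):

```sql
CREATE TABLE pairs (a char(16), b char(16));

-- Bulk load with no indexes in place
COPY pairs FROM '/data/huge.csv' WITH CSV;

-- Give the index build generous sort memory for this session only
SET maintenance_work_mem = '512MB';
CREATE INDEX pairs_a_idx ON pairs (a);
```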
Thanks for your answers guys. I've got a cold right now and my brain
is mush, so I can't comment intelligently on your suggestions just
yet. I just wanted to express my thanks for your time.
Jeff, one book you might want to look at is Joe Celko's Trees and
Hierarchies in SQL for Smarties.
http://
Hi,
Are there any limitations on the number of child tables that can be used
in a partitioned table?
I am currently having weekly partitioned tables (using partitioned
view in SQL Server) that I kept for 2 years.
In total, there will be 52 * 2 = 104 tables exist at one time in the
partition.
I am mig
Hi!
I have this problem that I'm not sure if my stored data has the correct
encoding. When I view the data from a postgres console I just see the
characters, and depending on my console encoding it looks different.
What I really want to see is the hexadecimal or octal value of the bytes of
the ret
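One way to inspect the raw bytes (a sketch; convert_to() assumes a server version that provides it, and mycol/mytable are hypothetical names):

```sql
-- Show the stored value as hex digits, independent of console encoding
SELECT encode(convert_to(mycol, 'UTF8'), 'hex') FROM mytable;
```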
On Monday 01 October 2007, Mike Charnoky <[EMAIL PROTECTED]> wrote:
> This is strange... count(*) operations over a period of one day's worth
> of data now take ~1-2 minutes to run or ~40 minutes. It seems that the
> first time the data is queried it takes about 40 minutes. If I try the
> query a
On 10/1/07, pere roca ristol <[EMAIL PROTECTED]> wrote:
> Hi everybody,
> I want to enter a .CSV file using COPY comand and plpgsql.
> It enters latitude,longitude and some data. In the CSV data there is no
> field (such as "user_name" or current_time) that allow distinguish future
> queries
I'm going to move from v7.4.13 to v8.2.0 (suse-64). I have 2
pre-existing DBs. Do I need to "convert" or port them to v8 in any way
after I start up with a v8 postmaster?
Thanks
-dave
In response to Mike Charnoky <[EMAIL PROTECTED]>:
> This is strange... count(*) operations over a period of one day's worth
> of data now take ~1-2 minutes to run or ~40 minutes. It seems that the
> first time the data is queried it takes about 40 minutes. If I try the
> query again, it finishes
Mike Charnoky wrote:
> This is strange... count(*) operations over a period of one day's worth
> of data now take ~1-2 minutes to run or ~40 minutes. It seems that the
> first time the data is queried it takes about 40 minutes. If I try the
> query again, it finishes in 1-2 minutes!
This is just
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Goboxe
> Sent: Monday, October 01, 2007 2:18 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Partitioned table limitation
>
> Hi,
>
> Are there any limitations on number of c
On Monday 01 October 2007, "Gauthier, Dave" <[EMAIL PROTECTED]> wrote:
> I'm going to move from v7.4.13 to v8.2.0 (suse-64). I have 2
> pre-existing DBs. Do I need to "convert" or port them to v8 in any way
> after I start up with a v8 postmaster?
>
All major version upgrades require a dump and r
Gauthier, Dave wrote:
> I'm going to move from v7.4.13 to v8.2.0 (suse-64). I have 2
> pre-existing DBs. Do I need to "convert" or port them to v8 in any way
> after I start up with a v8 postmaster?
>
1. v8.2.0 is a mistake, make sure you are runn
"Mike Charnoky" <[EMAIL PROTECTED]> writes:
> Here is the output from EXPLAIN ANALYZE. This is the same query run
> back to back, first time takes 42 minutes, second time takes less than 2
> minutes!
That doesn't really sound strange at all. It sounds like you have a very slow
disk and very lar
On Oct 1, 2007, at 12:26 PM, Gauthier, Dave wrote:
I’m going to move from v7.4.13 to v8.2.0 (suse-64). I have 2 pre-
existing DBs. Do I need to “convert” or port them to v8 in any way
after I start up with a v8 postmaster?
Thanks
Moving between major release versions requires that you do
On 10/1/07, Jan Theodore Galkowski <[EMAIL PROTECTED]> wrote:
> Scott,
>
> i didn't think this belonged in the general list, but the example i gave
> for discussion was a toy, for illustration. i could not very well post
> the actual example for many reasons, including proprietary ones and,
> give
Hello,
The PostgreSQL Conference Fall 2007 to date, has been a huge success. Of
course we haven't actually held the conference yet but already we have a
strong line of speakers and sponsors confirmed.
If you haven't registered, now is your chance, ju
thanks for all your useful comments. i will study all of them.
a couple of inline comments below, just for clarification to the group,
marked with asterisks.
On Mon, 1 Oct 2007 13:13:23 -0500, "Scott Marlowe"
<[EMAIL PROTECTED]> said:
> On 10/1/07, Jan Theodore Galkowski <[EMAIL PROTECTED]> wrot
Joshua D. Drake wrote:
The PostgreSQL Conference Fall 2007 to date, has been a huge success. Of
course we haven't actually held the conference yet but already we have a
strong line of speakers and sponsors confirmed.
[ snip ]
I can't attend, but wish I could, is there going to be a web cast? O
Matthew T. O'Connor wrote:
> Joshua D. Drake wrote:
>> The PostgreSQL Conference Fall 2007 to date, has been a huge success. Of
>> course we haven't actually held the conference yet but already we have a
>> strong line of speakers and sponsors confirme
Hi,
On Friday 28 September 2007 10:22:49, [EMAIL PROTECTED] wrote:
> I need to import between 100 millions to one billion records in a
> table. Each record is composed of two char(16) fields. Input format
> is a huge csv file.I am running on a linux box with 4gb of ram.
> First I crea
Hi,
On Wednesday 26 September 2007 20:58:38, Gowrishankar wrote:
> Is there any other way of extending postgresql to include cubes?
Something like the cube contrib?
http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/cube/
Contribs are part of PostgreSQL sources and maintained
Josh Trutwin wrote:
> Is it possible to somehow query the system catalog to find out which
> clusters/databases have a certain schema?
No, unless you connect to each database in turn.
--
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command
Hi ... I am porting a db schema from MySQL to PostgreSQL and having a
problem defining a column-level constraint...
mtime TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
gives me a problem at ^ON UPDATE ...
Any pointer for implementing this functionality in postgresql ??
Tx ..
On 9/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> I don't care if it's part of the SQL standard or not. I don't care if
> oracle does it or not. You're losing mysql converts as they go
> through the tutorial and get to this point.
If that's all it takes for them to switch, seriously, I
On Oct 1, 2007, at 14:54 , Farhan Khan wrote:
Any pointer for implementing this functionality in postgresql ??
Write a trigger that fires on update and replaces the value with
CURRENT_TIMESTAMP.
You could probably do something like this with rules as well, but the
trigger method is prob
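A minimal version of that trigger might look like this (a sketch; the mtime column comes from the fragment above, while the table and function names are hypothetical):

```sql
-- Trigger function: overwrite mtime on every update
CREATE OR REPLACE FUNCTION set_mtime() RETURNS trigger AS $$
BEGIN
    NEW.mtime := CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire before each updated row, mirroring MySQL's
-- ON UPDATE CURRENT_TIMESTAMP behavior
CREATE TRIGGER mytable_set_mtime
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_mtime();
```

The DEFAULT CURRENT_TIMESTAMP part of the MySQL declaration works unchanged in PostgreSQL; only the ON UPDATE clause needs the trigger.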
On 10/1/07, Goboxe <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Are there any limitations on number of child tables that can be use
> in
> partitioned table?
>
>
> I am currently having weekly partitioned tables (using partitioned
> view in SQL Server) that I kept for 2 years.
> In total, there will be 52
On 01/10/2007, Martin Marques <[EMAIL PROTECTED]> wrote:
>
> 1) Is MD5's weakness true?
Yes, but not really for use in password functionality. You are
very unlikely to repeat a password but in any case you will have the
user ID to make it unique.
> 2) Is there any sha1() functions in Postgr
Magnus Hagander wrote:
On Sun, Sep 30, 2007 at 11:36:23PM +0200, Terry Yapt wrote:
First of all, I think this method is admissible. Isn't it?
It is.
Glad to read it :-)
And second question: I think my problem is that some rights are wrong
after copying the data folder. What a
I'm hoping that someone on the list can help me understand an apparent
discrepancy in the performance information that I'm collecting on a
particularly troublesome query.
The configuration: pg-8.2.4 on RHEL4. log_min_duration_statement = 1m.
In my syslog output, I see entries indicating that t
"Jason L. Buberel" <[EMAIL PROTECTED]> writes:
> In my syslog output, I see entries indicating that the
> JDBC-driver-originated query on a table named 'city_summary' are taking
> upwards of 300 seconds:
> Oct 1 18:27:47 srv3 postgres-8.2[1625]: [12-1]
> LOG: duration: 307077.037 ms execute S
Tom-right-as-usual:
Yep - you were right: the query plan for the prepared statement (a
sequential scan of the table) differed a bit from the directly-executed
version :)
For reference, when using JasperReports .jrxml files as the basis for
the query, I only had to do the following to