It looks like that worked.
How about large objects (blobs)? That is the reason I use the custom
format, or am I missing a point here?
I guess I could use a lot of switches and only dump the
blobs in a single file?
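A sketch of the kind of command in question (the database name is just a placeholder; flags as in the pg_dump docs of this era):
    pg_dump -Fc -b mydb > mydb.dump        # custom format, large objects included
    pg_restore -d mydb_restored mydb.dump  # restore into another database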
Regards,
Søren,
> On Mon, 07.07.2003, at 00:01, Soeren Laursen wrote:
>
Hi:
I noticed the following error message in my
PostgreSQL log file:
FATAL : s_lock (0x401db020) at lwlock.c Stuck
spinlock. Aborting
My PostgreSQL version:
PostgreSQL 7.2.3 on i686-pc-linux-gnu, compiled by GCC 2.96
Operating System : RedHat 7.1
Thanks in advance,
ludwig lim
Hi,
I have the following lines in my logs:
jui 7 10:40:07 poseidon logger: FATAL: Non-superuser connection
limit exceeded
jui 7 10:40:44 poseidon logger: LOG: pq_recvbuf: unexpected EOF
on client connection
jui 7 10:40:53 poseidon last message repeated 23 times
What does it mean? Parti
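For what it's worth, the first message generally means that non-superuser connections have hit the configured connection ceiling; the usual knob is max_connections in postgresql.conf, and changing it requires a postmaster restart. The value below is illustrative only, not a recommendation:
    # postgresql.conf
    max_connections = 128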
On Mon, 07.07.2003, at 10:12, Soeren Laursen wrote:
> It looks like that worked.
>
> How about large objects (blobs)? That is the reason I use the custom
> format, or am I missing a point here?
>
> I guess I could use a lot of switches and only dump the
> blobs in a single file?
I'm sorry, som
Gaetano Mendola wrote:
<[EMAIL PROTECTED]> wrote:
hi,
Sir, I followed your procedure for the Postgres installation (version 7.1.3)
on Linux 2.4.7-10.
I want to know how to enable syslog and have the log information
stored under the /var/postgreslog directory path.
Not sure about 7.1.3, but in
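For what it's worth, a rough sketch of the setup described in the 7.1/7.2-era documentation (the directory comes from the question above; double-check the parameter names against your exact version):
    # postgresql.conf
    syslog = 2                   # 0 = stderr only, 1 = both, 2 = syslog only
    syslog_facility = 'LOCAL0'
    syslog_ident = 'postgres'

    # /etc/syslog.conf -- route that facility to the requested location,
    # then restart syslogd and the postmaster
    local0.*    /var/postgreslog/postgresql.log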
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have in my logs the following lines :
>
> jui 7 10:40:07 poseidon logger: FATAL: Non-superuser connection
> limit exceeded
> jui 7 10:40:44 poseidon logger: LOG: pq_recvbuf: unexpected EOF
> on client connection
> jui 7 10:40:53 poseidon last message re
Yes - the lower the number, the faster the query *should* run. It's all
a bit heuristic, and the two values at each stage are the estimated
start-up cost and the total (cumulative) cost.
The main difference is the 'index scan' versus 'seq scan' - anything you
are going to do such scans on really should have an index, o
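As a rough illustration (the table, column, and index names below are invented, not taken from this thread):
    -- an index on the column used in the WHERE clause normally lets the
    -- planner pick an index scan instead of a sequential scan
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    VACUUM ANALYZE orders;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;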
Has anybody implemented a generic layer
for querying databases at runtime? I am using DAOs for the data
access layer, but I want to move the database logic down one
more layer of abstraction. I am dealing with different kinds of databases (e.g.
let's say MySQL, MS Acce
I believe Borland's database engine (part of Delphi
and C++ Builder) tried to do this. My experience with it, albeit some years ago
now, was that it was extremely slow.
Regards
Donald Fraser
----- Original Message -----
From: Vinay
To: [EMAIL PROTECTED]
Sent: Monday, July 07,
On Mon, 7 Jul 2003 [EMAIL PROTECTED] wrote:
> Hi,
>
> when I do an EXPLAIN on a certain query, I get this answer:
>
>
> QUERY PLAN
> --
> Aggregate (cost=100017927.48..100
Thanks for your answer!
Yes, I set enable_seqscan = off!
Thanks!
On 7 Jul 2003 at 8:46, Stephan Szabo wrote:
>
> On Mon, 7 Jul 2003 [EMAIL PROTECTED] wrote:
>
> > Hi,
> >
> > when I do an EXPLAIN on a certain query, I get this answer:
> >
> >
> > QU
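For context: enable_seqscan = off does not actually forbid sequential scans, it just adds a very large constant (100,000,000) to their estimated cost, which is why the figures above start at 100017927. Something like the following shows the difference (the table name is a placeholder):
    SET enable_seqscan = off;
    EXPLAIN SELECT count(*) FROM some_table;   -- cost inflated by the 1e8 penalty
    SET enable_seqscan = on;
    EXPLAIN SELECT count(*) FROM some_table;   -- the planner's real estimate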
Hi:
Does somebody know of a site in Mexico or the USA that offers a web hosting
service based on PostgreSQL?
Thanks in advance!
Best regards, Joselo
ISC Jose Luis Orduña Centeno
Tecnologías de Información
CIATEQ A.C. Centro de Tecnología Avanza
Hi-
I'm getting the following error message:
pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
166)
Here are the particulars:
-I'm running this command: "pg_dump -Ft prod > prod.dump.tar" (The database
is named prod)
-The dump gets about 1/4 of the way through, and the
On Mon, 7 Jul 2003, Nick Fankhauser wrote:
> -There is plenty of disk space available.
Does it stop at a filesize limit imposed by the OS or filesystem, such
as 2.0GB as commonly found on linux, or NFS?
Sam
Which is the best tool for exporting data from
MSSQL to PostgreSQL?
I tried pgAdmin II, but it's slow.
I tried going directly via ODBC from MSSQL, but that doesn't
work.
:-) Sidar Lopez Cruz - Cero Riesgo,
S.A.
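One possible route, sketched with placeholder server, database, table, and file names: export each table to a tab-delimited text file with bcp (the bulk-copy tool that ships with MSSQL), then load it with psql's \copy:
    bcp proddb..customers out customers.txt -c -S sqlserver -U sa -P secret
    # the target table must already exist in PostgreSQL with matching columns
    psql -d newdb -c "\copy customers from 'customers.txt'"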
On Fri, 4 Jul 2003, Daniel Seichter wrote:
> Hello Sam and Scott
> > We'll be looking at more performance boosts once the system is mature.
> > We already have the DB on a dedicated drive, on a dedicated controller.
> > Moving to a RAID 1 config with another 120GB drive would be good. The DB
> > o
I need to grant access to all tables for all users on a particular
database. I've tried:
GRANT ALL ON databasename to public;
But it complained that the database (relation) does not exist. Do I have to
grant on each table in a separate statement? I'm guessing not.
Naomi
Hi admins,
We are running production PostgreSQL (7.2.3) servers on beefy
systems: 4x Xeon 1.4GHz and 12GB of RAM.
We are trying to determine optimal settings for
shared_buffers and the other memory-related performance
tunables, to make the best use of our 12GB. But we are not
sure what limits w
> Does it stop at a filesize limit imposed by the OS or filesystem, such
> as 2.0GB as commonly found on linux, or NFS?
No, in this case, it is stopping at about 1.3 GB uncompressed. I usually
pipe the pg_dump output into gzip but removed the gzip to simplify the
situation while testing. Under
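Not necessarily the fix for this particular tar error, but one thing worth trying is the custom archive format instead of tar; it compresses by default and restores with pg_restore (the restore target name here is made up):
    pg_dump -Fc prod > prod.dump
    pg_restore -d prod_restored prod.dump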
Naomi Walker wrote:
I need to grant access to all tables for all users on a particular
database. I've tried:
GRANT ALL ON databasename to public;
But it complained that the database (relation) does not exist. Do I have to
grant on each table in a separate statement? I'm guessing not.
The syn
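The 7.x releases have no single statement that grants on every table at once; a common workaround is to generate the per-table GRANTs from pg_class and feed them back into psql. A sketch, with "databasename" as a placeholder:
    # generate one GRANT per user table and feed the statements back to psql
    psql -t -A -c "SELECT 'GRANT ALL ON ' || relname || ' TO PUBLIC;'
                   FROM pg_class WHERE relkind = 'r' AND relname !~ '^pg_'" \
         databasename | psql databasename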
On Mon, 7 Jul 2003, Chris Miles wrote:
> Hi admins,
>
> We are running production PostgreSQL (7.2.3) servers on beefy
> systems: 4x Xeon 1.4GHz; and 12GB of RAM.
>
> We are trying to determine optimal settings for
> shared_buffers and the other memory-related performance
> tunables, to make the
Dear Admin,
I have the same problem as Mr. Sidar Lopez Cruz, but I
want to migrate my data from Microsoft Excel to PostgreSQL. Is there any
way to do the migration?
Best regards,
-devi munandar
----- Original Message -----
From: Sidar Lopez Cruz
To: [EMAIL PROTECTED]
Sent:
Hi,
I've had to do this several times.
So far I've found no simple answer/way.
This is how I do it:
a) Export the Excel file to a tab-separated file.
b) Import the complete csv/text file into a table using COPY (e.g. into a
temp_table).
c) SELECT from that temp table into new tables to build the new
database.
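A minimal sketch of steps (b) and (c), with made-up table, column, and file names (COPY reads tab-separated input by default, and the server must be able to read the file path):
    CREATE TABLE temp_table (name text, amount numeric, entry_date date);
    COPY temp_table FROM '/tmp/export.txt';
    INSERT INTO real_table (name, amount, entry_date)
        SELECT name, amount, entry_date FROM temp_table;
    DROP TABLE temp_table;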
On Mon, Jul 07, 2003 at 04:01:12PM -0600, scott.marlowe wrote:
>
> This increase in buffers isn't free, since it will now cost Postgresql the
> overhead of managing said buffers, plus they're in unix shared memory,
> which isn't all that fast compared to kernel-level cache.
Also, I learned thr
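As a purely illustrative starting point (7.2-era parameter names and units; these are not tuned recommendations for that particular box):
    # postgresql.conf
    shared_buffers = 32768          # 8KB pages, so 32768 = 256MB
    sort_mem = 8192                 # per-sort memory, in KB
    effective_cache_size = 786432   # 8KB pages, ~6GB of expected kernel cache

    # the kernel's shared memory ceiling usually has to be raised to match,
    # e.g. on Linux: sysctl -w kernel.shmmax=300000000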
Hi,
When I do an EXPLAIN on a certain query, I get this answer:
QUERY PLAN
--
Aggregate (cost=100017927.48..100017927.48 rows=1 width=8)
-> Seq Scan on stats_daily_20