On Tue, 2007-09-04 at 05:15 +0100, Gregory Stark wrote:
> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
>
> > On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> >> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
Hi,
I'm running out of space on one of my partitions and I still have not
gotten all the data loaded yet. I've read that one can symlink the
pg_xlog directory to another drive. I'm wondering if I can do the
same for specific tables as well.
Thanks.
I've already done a pg_dump of the entire
I just browsed to my $PGDATA location and noticed that there are some
table files whose names end in .1
# ls -lahS | egrep '(24694|24702|24926)'
-rw------- 1 postgres postgres 1.0G Sep 3 22:56 24694
-rw------- 1 postgres postgres 1.0G Sep 3 22:52 24702
-rw------- 1 postgres postgres 1.0G Sep 3 22:5
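The move-then-symlink mechanics look like this. A minimal sketch using throwaway temp directories rather than a live cluster; against a real installation you must stop the server first and operate on $PGDATA/pg_xlog, and all paths here are stand-ins:

```shell
DATADIR=$(mktemp -d)      # stand-in for $PGDATA
NEWDISK=$(mktemp -d)      # stand-in for the roomier partition

mkdir "$DATADIR/pg_xlog"
touch "$DATADIR/pg_xlog/000000010000000000000001"   # fake WAL segment

mv "$DATADIR/pg_xlog" "$NEWDISK/pg_xlog"            # move the directory
ln -s "$NEWDISK/pg_xlog" "$DATADIR/pg_xlog"         # leave a symlink behind

ls -l "$DATADIR/pg_xlog/"                            # files still reachable via the link
```

For moving specific tables rather than WAL, the supported route is tablespaces (CREATE TABLESPACE ... LOCATION '...' and then ALTER TABLE ... SET TABLESPACE, available since 8.0), which avoids hand-made symlinks under $PGDATA entirely.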
On Tue, 2007-09-04 at 10:06 +0800, Ow Mun Heng wrote:
> On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> > "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> > >-> Bitmap Heap Scan on drv (cost=30.44..4414.39
> > > rows=1291 wid
On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> >
> > How can I persuade PG to use the index w/o resorting to setting
> > enable_seqscan = false
>
> The usual knob to fiddle with is random_page_cost. If yo
Same query, executed twice, once using seqscan enabled and the other
with it disabled. Difference is nearly night and day.
How can I persuade PG to use the index w/o resorting to setting
enable_seqscan = false?
(actually, I don't know what the pros or cons are - I read posts from the
archives as far back as
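The experiment suggested above can be run per session, so nothing is changed permanently. A sketch only: the table name is taken from the EXPLAIN fragment earlier and the predicate is illustrative:

```sql
-- Lower random_page_cost for this session and compare plans.
-- 4 is the stock default; lower values make index scans look cheaper.
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM drv WHERE code = 'XYZ';  -- illustrative query

-- Undo when done experimenting:
RESET random_page_cost;
```

SET enable_seqscan = off is best kept as a diagnostic to see what the index plan would cost, not as a production setting.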
On Mon, 2007-08-27 at 18:41 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > Does the psql's \copy command run as a transaction?
>
> Certainly.
>
> > I think it does, but
> > somehow when I cancel (in a script) a running import, "
Hi all,
I'm sure some of you guys use Perl DBI to access PG. Need some
pointers. (PG specific, I guess)
1. Is it possible to execute queries to PG using multiple statements?
eg:
my $sth = $dbh->prepare("A");
$sth->bind_param(1, $A);
$sth->execute();

$sth = $dbh->prepare("BB");
$sth->bind_param(1, $B);
$sth->execute();

$sth = $dbh->prepare("CC");
$sth->bind_param(1, $C);
$sth->execute();
right
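The same prepare/bind/execute sequence, sketched in Python's DB-API with sqlite3 standing in for a PG driver (the table and values are made up; the point is only that several parameterized statements run back to back on one connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")

# Three separate parameterized statements, executed one after another,
# mirroring the prepare()/bind_param()/execute() sequence above.
for stmt, param in [
    ("INSERT INTO t (name) VALUES (?)", "A"),
    ("INSERT INTO t (name) VALUES (?)", "B"),
    ("INSERT INTO t (name) VALUES (?)", "C"),
]:
    conn.execute(stmt, (param,))

rows = [r[0] for r in conn.execute("SELECT name FROM t ORDER BY name")]
print(rows)  # ['A', 'B', 'C']
```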
On Thu, 2007-08-30 at 09:14 +0200, A. Kretschmer wrote:
> am Thu, dem 30.08.2007, um 14:59:06 +0800 mailte Ow Mun Heng folgendes:
> > Is there a way to do a dump of a database using a select statement?
>
> A complete database or just a simple table?
a simple table.. couple millio
Is there a way to do a dump of a database using a select statement?
eg: \copy trd to 'file' select * from table limit 10
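For what it's worth, psql's \copy accepts a full query from 8.2 on; before that a temp table gets the same result. A sketch (file path and CSV option are illustrative):

```sql
-- PostgreSQL 8.2+: \copy accepts a SELECT.
\copy (SELECT * FROM trd LIMIT 10) TO '/tmp/trd_sample.csv' WITH CSV

-- Pre-8.2: materialize the query first, then copy the temp table.
CREATE TEMP TABLE trd_sample AS SELECT * FROM trd LIMIT 10;
\copy trd_sample TO '/tmp/trd_sample.csv' WITH CSV
```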
---(end of broadcast)---
TIP 6: explain analyze is your friend
On Tue, 2007-08-28 at 08:19 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > Continuing with my efforts to get similar functionality as mysql's
> > mysqlimport --replace I want to ask for the list's opinion on which is
> > better
>
> I would sugges
Continuing with my efforts to get functionality similar to mysql's
mysqlimport --replace, I want to ask for the list's opinion on which is
better.
What currently happens:
1. select from mssql (into CSV via Perl DBI)
2. psql \copy into PG
3. pg chokes on duplicate pkeys as there's no --replace o
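One common way to emulate --replace on the PG side is a staging table plus delete-then-insert in a single transaction. A sketch only: the table and column names below are made up, and it assumes a single-column primary key for brevity:

```sql
BEGIN;
-- Load the CSV into a scratch copy of the target table.
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);
\copy staging FROM 'data.csv' WITH CSV
-- Drop the rows about to be replaced, then insert everything.
DELETE FROM target USING staging WHERE target.pkey_col = staging.pkey_col;
INSERT INTO target SELECT * FROM staging;
COMMIT;
```

Because it all happens in one transaction, a failure partway leaves the target table untouched.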
On Mon, 2007-08-27 at 21:03 -0500, Erik Jones wrote:
> On Aug 27, 2007, at 8:50 PM, Ow Mun Heng wrote:
>
> > Is it possible to name a (composite) primary key rather
> > than have pg default to table_name_pkey??
> >
> > I tried some
Is it possible to name a (composite) primary key rather than
have pg default to table_name_pkey??
I tried something like
primary key pkey_table_short_form_name (a,b,c)
but it didn't work.
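The missing piece is the CONSTRAINT keyword; the standard table-constraint syntax is:

```sql
CREATE TABLE short_form (
    a integer,
    b integer,
    c integer,
    CONSTRAINT pkey_table_short_form_name PRIMARY KEY (a, b, c)
);
```

The index backing the primary key then takes the constraint's name instead of the table_name_pkey default.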
Hi,
Does the psql's \copy command run as a transaction? I think it does, but
somehow when I cancel (in a script) a running import, "seems" (I can't
seem to duplicate it on the cli though) like a few lines/rows gets
inserted anyway..
On Mon, 2007-08-27 at 11:27 +0200, Dimitri Fontaine wrote:
> We've just made some tests here with 2.2.1 and as this release contains the
> missing files, it works fine without any installation.
Yep.. I can confirm that it works.. I am using the csv example.
Goal : similar functionality much lik
On Mon, 2007-08-27 at 12:22 +0200, Dimitri Fontaine wrote:
> Le lundi 27 août 2007, Ow Mun Heng a écrit :
> > I'm trying to see if pgloader will make my work easier for bulkloads.
> > I'm testing it out and I'm stuck, basically because it can't find the
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out and I'm stuck, basically because it can't find the
module TextReader or CSVreader.
Googling doesn't help as there seems to be no reference to a module
named textreader or csvreader.
I'm on Python 2.4.4
Tha
On Mon, 2007-08-27 at 11:55 +0800, Ow Mun Heng wrote:
> I just ran into trouble with this. This rule seems to work when I do
> simple inserts, but as what I will be doing will be doing \copy
> bulkloads, it will balk and fail.
> Now would be a good idea to teach me how to skin the cat
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
> On 8/14/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > I'm seeing an obstacle in my aim to migrate from mysql to PG mainly from
> > the manner in which PG handles duplicate entries either from primary
On Thu, 2007-08-23 at 16:42 +0800, Phoenix Kiula wrote:
> On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> >
> > Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
> > for quite some time now. We had it hitting a pg7.2 db back in the
> > day, when hip kids rode around in r
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
> On 8/14/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > In MySql, I was using mysqlimport --replace which essentially provided
> > the means to load data into the DB, while at the same time, would
> >
I'm seeing an obstacle in my aim to migrate from mysql to PG mainly from
the manner in which PG handles duplicate entries either from primary
keys or unique entries.
Data is taken from perl DBI into (right now) CSV based files to be used
via psql's \copy command to insert into the table.
In MySql
Hi,
Writing a script to pull data from SQL server into a flat-file (or just
piped in directly to PG using Perl DBI)
Just wondering if the copy command is able to do a replace if there is
existing data in the DB already. (This is usually in the case of updates
to specific rows and there be a time
On Wed, 2007-07-25 at 19:32 +0300, Devrim GÜNDÜZ wrote:
> Hi,
>
> On Sat, 2007-07-21 at 15:57 -0700, Steve Wampler wrote:
> > I need the Java and Python interfaces supplied with
> > (from 8.1.9):
> >
> >postgresql-jdbc-8.1.4-1.centos.1
> >postgresql-python-8.1.9-1.el4s1.1
>
> The actual
On Fri, 2007-08-03 at 07:55 -0600, Josh Tolley wrote:
> On 8/3/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Can anyone shed some light on this. I just would like to know if queries
> > for raw data (not aggregates) is expected to take a long time.
> > Running
Can anyone shed some light on this? I just would like to know whether
queries for raw data (not aggregates) are expected to take a long time.
Running times are between 30 minutes and 2 hours for large dataset pulls.
Involves lots of joins on very large tables (min 1 million rows each
table, 300 columns per table).
New to PG, just wondering if there's any way to say: I want a full
backup of DB-Sample, and can I just tar up the directory containing that
tablespace, copy it to another PG server and then re-attach it?
Thanks
On Mon, 2007-07-23 at 05:34 -0400, Chuck Payne wrote:
>
> Hey,
>
> I have spend the last several days looking for a website or how to
> that would show me how to call postgresql in bash script. I know that
> in mysql I can do like this
>
> for i in `cat myfile.txt` ; do mysql -uxxx -p -Ass
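A rough psql equivalent of that mysql loop: -A gives unaligned output and -t suppresses headers and footers, much like mysql's -Ass. The connection options and the query are placeholders:

```shell
for i in $(cat myfile.txt); do
    # -A: unaligned output, -t: tuples only (no headers/row-count footer)
    psql -U xxx -d mydb -A -t -c "SELECT count(*) FROM sometable WHERE id = '$i'"
done
```

Quoting $i inside the SQL string is fine for trusted input files; anything untrusted should go through a parameterized client instead of shell interpolation.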