Hello,
On 03.01.11 00:06, Adrian Klaver wrote:
On Sunday 02 January 2011 2:22:14 pm Thomas Schmidt wrote:
well, I'm new to Postgres and this is my first post on this list :-)
Anyway, I have to batch-import bulk CSV data into a staging database (as
part of an ETL-like process). The data ought to be
Hi,
I need advice about a database structure. I need to capture data from the
web about one specific subject on a few specific websites and insert that data
into a database. I have asked this question here before, but I think I did not
explain it very well.
The problem with this task is that the
Hello,
On 03.01.11 12:11, Andre Lopes wrote:
Hi,
I need advice about a database structure. I need to capture data from the
web about one specific subject on a few specific websites and insert that data
into a database. I have asked this question here before, but I think I did not
explain it very
Andre Lopes wrote on 03.01.2011 12:11:
array(
    'name' => 'Don',
    'age'  => '31'
);
array(
    'name' => 'Peter',
    'age'  => '28',
    'car'  => 'ford',
    'km'   => '2000'
);
In a specific website search I will store only name and age, and
on another website I will store name, age,
I can propose something like this:
website (id int, url varchar);
attr_def (id int, name varchar);
attr_val (id int, def_id int references attr_def (id), website_id int
references website (id), value varchar);
If all of your attributes in website are single-valued then you can
remove id from
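Filled out with types and keys, the proposed layout might look like the sketch below (table and column names are taken from the proposal above; the sample query is an assumption about how lookups would run):

```sql
CREATE TABLE website (
    id  int PRIMARY KEY,
    url varchar NOT NULL
);

CREATE TABLE attr_def (
    id   int PRIMARY KEY,
    name varchar NOT NULL UNIQUE
);

CREATE TABLE attr_val (
    id         int PRIMARY KEY,
    def_id     int REFERENCES attr_def (id),
    website_id int REFERENCES website (id),
    value      varchar
);

-- fetch all attributes captured for one website
SELECT d.name, v.value
FROM attr_val v
JOIN attr_def d ON d.id = v.def_id
WHERE v.website_id = 1;
```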
Hi,
I am trying to load the libpq library of PostgreSQL 9.0 on
Linux (Red Hat 3.4.6-3) and it is failing,
whereas I could successfully load the libpq library of PostgreSQL 8.4
using the same code base.
Does PostgreSQL 9.0 support the Red Hat 3.4.6-3 version?
Thanks,
Trupti
On Mon, 2011-01-03 at 17:23 +0530, Trupti Ghate wrote:
I am trying to load the libpq library of PostgreSQL 9.0 on
Linux (Red Hat 3.4.6-3) and it is failing.
What is the exact Red Hat release? Please send the output of
cat /etc/redhat-release
Regards,
--
Devrim GÜNDÜZ
PostgreSQL
Exact release is:
[r...@tndev-linux-32 Adm]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
[r...@tndev-linux-32 Adm]#
2011/1/3 Devrim GÜNDÜZ dev...@gunduz.org
On Mon, 2011-01-03 at 17:23 +0530, Trupti Ghate wrote:
I am trying to load the libpq
On Mon, 2011-01-03 at 17:26 +0530, Trupti Ghate wrote:
[r...@tndev-linux-32 Adm]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
Ok, here are the RPMs for that:
For 32-bit:
http://yum.pgrpms.org/9.0/redhat/rhel-4-i386/repoview/letter_p.group.html
For
Hello,
On 03.01.11 12:46, Radosław Smogura wrote:
I can propose something like this:
website (id int, url varchar);
attr_def (id int, name varchar);
attr_val (id int, def_id int references attr_def (id), website_id int
references website (id), value varchar);
If all of your attributes in website
Where can I find the binary distribution for Rhel4-i386?
2011/1/3 Devrim GÜNDÜZ dev...@gunduz.org
On Mon, 2011-01-03 at 17:26 +0530, Trupti Ghate wrote:
[r...@tndev-linux-32 Adm]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
Ok, here are the RPMs for
Dear all,
A very-very Happy New Year 2011 to all. May God Bless all of us to solve
future problems.
Thanks and Regards
Adarsh Sharma
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
Happy new year
but spare me from any blessings, please
/Nicklas
On 2011-01-03, Adarsh Sharma wrote:
Dear all,
A very-very Happy New Year 2011 to all. May God Bless all of us to solve
future problems.
Thanks and Regards
Adarsh Sharma
Hi,
Thanks for the replies. I was tempted to accept Radosław Smogura's
proposal. There will be about 100 websites to capture data from on a daily basis.
Each website adds about 2 articles per day on average.
Thomas talked about the NoSQL possibility. What do you think would be
better? I have no experience in
2010/12/30 pasman pasmański pasma...@gmail.com
Hello.
I use Postgres 8.4.5 via Perl DBI,
and I am trying to use cursors WITH HOLD to materialize
often-used queries.
My question is how many cursors may be
declared per session and which memory settings
to adjust for them?
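For reference, a WITH HOLD cursor survives the end of its transaction because the server materializes the result set at commit into a tuplestore whose in-memory portion is bounded by work_mem (it spills to temp files beyond that). A minimal sketch, with placeholder table and cursor names:

```sql
BEGIN;
DECLARE freq_query CURSOR WITH HOLD FOR
    SELECT * FROM some_table WHERE active;
COMMIT;                      -- result set is materialized here and kept

FETCH 10 FROM freq_query;    -- still usable after commit
CLOSE freq_query;            -- frees the held result set
```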
I believe there's
Thanks for the reply.
I did some checking and
some queries speed up very well :)
pasman
Hello,
On 03.01.11 14:14, Andre Lopes wrote:
Hi,
Thanks for the replies. I was tempted to accept Radosław Smogura's
proposal. There will be about 100 websites to capture data from on a daily basis.
Each website adds about 2 articles per day on average.
Thomas talked about the NoSQL possibility. What do
Hello,
through some obscure error (probably on my side)
I have several thousand entries for Jan 1 and Jan 2
ending up in the ISO week 2011-52 instead of 2010-52
which breaks the bar chart at the top of my script
http://preferans.de/user.php?id=OK504891003571
# select * from pref_money where
2011/1/3 Alexander Farber alexander.far...@gmail.com:
Hello,
through some obscure error (probably on my side)
I have several thousand entries for Jan 1 and Jan 2
ending up in the ISO week 2011-52 instead of 2010-52
which breaks the bar chart at the top of my script
Thank you Pavel, it has worked:
# update pref_money as m1 set money=money+coalesce((select money from
pref_money as m2 where m1.id=m2.id and m2.yw='2011-52'),0) where
m1.yw='2010-52';
UPDATE 2081
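For the archives: a common cause of this off-by-one-year symptom is pairing the Gregorian year field YYYY with the ISO week field IW in to_char(); the ISO year field IYYY must be used with IW. This is a general note, not a claim about the exact code that builds pref_money.yw:

```sql
SELECT to_char(date '2011-01-01', 'YYYY-IW') AS mixed,  -- '2011-52' (wrong pairing)
       to_char(date '2011-01-01', 'IYYY-IW') AS iso;    -- '2010-52' (correct)
```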
On 2011-01-03 06:29, Karen Springer wrote:
We are running RHEL 4.1 which is why the newer version did not install with
RHEL.
RHEL 4.1 should be offering pgsql 8.1.15 in the apps channel
(Red Hat Application Stack v1).
- Jeremy
On Monday 03 January 2011 12:48:22 am Thomas Schmidt wrote:
Thanks a lot - that's what I need. :-)
By the way, what about indexes?
http://www.postgresql.org/docs/9.0/interactive/populate.html suggests
removing indexes before importing via COPY (for obvious reasons).
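The drop-then-recreate pattern from the populate docs, sketched for a hypothetical staging table and index:

```sql
DROP INDEX IF EXISTS staging_data_idx;      -- hypothetical index name

COPY staging_data FROM STDIN WITH CSV;      -- bulk load without index maintenance

CREATE INDEX staging_data_idx ON staging_data (imported_at);
```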
Does pgloader take indexes
Joel Jacobson j...@gluefinance.com writes:
2011/1/2 Tom Lane t...@sss.pgh.pa.us
The thing you're missing is that implicit dependencies are really
bidirectional:
So, basically it's not possible to define a recursive query only making use
of pg_depend to build an entire dependency tree of all
On Sunday 02 January 2011 11:12:25 pm Karen Springer wrote:
Hi Adrian,
Yes, the complaints have increased with the number of rows and the
number of users accessing the DB.
The problem record looks like this.
BarCode: W205179
PartNumber: 380-013
LRU: 380-013
PartsListRev
2011/1/3 Tom Lane t...@sss.pgh.pa.us:
select refobjid ... where objid matches and deptype = 'i'
then it'd be easy, but you only get one UNION ALL per recursive query.
Ah, I see! Thanks for explaining. Now I get it.
--
Best regards,
Joel Jacobson
Glue Finance
Hi all
I have a PostgreSQL 9.0.1 instance, with WAL Archiving.
Today, after some failed attempts to archive a WAL file, it stopped trying
to archive the files,
but the number of log files in the pg_xlog directory keeps growing.
Any ideas of what is going on?
Norberto
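One way to check (on 9.0) whether the archiver is still queueing work is to list the flag files in archive_status; each .ready file is a WAL segment still waiting for archive_command to succeed (paths as in a default data-directory layout):

```sql
SELECT pg_ls_dir('pg_xlog/archive_status') AS status_file;
-- *.ready = segment still waiting to be archived (pg_xlog keeps growing)
-- *.done  = segment archived successfully
```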
On Sun, 2011-01-02 at 10:31 +0100, Dick Kniep wrote:
Hi list,
Thanks for the clear answer. However, this is the simple answer that
is also in the manual. Yes I know it is not directly possible to get
that data, but I am quite desparate to get the data back. If one way
or another the data is
On Sunday 02 January 2011 at 01:31 -0700, Karen Springer wrote:
We are using PostgreSQL 8.1.4 on Red Hat, Microsoft Access 2002
That is one of the worst versions of Access ever. Lots of bugs.
Do try another version (2000 and 2003 are much better) and see if the
problem persists.
--
Vincent
On 3 January 2011 11:22, Thomas Schmidt postg...@stephan.homeunix.net wrote:
Thus - do you have any clue on designing a fast bulk-import for staging
data?
As you're talking about STDIN ... have you considered
piping the input-data through awk or sed to achieve a
pre-populated empty meta data
The archiver process will retry later; it never stops trying, the sleep
time just gets longer.
2011/1/3, Norberto Delle betode...@gmail.com:
Hi all
I have a PostgreSQL 9.0.1 instance, with WAL Archiving.
Today, after some failed attempts to archive a WAL file, it stopped trying
to archive the files,
but
On 01/03/2011 12:11 PM, Andre Lopes wrote:
[snip]
The problem with this task is that the information is not linear; if I
try to design tables with fields for all possible data I will end up
with many row fields with NULL values. Is there any problem with
this (ending up with many row fields
Andre,
From a distant view of your problem I would like to vote for Thomas
Kellerer's proposal:
Maintain only the data you need (to enhance import/sync performance)
and use the hstore data type (as long as query performance is ok).
Yours, S.
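A minimal sketch of the hstore variant mentioned above (column and key names are illustrative; on pre-9.1 installs hstore is loaded from the contrib SQL script rather than CREATE EXTENSION):

```sql
CREATE EXTENSION hstore;

CREATE TABLE capture (
    id      serial PRIMARY KEY,
    website varchar NOT NULL,
    attrs   hstore            -- only the keys each site actually provides
);

INSERT INTO capture (website, attrs)
VALUES ('site-a', 'name => Don, age => 31'),
       ('site-b', 'name => Peter, age => 28, car => ford, km => 2000');

SELECT website, attrs -> 'car' AS car
FROM capture
WHERE attrs ? 'car';          -- only rows that have the key at all
```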
2011/1/3 Fredric Fredricson
On 2011-01-03, Alexander Farber alexander.far...@gmail.com wrote:
Hello,
through some obscure error (probably on my side)
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | character varying(32) |
This explains my problem, thanks!
On Mon, Jan 3, 2011 at 7:52 PM, Jasen Betts ja...@xnet.co.nz wrote:
On 2011-01-03, Alexander Farber alexander.far...@gmail.com wrote:
through some obscure error (probably on my side)
Column | Type | Modifiers
I have a JDBC-based application which passes date/time parameters using JDBC
query parameters, which is performing very badly (i.e. doing full table scans).
In an effort to try to narrow down the problem, I am taking the query and
running it in interactive SQL mode, but changing the date
On Mon, Jan 3, 2011 at 2:48 PM, Kurt Westerfeld kwesterf...@novell.com wrote:
I have a JDBC-based application which passes date/time parameters using JDBC
query parameters, which is performing very badly (i.e. doing full table
scans). In an effort to try to narrow down the problem, I am taking
On 01/02/2011 11:19 PM, Dick Kniep wrote:
Hi list,
Thanks for the clear answer. However, this is the simple answer that is also in
the manual. Yes I know it is not directly possible to get that data, but I am
quite desperate to get the data back. If one way or another the data is (except
for
My application creates/uses a temporary table X via multiple
connections at the same time. Is there a way to determine which
pg_temp_N belongs to the current connection?
I need this to obtain a list of attributes for the temporary table...
All connections are using the same temp table name (by
Konstantin Izmailov pgf...@gmail.com writes:
My application creates/uses a temporary table X via multiple
connections at the same time. Is there a way to determine which
pg_temp_N belongs to the current connection?
It seems unlikely that you need to determine that explicitly to solve
your
Tom, thank you for the suggestion - it looks like it is working!
I've found another solution by looking into psql source code:
nspname like 'pg_temp%' AND pg_catalog.pg_table_is_visible(C.oid)
can also be added to the query for the purpose.
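Putting the quoted predicate to work, a sketch of the attribute lookup for the current session's temp table (the table name 'x' is a placeholder):

```sql
SELECT a.attname,
       pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
WHERE c.relname = 'x'
  AND n.nspname LIKE 'pg_temp%'
  AND pg_catalog.pg_table_is_visible(c.oid)  -- only this session's copy
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```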
On 1/3/11, Tom Lane t...@sss.pgh.pa.us wrote:
I am using PostgreSQL 9.0.1.
I want to know which is better regarding VACUUM - routine manual VACUUM
or autovacuum.
Recently, I found in my production that some tables were not vacuumed for a
good period of time when the autovacuum was enabled and the DB was slow. I
vacuumed the DB manually
On 01/04/2011 04:45 PM, AI Rumman wrote:
I am using PostgreSQL 9.0.1.
I want to know which is better regarding VACUUM - routine manual VACUUM
or autovacuum.
Recently, I found in my production that some tables were not vacuumed
for a good period of time when the autovacuum was enabled and
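Two sanity checks worth running before giving up on autovacuum (the view and storage-parameter names are standard; the table name is a placeholder):

```sql
-- when was each table last vacuumed, and by which mechanism?
SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
FROM pg_stat_user_tables
ORDER BY last_autovacuum NULLS FIRST;

-- make autovacuum visit a hot table sooner instead of vacuuming by hand
ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.05);
```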
On 3 Jan 2011, at 23:48, Kurt Westerfeld wrote:
I have a JDBC-based application which passes date/time parameters using JDBC
query parameters, which is performing very badly (i.e. doing full table
scans). In an effort to try to narrow down the problem, I am taking the
query and running it
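One way to reproduce the parameterized case in psql is PREPARE plus EXPLAIN EXECUTE, which plans with a bound parameter much as the JDBC driver's server-side prepared statement does (the query shape and column names here are invented for illustration):

```sql
PREPARE by_ts (timestamptz) AS
    SELECT * FROM events WHERE created_at >= $1;

EXPLAIN EXECUTE by_ts (now() - interval '1 day');  -- plan as seen with a parameter
```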