John R Pierce wrote:
all the enterprise SAN guys I've talked with say the Intel x25 drives
are consumer junk, about the only thing they will use is STEC Zeus, and
even then they mirror them.
A couple of points there.
1) Mirroring flash drives is a bit ill-advised, since flash has a rather
pr
Alan McKay wrote:
On Thu, Mar 25, 2010 at 4:15 PM, Scott Marlowe wrote:
These questions always get the first question back, what are you
trying to accomplish? Different objectives will have different
answers.
We have a real-time application that processes data as it comes in.
Doing some simp
Hi,
Can anyone point me at a comprehensive example of statement (as opposed to
row) triggers? I've googled it and looked through the documentation, but
couldn't find a complete example relevant to what I'm trying to do.
Specifically, what features of the SQL statement that triggered the event
are
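For reference, a minimal statement-level trigger looks like the sketch below (table and function names are hypothetical, PL/pgSQL assumed). A statement trigger fires once per statement and, in the PostgreSQL versions of this era, cannot see which rows the statement touched; only TG_OP, TG_TABLE_NAME and friends describe the triggering statement.

```sql
-- Hypothetical example: audit every bulk UPDATE against "accounts".
CREATE OR REPLACE FUNCTION log_bulk_update() RETURNS trigger AS $$
BEGIN
    -- No NEW/OLD here: statement triggers have no row context.
    INSERT INTO audit_log (table_name, operation, fired_at)
    VALUES (TG_TABLE_NAME, TG_OP, now());
    RETURN NULL;  -- return value is ignored for statement triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_stmt_audit
    AFTER UPDATE ON accounts
    FOR EACH STATEMENT
    EXECUTE PROCEDURE log_bulk_update();
```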
Scott Marlowe wrote:
On Jan 19, 2008 6:14 PM, Gordan Bobic <[EMAIL PROTECTED]> wrote:
Gregory Youngblood wrote:
On Sat, 2008-01-19 at 23:46 +, Gordan Bobic wrote:
David Fetter wrote:
In that case, use one of the existing solutions. They're all way
easier than re-inventin
Gregory Youngblood wrote:
On Sat, 2008-01-19 at 23:46 +, Gordan Bobic wrote:
David Fetter wrote:
> In that case, use one of the existing solutions. They're all way
> easier than re-inventing the wheel.
Existing solutions can't handle multiple masters. MySQL can do it at
David Fetter wrote:
That's just it - I don't think any user-land libraries would
actually be required. One of the supposed big advantages of MySQL is
its straightforward replication support. It's quite painful to
see PostgreSQL suffer purely for a lack of marketing in
this department. :-
Andreas 'ads' Scherbaum wrote:
Have a plperl function that creates connections to all servers in the
cluster (replication partners), and issues the supplied write query to
them, possibly with a tag of some sort to indicate it is a replicated
query (to prevent circular replication).
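A rough sketch of that idea (plperlu assumed, since trusted plperl cannot load DBI; the DSNs, credentials, and the comment-tag convention are all hypothetical):

```sql
CREATE OR REPLACE FUNCTION replicate_write(text) RETURNS void AS $$
    use DBI;
    my ($sql) = @_;
    # Already carries the replication tag: came from a partner, stop here
    # so the write does not loop around the cluster.
    return if $sql =~ /^\/\*replicated\*\//;
    my @partners = ('dbi:Pg:dbname=app;host=node2',
                    'dbi:Pg:dbname=app;host=node3');   # assumed partner DSNs
    for my $dsn (@partners) {
        my $dbh = DBI->connect($dsn, 'repl', 'secret', { RaiseError => 1 });
        $dbh->do('/*replicated*/ ' . $sql);            # forward, tagged
        $dbh->disconnect;
    }
$$ LANGUAGE plperlu;
```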
Have a p
Andrew Sullivan wrote:
On Fri, Jan 18, 2008 at 04:09:45PM +, [EMAIL PROTECTED] wrote:
That's just it - I don't think any user-land libraries would actually be
required. One of the supposed big advantages of MySQL is its straightforward
replication support. It's quite painful to see PostgreSQL
Mike Rylander wrote:
On Wed, 30 Mar 2005 12:07:06 +0100, Gordan Bobic <[EMAIL PROTECTED]> wrote:
Hi,
How difficult is it to write a driver for pgsql (via network or UNIX
domain sockets) for an as yet unsupported language?
Specifically, I'd like a driver for JavaScript, for use with Mo
Hi,
How difficult is it to write a driver for pgsql (via network or UNIX
domain sockets) for an as yet unsupported language?
Specifically, I'd like a driver for JavaScript, for use with Mozilla
JSLib/XPCShell.
I presume there isn't one already, so I guess I'll have to write one.
So, where can
Csaba Nagy wrote:
DELETE FROM Temp1 WHERE Test = 'test3';
ERROR: syntax error at or near "$2" at character 44
QUERY: INSERT INTO Temp2 (ID, test) VALUES ( $1 $2 )
CONTEXT: PL/pgSQL function "temp1_trigger_delete" line 2 at SQL statement
LINE 1: INSERT INTO Temp2 (ID, test) VALUES ( $1 $2 )
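The error text itself points at the cause: there is no comma between the two VALUES parameters, so the parser chokes on `$2`. A corrected sketch of the trigger function (reconstructed from the error context; the OLD.* column names are an assumption):

```sql
CREATE OR REPLACE FUNCTION temp1_trigger_delete() RETURNS trigger AS $$
BEGIN
    -- The missing comma between the two values is what produced
    -- "syntax error at or near \"$2\"".
    INSERT INTO Temp2 (ID, test) VALUES (OLD.ID, OLD.Test);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```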
Richard Huxton wrote:
Gordan Bobic wrote:
Hi,
I'm trying to figure out how to do this from the documentation, but I
can't figure it out. :-(
Here is what I'm trying to do:
CREATE TABLE MyTable
(
ID bigserial unique,
MyData char(255),
PRIMARY KEY (ID)
Hi,
I'm trying to figure out how to do this from the documentation, but I
can't figure it out. :-(
Here is what I'm trying to do:
CREATE TABLE MyTable
(
ID bigserial unique,
MyData char(255),
PRIMARY KEY (ID)
);
CREATE TABLE Archive_MyTable
(
ID bigseria
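Judging by the table names, the usual way to wire these together is a row-level DELETE trigger that copies the row into the archive table before it disappears. A hedged sketch, assuming Archive_MyTable mirrors MyTable's columns:

```sql
CREATE OR REPLACE FUNCTION archive_mytable() RETURNS trigger AS $$
BEGIN
    -- Preserve the outgoing row, then let the DELETE proceed.
    INSERT INTO Archive_MyTable (ID, MyData) VALUES (OLD.ID, OLD.MyData);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_archive
    BEFORE DELETE ON MyTable
    FOR EACH ROW EXECUTE PROCEDURE archive_mytable();
```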
Hi.
After looking again at the other email I've sent earlier, I realized that it
goes on for far too long, so I'll try to summarize my question more briefly.
1) How can FTI be made to NOT break up words into sub-strings? Most of those
are likely to be useless in my particular application. In f
[Broken SQL instead of performance issue fixed]
It would appear that when I define the index on the FTI table (string and
oid) to be unique (which makes sense, since there is little point in having
duplicate rows in this case), a lot of inserts fail where they shouldn't. I
am guessing that if
On Monday 15 Oct 2001 13:35, Joseph Koenig wrote:
> Your solution sounds very interesting (Not the throw away NT
> part...)
That is where a significant part of the performance improvement would come
from, if performance was what you were after...
> ...does anyone else have any input on this? W
On 12 Oct 2001, Doug McNaught wrote:
> Joseph Koenig <[EMAIL PROTECTED]> writes:
>
> > I have a project where a client has products stored in a large Progress
> > DB on an NT server. The web server is a FreeBSD box though, and the
> > client wants to try to avoid the $5,500 license for the Unlimi
Or try:
http://pgreplicator.sourceforge.net/
Haven't used it myself yet, but it looks pretty good...
> > Now, erserver seems to work, but it needs a bit of hacking around
> > that I hadn't done yet. Maybe when I get it working I'll see to
> > writing something. In the mean time, source code is the
Not sure, but the syntax is as I described below. Try checking the
perl DBD::Pg documentation. I think that's where I read about it
originally, many moons ago.
> Just checked the Pg docs, don't see a quote function. What is it part of?
>
>
> > Are you using the "quote" function? You have to use i
Are you using the "quote" function? You have to use it if you are to
guarantee that the data will be acceptable as "input".
$myVar = $myDB -> quote ($myVar)
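What quote() does, roughly, is double any embedded single quotes and wrap the value in quotes, so that user data cannot terminate the literal early. A small illustration (the table and column names are hypothetical):

```sql
-- Without quoting, the embedded quote ends the literal early:
--   INSERT INTO people (name) VALUES ('O'Brien');    -- syntax error
-- After $myDB->quote($myVar) the statement is well-formed:
INSERT INTO people (name) VALUES ('O''Brien');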
> I'm using the Pg perl interface. But I think my problem was that I had
> unescaped single quotes in the string. Added the following to my
> > As for those OS wars - are there any serious Linux sysadmins who
> > don't have a copy of "Linux System Security" next to the server?
>
> I run Linux on everything, and I don't have a copy of that book next
> to any of my machines. Then again, I don't run Redhat.
I don't have a copy of it ei
> > A note about SCSI vs IDE... I have recently tried both on a dual
> > P3 with 1 GB of RAM running Mandrake 7.2. I was amazed the idle
> > CPUs were running near 20-23% with nothing other than a bash shell
> > running on 2 IBM IDE ATA 100 drives. I converted to 2 IBM SCSI U2
> > drives and the idle
(Frequent Access)
If you just have lots of queries in parallel, try replication, and
pick a random server for each connection.
(Complex Queries)
If you absolutely, positively need one query to be executed across all
nodes in the cluster because one machine would just take too long no
matter how b
> > I am very new to PostgreSQL and have installed v7.0.3 on a Red Hat
> > Linux server (v6.2). I am accessing the files using JDBC from a
> > Windows 2000 PC.
> >
> > I have created a small table as follows:
> > CREATE TABLE expafh (
> > postcode CHAR(8) NOT NULL,
> > postcode_record_no INT,
> > street
> It works quite well (designing a web-based system on it right now),
> but because of a DBD::Pg limit, I can only get 8k into a 'text'
> field. So if your app is web-based, you might want to not use perl...
Umm... I'm not sure what you're talking about here. I think you are
referring to the 8KB
> For one of our customers, we are running a PostgreSQL database on a
> dynamic PHP-driven site. This site has a minimum of 40 visitors at a
> time and must be responsive 24h a day.
And from the bandwidth and hit logs, they cannot determine a time of day
when there are hardly any hits? Possible
> > If I do a view that produces the data I want through joins, it
> > takes hours, even with all fields indexed, and after VACUUM
> > ANALYZE. Doing SET enable_seqscan = off doesn't seem to make any
> > difference. The query plan changes, but select times are still
> > roughly the same... Doing the
> > SELECT * FROM Table1 INNER JOIN Table2 ON (Table1.Field1 =
> > Table2.Field1)
> > WHERE Table1.Field1 = 'SomeValue';
> > [ is slow, but this is fast: ]
> > SELECT * FROM Table1 INNER JOIN Table2 ON (Table1.Field1 =
> > Table2.Field1)
> > WHERE Table1.Field1 = 'SomeValue' AND Table2.Field1 = 'S
I am not sure if this is a bug, an oversight or something else entirely,
but it would appear that if two tables, Table1 and Table2, are joined
using INNER JOIN, a WHERE equality on one of the join fields doesn't
automatically get propagated to the other field.
For example:
SELECT
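Until the planner propagates the equality across the join by itself, the workaround is to state the restriction against both sides explicitly. A sketch using the same hypothetical tables:

```sql
-- The planner only sees the restriction on Table1, so Table2's index
-- may go unused:
SELECT * FROM Table1 INNER JOIN Table2 ON (Table1.Field1 = Table2.Field1)
WHERE Table1.Field1 = 'SomeValue';

-- Restating the (logically redundant) equality on Table2 lets its
-- index be considered as well:
SELECT * FROM Table1 INNER JOIN Table2 ON (Table1.Field1 = Table2.Field1)
WHERE Table1.Field1 = 'SomeValue'
  AND Table2.Field1 = 'SomeValue';
```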
Is there a way to tune queries?
I'm doing queries that join around 5-6 tables. All join fields are indexed
either in hash (where tables are small enough and join is done on "="), or
btree (big tables, not joined on "="). The tables have between several
hundred and several tens of millions of reco
Hi.
I have just upgraded from v7.0.3 to v7.1b3, and one of the things I am
noticing is that doing a max() query search seems to take forever.
For example, if I have a view like:
CREATE VIEW LastDate AS
SELECT Company,
       max(Date) AS Date
FROM PastInvoices
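In PostgreSQL versions of this era, max() could not use an index: it always scanned the whole table. The usual workaround was to rewrite the aggregate as ORDER BY ... DESC LIMIT 1, which can walk a btree backwards. A hedged sketch against the view's base table (assumes an index on (Company, Date); the literal company value is hypothetical):

```sql
SELECT Date
FROM PastInvoices
WHERE Company = 'SomeCompany'
ORDER BY Company DESC, Date DESC
LIMIT 1;
```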
> > [...]
> > Isn't this just as bad? If you store the encrypted password, that
> > doesn't help you in the slightest in this case, because if you can
> > breach the list of encrypted passwords, you still know what you
> > need to send as the "password" from the front end to let you into
> > the data
> Advanced tools do have advanced safety features, but are sold "ready
> for most use", not "safely disabled until you read all of the manuals
> so you can figure out how to make it work decently". I agree that
> reading the manuals is an important part of learning a new tool,
> but it shouldn't b
> I usually just run 'crypt()' on the clear text before storing it to the
> backend ...
Isn't this just as bad? If you store the encrypted password, that doesn't
help you in the slightest in this case, because if you can breach the list
of encrypted passwords, you still know what you need to send
[tuning analogies snipped]
>
> Likewise with self-proclaimed computer tuners.
You have no idea how much I agree with you there.
> > I really don't understand why people expect computers to do everything
> > for them, the burden of using tools properly belongs to the user.
>
> I of course agree i
> >>>Actually, if he ran Postgresql with WAL enabled, fsync shouldn't
> >>>make much of a difference.
>
> WAL seems to be enabled by default. What WAL is good for I do not
> know. But if I start PostgreSQL without the -S I see a lot of info
> about WAL this and WAL that.
You seem to be too hung u
> As for the drive in that machine, doing inserts on it was SLOW.
> Slower even than on our beater development machine. I suppose I
> could have fiddled with hdparm to increase the disk I/O, but that
> would have been a temporary fix at best. Our CGI applications were
> eating lots of CPU time,
Have the RPMs been published yet? I seem to remember somebody saying that
they should be on the web site by last weekend, but I can't find them.
A link would be appreciated... I need some of the new features, but I'd
rather avoid working out all the strange file locations (i.e. not
/usr/local)
> > Then you can connect to any of the postgres servers in your
> > cluster, for example round robin.
> >
> > Hmm... But is this really what we want to do? This is less than
> > ideal for several reasons (if I understand what you're saying
> > correctly). Replication is off-line for a start, and it only wo
> > > I am considering splitting the database into tables residing on
> > > separate machines, and connecting them on one master node.
> > >
> > > The question I have is:
> > >
> > > 1) How can I do this using PostgreSQL?
> >
> > You can't.
>
> I'll jump in with a bit more info. Splitting tables
Sorry for replying to my own email, but I've just stumbled upon an article
that seems to imply that v7.1 will support unlimited record lengths. Is
this the case? When is v7.1 due for release? Is a beta available?
Thanks.
Gordan
- Original Message -
From: "Gordan Bobi
Hi!
I've got a bit of a problem with tuple size limits. They seem to be limited
to 8K, which is not enough for me, in the current application. Ideally, I'd
like to bump this up to at least 256K, although 512K would be nice. This is
for the purpose of storing large text fields, although it's usefu