I just finished converting and loading the US census data into PostgreSQL
would anyone be interested in it for testing purposes?
It's a *LOT* of data (about 40+ Gig in PostgreSQL)
---(end of broadcast)---
TIP 6: explain analyze is your friend
-0400, Mark Woodward wrote:
I just finished converting and loading the US census data into
PostgreSQL
would anyone be interested in it for testing purposes?
It's a *LOT* of data (about 40+ Gig in PostgreSQL)
Sure. Got a torrent?
How big is it when dumped and compressed?
cheers
andrew
I was reminded again today of the problem that once a database has been
in existence long enough for the OID counter to wrap around, people will
get occasional errors due to OID collisions, eg
http://archives.postgresql.org/pgsql-general/2005-08/msg00172.php
Getting rid of OID usage in user
* Mark Woodward ([EMAIL PROTECTED]) wrote:
I just finished converting and loading the US census data into
PostgreSQL
would anyone be interested in it for testing purposes?
It's a *LOT* of data (about 40+ Gig in PostgreSQL)
How big dumped compressed? I may be able to host it depending
* Mark Woodward ([EMAIL PROTECTED]) wrote:
How big dumped compressed? I may be able to host it depending on how big it ends up being...
It's been running for about an hour now, and it is up to 3.3G.
Not too bad. I had 2003 (iirc) loaded into 7.4 at one point.
Cool.
pg_dump tiger
It's been running for about an hour now, and it is up to 3.3G.
pg_dump tiger | gzip > tiger.pgz
| bzip2 > tiger.sql.bz2 :)
I find bzip2 FAR slower than the gain in compression justifies.
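The pipelines above, written out in full: a sketch assuming a database named tiger and stock gzip/bzip2 on the PATH (note the compressors read stdin and write stdout, so the redirection is what names the output file).

```shell
# Stream the dump straight into a compressor; no uncompressed
# intermediate file ever touches the disk. gzip is much faster,
# bzip2 compresses somewhat tighter.
pg_dump tiger | gzip  > tiger.sql.gz
pg_dump tiger | bzip2 > tiger.sql.bz2

# Restore goes through the matching decompressor:
gunzip -c tiger.sql.gz | psql tiger_restore
```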
I haven't seen this option, and does anyone think it is a good idea? An option to pg_dump, and maybe pg_dumpall, that dumps only the table declarations and the data. No owners, tablespaces, nothing.
This, I think, would allow more generic PostgreSQL data transfers.
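A rough approximation of the requested behavior is possible with flags that pg_dump grew over time; this is a sketch, assuming a database named mydb and a pg_dump recent enough to have -O/--no-owner and -x/--no-privileges (tablespace clauses may still need a separate switch where available).

```shell
# Dump table definitions and data only: -O suppresses ownership
# commands, -x suppresses GRANT/REVOKE privilege commands, which
# yields a more portable plain-SQL dump.
pg_dump -O -x mydb > mydb_portable.sql
```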
Mark Woodward [EMAIL PROTECTED] writes:
Why is there a collision? It is because the number range of an OID is currently smaller than the possible usage.
Expanding OIDs to 64 bits is not really an attractive answer, on several
grounds:
1. Disk space.
I don't really see this as a problem
Mark Woodward [EMAIL PROTECTED] writes:
2. Performance. Doing this would require widening Datum to 64 bits,
which is a system-wide performance hit on 32-bit machines.
Do you really think it would make a measurable difference, more so than
your proposed solution? (I'm skeptical it would
It is 4.4G in space in a gzip package.
I'll mail a DVD to two people who promise to host it for Hackers.
On Thursday, 2005-08-04 at 10:26 -0400, Mark Woodward wrote:
I haven't seen this option, and does anyone think it is a good idea? An option to pg_dump, and maybe pg_dumpall, that dumps only the table declarations and the data. No owners, tablespaces, nothing.
This, I think, would allow
Mark Woodward [EMAIL PROTECTED] writes:
I'm too lazy to run an experiment, but I believe it would. Datum is
involved in almost every function-call API in the backend. In
particular this means that it would affect performance-critical code
paths.
I hear you on the lazy part, but if OID
Mark Woodward [EMAIL PROTECTED] writes:
Actually, there isn't a setting to just dump the table definitions and the data. When you dump the schema, it includes all the tablespaces, namespaces, owners, etc.
Just the table and object declarations and data would be useful.
pg_dump -t table ?
I
-formatted database?
I would say the preformatted database is easier to manage. There are hundreds of individual zip files, and in each of those files 10 or so data files.
Mark Woodward wrote:
It is 4.4G in space in a gzip package.
I'll mail a DVD to two people who promise to host it for Hackers
Hello,
As I have been laboring over the documentation of the postgresql.conf file for 8.1dev, it seems that it may be useful to rip out most of the options in this file.
Considering many of the options can already be altered using SET why
not make it the default for many of them?
Well, if you want PostgreSQL to act a specific way, then you are going to have to set up the defaults somehow, right?
Of course, which is why we could use a global table for most of it.
What if you wish to start the same database cluster with different settings?
Which is cleaner? Using a
Christopher Kings-Lynne wrote:
I really don't intend to do that, and it does seem to happen a lot. I am the first to admit I lack tact, but often times I view the decisions made as rather arbitrary and lacking a larger perspective, but that is a rant I don't want to get into right now.
I'm currently adding support for the v3 protocol in PHP pgsql extension.
I'm wondering if anyone minds if I lift documentation wholesale from
the PostgreSQL docs for the PHP docs for these functions. For instance,
the fieldcodes allowed for PQresultErrorField, docs on
PQtransactionStatus,
Mark Woodward wrote:
Christopher Kings-Lynne wrote:
I really don't intend to do that, and it does seem to happen a lot. I am the first to admit I lack tact, but often times I view the decisions made as rather arbitrary and lacking a larger perspective, but that is a rant I
Uh, but that's what the BSD license allows --- relicensing as any other
license, including commercial.
The point remains that Chris, by himself, does not hold the copyright on
the PG docs and therefore cannot assign it to anyone.
ISTM the PHP guys are essentially saying that they will only
Tom Lane wrote:
You can't just randomly rearrange the pg_enc enum without forcing an
initdb, because the numeric values of the encodings appear in system
catalogs (eg pg_conversion).
Oh, those numbers appear in the catalogs? I didn't realize that.
I will force an initdb.
Doesn't that
Mark Woodward wrote:
I would say that The PostgreSQL Global Development Group or its
representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just
has to give something written, that says Christopher Kings-Lynne of
your address, city, country, etc has the right to re-license
Peter Eisentraut wrote:
Mark Woodward wrote:
I would say that The PostgreSQL Global Development Group or its
representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just
has to give something written, that says Christopher Kings-Lynne of
your address, city, country, etc has
Mark Woodward wrote:
As the copyright owner, The PostgreSQL Global Development Group,
has the right to license the documentation any way they see fit. For
PHP to sub-license the documentation, it legally has to be transferred in writing. Verbal agreements are not valid.
The PostgreSQL Global
Mark Woodward [EMAIL PROTECTED] writes:
Sorry, that's not true. At least in the USA, any entity that can be identified can own and control copyright. While it is true, however, that there can be ambiguity, an informal body, say anarchists for stronger government, without charter
Hi there,
while learning Inkscape I did a sketch of a picture describing the history of relational databases. It's available from
http://mira.sai.msu.su/~megera/pgsql/
Is there a direct line from INGRES to Postgres? I was under the impression
that Postgres is a new lineage started after INGRES
On Mon, 28 Mar 2005, Mark Woodward wrote:
Hi there,
while learning Inkscape I did a sketch of a picture describing the history of relational databases. It's available from
http://mira.sai.msu.su/~megera/pgsql/
Is there a direct line from INGRES to Postgres? I was under the impression
There is an updated survey of open source developers:
http://flosspols.org/survey/survey_part.php?groupid=sd
It was very long; it says 45 questions, but many of those questions have many parts with drop-down menus. Tedious!!
Also, it seems to be looking for sexual harassment issues as well.
-Original Message-
From: Marian POPESCU [mailto:[EMAIL PROTECTED]
Sent: Friday, April 01, 2005 8:06 AM
To: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] ARC patent
Neil Conway [EMAIL PROTECTED] writes:
FYI, IBM has applied for a patent on ARC (AFAICS the
patent
I have a fairly simple extension I want to add to contrib. It is an XML
parser that is designed to work with a specific dialect.
I have a PHP extension called xmldbx, it allows the PHP system to
serialize its web session data to an XML stream. (or just serialize
variables) PHP's normal serializer
[removing -patches since no patch was attached]
This sounds highly specialised, and probably more appropriate for a
pgfoundry project.
In any case, surely the whole point about XML is that you shouldn't need to construct custom parsers. Should we include a specialised parser for every XML
Mark Woodward wrote:
XML is not really much more than a language, it says virtually nothing
about content. Content requires custom parsers.
Really? Strange, I've been dealing with it all this time without having to construct a parser. What you do need is to provide event handlers
David Fetter [EMAIL PROTECTED] writes:
aolI also think this would make a great pgfoundry project :)/aol
Yeah ... unless there's some reason that it needs to be tied to PG
server releases, it's better to put it on pgfoundry where you can
have your own release cycle.
I don't need pgfoundry,
On Sun, Jan 29, 2006 at 03:15:06PM -0500, Mark Woodward wrote:
Postgres generally seems to favor extensibility over integration, and I generally agree with that approach.
I generally agree as well, but.
I think there is always a balance between out of the box vs extensibility. I
On Mon, Jan 30, 2006 at 04:35:15PM -0500, Mark Woodward wrote:
It gets so frustrating sometimes, it isn't so black and white, there are
many levels of gray. The PostgreSQL project is trying so hard to be
neutral, that it is making itself irrelevant.
Designing and including features
On Mon, 30 Jan 2006, Mark Woodward wrote:
It gets so frustrating sometimes, it isn't so black and white, there are
many levels of gray. The PostgreSQL project is trying so hard to be
neutral, that it is making itself irrelevant.
We are making ourselves irrelevant because we encourage
I am working on an issue that I deal with a lot, there is of course a
standard answer, but maybe it is something to think about for PostgreSQL
9.0 or something. I think I finally understand what I have been fighting
for a number of years. When I have been grousing about postgresql
configuration,
Mark Woodward [EMAIL PROTECTED] writes:
One of the problems with the current PostgreSQL design is that all the
databases operated by one postmaster server process are interlinked at
some core level. They all share the same system tables. If one database
becomes corrupt because of disk
On Thu, 2 Feb 2006, Mark Woodward wrote:
Now, the answer, obviously, is to create multiple postgresql database
clusters and run postmaster for each logical group of databases, right?
That really is a fine idea, but
Say, in psql, I do this: \c newdb. It will only find the database
Mark Woodward wrote:
From an administration perspective, a single point of admin would
seem like a logical and valuable objective, no?
I don't understand why you are going out of your way to separate your
databases (for misinformed reasons, it appears) and then want to design
a way
Mark Woodward schrieb:
...
Unless you can tell me how to insert live data and indexes to a cluster without having to reload the data and recreate the indexes, then I hardly think I am misinformed. The ad hominem attack wasn't necessary.
I see you had a use case for something like pg_diff
Mark Woodward [EMAIL PROTECTED] writes:
The point is, that I have been working with this sort of use case for a number of years, and being able to represent multiple physical databases as one logical db server would make life easier. It was a brainstorm I had while I was setting this sort
On Feb 3, 2006, at 12:43, Rick Gigger wrote:
If he had multiple ips couldn't he just make them all listen only
on one specific ip (instead of '*') and just use the default port?
Yeah, but the main idea here is that you could use ipfw to forward
connections *to other hosts* if you wanted to.
On Feb 3, 2006, at 6:47 AM, Chris Campbell wrote:
On Feb 3, 2006, at 08:05, Mark Woodward wrote:
Using the /etc/hosts file or DNS to maintain host locations is a fairly common and well known practice, but there is no such mechanism for ports. The problem now becomes a code issue
Hi!!
I was just browsing the message and saw yours. I have actually written a
shared memory system for PostgreSQL.
I've done some basic bench testing, and it seems to work, but I haven't
given it the big QA push yet.
My company, Mohawk Software, is going to release a bunch of PostgreSQL
On Sun February 5 2006 16:16, Tom Lane wrote:
AFAICT the data structures you are worried about don't have any readily
predictable size, which means there is no good way to keep them in
shared memory --- we can't dynamically resize shared memory. So I think
storing the rules in a table and
On Mon February 6 2006 05:17, Mark Woodward wrote:
I posted some source to a shared memory sort of thing to the group, as
well as to you, I believe.
Indeed, and it looks rather interesting. I'll have a look through it when I have a chance...
So, after more discussion
Hello,
Is there not some other alternative to pg_hba.conf?
I have the problem where the system administrators at our company obviously have access to the whole filesystem, and our database records need to be hidden even from them.
If they have full access, then they have FULL access.
Q Beukes wrote:
Hello,
Is there not some other alternative to pg_hba.conf?
I have the problem where the system administrators at our company obviously have access to the whole filesystem, and our database records need to be hidden even from them.
With pg_hba.conf that is not possible, as
PostgreSQL promptly uses all available memory for the query and
subsequently crashes.
I'm sure it can be corrected with a setting, but should it crash?
freedb=# create table ucode as select distinct ucode from cdtitles group by ucode having count(ucode) > 1;
server closed the connection
More info: the machine has 512M RAM and 512M swap
Work mem is set to: work_mem = 1024
This shouldn't have crashed, should it?
PostgreSQL promptly uses all available memory for the query and
subsequently crashes.
I'm sure it can be corrected with a setting, but should it crash?
freedb=#
Mark Woodward [EMAIL PROTECTED] writes:
PostgreSQL promptly uses all available memory for the query and
subsequently crashes.
I'll bet a nickel this is on a Linux machine with OOM kill enabled.
What does the postmaster log show --- or look in the kernel log to
see if it mentions anything
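On a Linux machine the OOM kill leaves a trace in the kernel log; a quick way to check, assuming a typical distro layout (exact log paths vary):

```shell
# Look for the kernel's out-of-memory killer firing; a killed backend
# shows up here rather than in the postmaster log.
dmesg | grep -i -e 'out of memory' -e 'oom'
```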
Mark Woodward [EMAIL PROTECTED] writes:
->  HashAggregate  (cost=106527.68..106528.68 rows=200 width=32)
      Filter: (count(ucode) > 1)
      ->  Seq Scan on cdtitles  (cost=0.00..96888.12 rows=1927912 width=32)
Well, shouldn't hash aggregate respect work memory?
Mark Woodward [EMAIL PROTECTED] writes:
Still, I would say that is extremely bad behavior for not having stats, wouldn't you think?
Think of it as a kernel bug.
While I respect your viewpoint that the Linux kernel should not kill an
offending process if the system runs out of memory, I
Mark Woodward [EMAIL PROTECTED] writes:
I think it is still a bug. While it may manifest itself as a pg crash on Linux because of a feature with which you have issue, the fact remains that PG is exceeding its working memory limit.
The problem is that *we have no way to know what that limit
Mark Woodward [EMAIL PROTECTED] writes:
Again, regardless of OS used, hashagg will exceed working memory as
defined in postgresql.conf.
So? If you've got OOM kill enabled, it can zap a process whether it's
strictly adhered to work_mem or not. The OOM killer is entirely capable
of choosing
On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote:
Mark Woodward [EMAIL PROTECTED] writes:
Again, regardless of OS used, hashagg will exceed working memory as
defined in postgresql.conf.
So? If you've got OOM kill enabled, it can zap a process whether it's
strictly
Stephen Frost [EMAIL PROTECTED] writes:
* Tom Lane ([EMAIL PROTECTED]) wrote:
Greg Stark [EMAIL PROTECTED] writes:
It doesn't seem like a bad idea to have a max_memory parameter that, if a backend ever exceeded it, would immediately abort the current transaction.
See ulimit (or
Martijn van Oosterhout kleptog@svana.org writes:
When people talk about disabling the OOM killer, it doesn't stop the SIGKILL behaviour,
Yes it does, because the situation will never arise.
it just causes the kernel to return -ENOMEM for malloc() much much earlier... (ie when you still
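The knob being discussed here is Linux's overcommit accounting; a sketch of disabling it, assuming root and a kernel that supports strict accounting (mode 2 makes malloc() fail with ENOMEM up front rather than letting the OOM killer fire later):

```shell
# Turn off memory overcommit so allocations fail immediately
# instead of triggering the OOM killer under pressure.
sysctl -w vm.overcommit_memory=2

# Persist across reboots (conventional location; adjust per distro).
echo 'vm.overcommit_memory = 2' >> /etc/sysctl.conf
```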
Rick Gigger [EMAIL PROTECTED] writes:
However if hashagg truly does not obey the limit that is supposed to
be imposed by work_mem then it really ought to be documented. Is
there a misunderstanding here and it really does obey it? Or is
hashagg an exception but the other work_mem associated
On Fri, Feb 10, 2006 at 09:57:12AM -0500, Mark Woodward wrote:
In most practical situations, I think
exceeding work_mem is really the best solution, as long as it's not
by more than 10x or 100x. It's when the estimate is off by many
orders of magnitude that you've got a problem. Running
I was thinking about how forgetting to run analyze while developing a table loader program caused PostgreSQL to run away and use up all the memory. Is there some way that postgres or psql can know that it substantially altered the database and run analyze?
I know this is a kind of stupid question,
Mark Woodward wrote:
I know this is a kind of stupid question, but postgresql does not
behave well when the system changes in a major way without at least
an analyze. There must be something that can be done to protect the
casual user (or busy sometimes absent minded developer) from
Mark Woodward wrote:
My question was based on an observation that ANALYZE and VACUUM are necessary, both for different reasons. The system or tools must be able to detect substantial changes in the database and at least run analyze if failing to do so would cause PostgreSQL to fail badly
On 2/11/06, Andrej Ricnik-Bay wrote:
Has anyone here seen this one before? Do the values
appear realistic?
http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison
The values appear to originate from an intrinsically flawed test setup. Just take the first test. The database has to do 1000
I think we've talked about this a couple times over the years, but I'm not sure whether it was resolved or not.
The message posted about load testing and SQLite showed PostgreSQL poorly. Yeah, I know, it was the Windows port not being optimized, I can see that, but it raises something else. A good set of
Added to TODO:
o Allow pg_hba.conf to specify host names along with IP addresses
Host name lookup could occur when the postmaster reads the
pg_hba.conf file, or when the backend starts. Another
solution would be to reverse lookup the connection IP and
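At the time of this thread, host names in pg_hba.conf were only a TODO item; a hypothetical entry, once implemented, might look like the following, with a host name standing in for the usual CIDR address column (the eventual syntax is an assumption here):

```
# TYPE  DATABASE  USER  ADDRESS               METHOD
host    all       all   laptop.example.com    md5
```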
Mark Woodward wrote:
Added to TODO:
o Allow pg_hba.conf to specify host names along with IP addresses
Host name lookup could occur when the postmaster reads the pg_hba.conf file, or when the backend starts. Another solution would be to reverse lookup
If I am a road warrior I want to be able to connect, run my dynamic dns
client, and go.
HUPing the postmaster every 30 minutes sounds horrible, and won't work
for what strikes me as the scenario that needs this most. And we surely
aren't going to build TTL logic into postgres.
I repeat -
Mark Woodward wrote:
If I am a road warrior I want to be able to connect, run my dynamic dns
client, and go.
In your scenario of working as a road warrior, you are almost certainly not going to be able to have a workable DNS host name unless you have a raw internet IP address. More than
On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
Like I said, in this thread of posts, yes there are ways of doing this, and I've been doing it for years. It is just one of the rough edges that I think could be smoother.
(in php)
pg_connect("dbname=geo host=dbserver");
Could
On Sun, Feb 19, 2006 at 10:00:01AM -0500, Mark Woodward wrote:
It turns out what you like actually exists; look up the service parameter in the connect string. It will read the values for the server, port, etc. from a pg_service.conf file.
There is an example in the tree but it looks
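A sketch of what such a pg_service.conf entry looks like, assuming a service named geo (the keys mirror libpq connection parameters):

```
[geo]
host=dbserver
port=5432
dbname=geo
```

A client would then connect with something like psql "service=geo".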
On Sun, 2006-02-19 at 10:00 -0500, Mark Woodward wrote:
On Fri, Feb 03, 2006 at 08:05:48AM -0500, Mark Woodward wrote:
Like I said, in this thread of posts, yes there are ways of doing this, and I've been doing it for years. It is just one of the rough edges that I think could
Martijn van Oosterhout kleptog@svana.org writes:
I think the major issue is that most such systems (like RFC2782) deal
only with finding the hostname:port of the service and don't deal with
usernames/passwords/dbname. What we want is a system that not only
finds the service, but tells you
Mark Woodward wrote:
Don't get me wrong, DNS, as it is designed, is PERFECT for the
distributed nature of the internet, but replication of fairly static
data under the control of a central authority (the admin) is better.
What about this zeroconf/bonjour stuff? I'm not familiar
Mark Woodward [EMAIL PROTECTED] writes:
DNS isn't always a better solution than /etc/hosts; both have their pros and cons. The /etc/hosts file is very useful for instantaneous, reliable, and redundant name lookups. DNS services, especially in a large service environment, can get bogged down
The pg_config program needs to display more information, specifically
where the location of pg_service.conf would reside.
Also, I know I've been harping on this for years (literally), but since the PostgreSQL programs already have the notion that there is some static directory in which to locate
Mark Woodward wrote:
The pg_config program needs to display more information, specifically
where the location of pg_service.conf would reside.
pg_config --sysconfdir
Hmm, that doesn't show up with pg_config --help.
[EMAIL PROTECTED]:~$ pg_config --sysconfdir
pg_config: invalid argument
Mark Woodward [EMAIL PROTECTED] writes:
pg_config --sysconfdir
Hmm, that doesn't show up with pg_config --help.
It's in 8.1.
One of my difficulties with PostgreSQL is that there is no standardized location for where everything is located, i.e. self documenting. If you know that /usr
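As noted just above, 8.1's pg_config answers the configuration-location question directly; a sketch assuming pg_config 8.1 or later is on the PATH:

```shell
# Print the directory where system-wide configuration files such as
# pg_service.conf are looked up.
pg_config --sysconfdir
```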
Mark Woodward wrote:
As a guy who administers a lot of systems, sometimes over the span of years, I cannot overstate the need for a place for the admin to find what databases are on the machine and where they are located.
Your assertion that this file would only work for one root-made
Quoth [EMAIL PROTECTED] (Mark Woodward):
Mark Woodward wrote:
As a guy who administers a lot of systems, sometimes over the span of years, I cannot overstate the need for a place for the admin to find what databases are on the machine and where they are located.
Your assertion
Mark Woodward wrote:
I'm not sure that I agree. At least in my experience, I wouldn't have more than one installation of PostgreSQL on a production machine. It is potentially problematic.
I agree with you for production environments, but for development, test,
support (and pre-sales
Mark Woodward wrote:
If you require a policy, then YOU are free to choose the policy that
YOU need. You're not forced to accept other peoples' policies that
may conflict with things in your environment.
The problem is that there is no mechanism through which one can implement policy
On Mon, Feb 27, 2006 at 09:39:59AM -0500, Mark Woodward wrote:
It isn't just an environment variable, it is a number of variables and a mechanism. Besides, profile, from an admin's perspective, is for managing users, not databases.
Sure, you need to control the user, group, placement
On Mon, Feb 27, 2006 at 11:48:50AM -0500, Mark Woodward wrote:
Well, I'm sure that one could use debian's solution, but that's the problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide the mechanisms? Will debian support FreeBSD? NetBSD? Is it in the PostgreSQL admin manual
Mark,
Well, I'm sure that one could use debian's solution, but that's the
problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide
the mechanisms? Will debian support FreeBSD? NetBSD? Is it in the
PostgreSQL admin manual?
We are talking about a feature, like pg_service.conf,
Mark Woodward [EMAIL PROTECTED] writes:
My frustration level often kills any desire to contribute to open
source.
Sometimes, I think that open source is doomed. The various projects I
track and use are very frustrating, they remind me of dysfunctional
engineering departments in huge
Mark Woodward wrote:
Mark,
Well, I'm sure that one could use debian's solution, but that's the
problem, it isn't PostgreSQL's solution. Shouldn't PostgreSQL provide
the mechanisms? Will debian support FreeBSD? NetBSD? Is it in the
PostgreSQL admin manual?
We are talking about
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Mark
Woodward) belched out:
Mark Woodward wrote:
Like I have repeated a number of times, sometimes there is more than one database cluster on a machine. The proposed pg_clusters.conf could look like this:
pg_clusters.conf
Mark Woodward wrote:
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Mark
Woodward) belched out:
I'm not keen on the Windows .ini file style sectioning; that makes it
look like a mix between a shell script and something else. It should
be one or the other. It probably should
Sorry to interrupt, but I have had the opportunity to have to work with MySQL. This nice little gem is packed away in the reference for mysql_use_result().
On the other hand, you shouldn't use mysql_use_result() if you are doing
a lot of processing for each row on the client side, or if the
Jim C. Nasby wrote:
On Wed, May 17, 2006 at 09:35:34PM -0400, John DeSoi wrote:
On May 17, 2006, at 8:08 PM, Mark Woodward wrote:
What is the best way to go about creating a plug and play PostgreSQL replacement for MySQL? I think the biggest problem getting PostgreSQL accepted is that so
Jim C. Nasby wrote:
Maybe a compatibility layer isn't worth doing, but I certainly think it's very much worthwhile for the community to do everything possible to encourage migration from MySQL. We should be able to lay claim to the most advanced and most popular OSS database.
We'll do that by
Andrew Dunstan [EMAIL PROTECTED] writes:
Mark Woodward wrote:
Again, there is so much code for MySQL, a MySQL emulation layer, MEL for short, could allow plug and play compatibility for open source, and closed source, applications that otherwise would force a PostgreSQL user to hold his
Actually, I think it's a lot more accurate to compare PostgreSQL and
MySQL as FreeBSD vs Linux from about 5 years ago. Back then FreeBSD was
clearly superior from a technology standpoint, and clearly playing
second-fiddle when it came to users. And now, Linux is actually
technically superior
Mark Woodward wrote:
I have a side project that needs to intelligently know if two strings are contextually similar. Think about how CDDB information is collected and sorted. It isn't perfect, but there should be enough information to be usable.
Think about this:
pink floyd - dark side
I have a side project that needs to intelligently know if two strings
are contextually similar.
The examples you gave seem heavy on word order and whitespace consideration, before applying any algorithms. Here's a quick perl version that
What I was hoping someone had was a function that could find the substring runs in something less than a strlen1*strlen2 number of operations, and a numerically sane way of representing the similarity or difference.
Actually, it is more like strlen1*strlen2*N, where N is the number of valid
My question is whether psql using libreadline.so has to be GPL, meaning the psql source has to be included in a binary distribution.
If I understand what I have been told by lawyers, here's what using a GPL, and NOT LGPL, library means:
According to RMS, the definition of a derivative work is
On Fri, May 19, 2006 at 07:04:47PM -0400, Bruce Momjian wrote:
libreadline is not a problem because you can distribute postgresql
compiled with readline and comply with all licences involved
simultaneously. It doesn't work with openssl because the licence
requires things that are