On Wed, 15 Apr 2009 15:32:42 +0100, Shug Boabby wrote:
I simplified my problem a little too much and now I'm stuck trying to
use cumulative_sum(). My schema has not only columns A and B but also a C:
 A | B | C
---+---+---
 1 | 0 | 1
 2 | 1 | 1
 3 | 0 | 1
 4 | 2 | 1
 5 | 1 | 1
 1 | 0 | 2
 2 | 1 | 2
 3 | 0 | 2
 4 | 2 | 2
 5 | 1 | 2
and I want to be able
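If cumulative_sum() here means a running total of B per value of C, a window function does this directly on 8.4+. A minimal sketch, assuming the table is named t:

```sql
-- Running total of B within each group C, ordered by A.
-- "t" is a hypothetical table name for the schema shown above.
SELECT a, b, c,
       sum(b) OVER (PARTITION BY c ORDER BY a) AS cumulative_b
FROM t
ORDER BY c, a;
```

On 8.3, which lacks window functions, a correlated subquery such as `(SELECT sum(b) FROM t t2 WHERE t2.c = t.c AND t2.a <= t.a)` gives the same result, at higher cost.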
Joshua D. Drake wrote:
Question 1 is wrong, because Npgsql is not a commercial .NET data provider.
That's the main advantage: it is open source.
This is actually a misconception. Open Source doesn't disqualify it as
commercial. It disqualifies it as proprietary. I can make money
providing
On 16/04/2009 01:59, mrLami wrote:
Find server log message below
http://www.nabble.com/file/p23069775/pglog.txt pglog.txt
Please paste log extracts, error messages, etc, into the email itself
rather than just links to them: it makes the thread easier for others to
follow, and in particular
Grzegorz Jaśkiewicz wrote:
On Wed, Apr 15, 2009 at 1:25 PM, Simon Riggs si...@2ndquadrant.com wrote:
The cost of the query seems accurate, so the absence of
attachment_isins_attachment_idx on the 8.3 plan looks to be the reason.
There's no way it would choose to scan 8115133 rows on the pkey
On Thu, Apr 16, 2009 at 01:31:45PM +0200, Christian Schröder wrote:
Stupid question: Do I have to analyze again or perform a reindex after
adding the index?
No, it's a regression in PG's handling of outer joins---it used to
realise that this was a possible optimisation, but now it doesn't.
Hi all,
when i was executing this command : osm2pgsql --slim -v -d gis /home/
ahmed/GreatLakes.osm.gz
i had obtained this error :
failed: ERREUR: VACUUM ne peut pas être exécuté à partir d'une
fonction ou d'une chaîne contenant plusieurs commandes
(VACUUM cannot be executed from a function or a multi-command string)
In response to ahmed soua :
Hi all,
when i was executing this command : osm2pgsql --slim -v -d gis /home/
ahmed/GreatLakes.osm.gz
i had obtained this error :
failed: ERREUR: VACUUM ne peut pas être exécuté à partir d'une
fonction ou d'une chaîne contenant plusieurs commandes
(VACUUM cannot be executed from a function or a multi-command string)
On Thursday 16 April 2009 2:04:19 am SongDongyan wrote:
Hi,
Thank you for your suggestion. I set listen_addresses = '*', and I want
to connect to the local server.
After I changed listen_addresses from localhost to *, the error message
was reduced to one line:
Thank you all for your help. I have finally updated to version 8.3.7 and
was able to get it working with OpenLDAP using the following directives, as you
suggested:
host all all 202.18.10.0/24 ldap
ldap://202.18.10.1:389/ou=People,dc=example,dc=com;uid=;,ou=People,dc=example,dc=com;
Hi,
why does the statement take so long? The column 'lieferant' is indexed, but
a sequential scan is used.
foo_egs_foo=# explain analyze SELECT DISTINCT foo_abc_abc.lieferant FROM
foo_abc_abc;
QUERY PLAN
Wasn't that improved in 8.4, and before that in 8.3?
There are still some funny things with DISTINCT / GROUP BY.
For instance, try
select count(distinct foo) from bar; vs
select count(1) from (select distinct foo from bar) f;
:)
I am not sure pg was able to use an index for that in 8.2.
--
Thomas Guettler h...@tbz-pariv.de writes:
why does the statement take so long? The column 'lieferant' is indexed, but
a sequential scan is used.
It might have something to do with the fact that the planner's idea of
the size of the table is off by a factor of more than 100:
- Seq
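When the planner's row estimate is off by a factor of 100, the usual first step is to refresh the table's statistics and re-check the plan. A sketch, using the table name from the post:

```sql
-- Refresh the planner's statistics for this table, then re-examine the plan.
ANALYZE foo_abc_abc;

EXPLAIN ANALYZE
SELECT DISTINCT foo_abc_abc.lieferant FROM foo_abc_abc;
```

If autovacuum is disabled or the table was recently bulk-loaded, stale statistics like this are common.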
On Thu, Apr 16, 2009 at 11:29:25AM -0400, Tom Lane wrote:
, a full table indexscan isn't going to be particularly fast in
any case; it's often the case that seqscan-and-sort is the right
decision.
Is PG capable of skipping over duplicate values using an index?
For example, if I've got a table
Sam Mason s...@samason.me.uk writes:
On Thu, Apr 16, 2009 at 11:29:25AM -0400, Tom Lane wrote:
, a full table indexscan isn't going to be particularly fast in
any case; it's often the case that seqscan-and-sort is the right
decision.
Is PG capable of skipping over duplicate values using an
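PostgreSQL (as of 8.3/8.4) has no built-in index skip scan, but the effect can be emulated on 8.4+ with a recursive CTE that jumps from one distinct value to the next via the index. A sketch, assuming a table bar(foo) with an index on foo:

```sql
-- Emulated "loose index scan": fetch each distinct foo with one index
-- probe per distinct value instead of scanning every row.
-- Requires 8.4+ (WITH RECURSIVE); "bar" and "foo" are placeholder names.
WITH RECURSIVE d AS (
    (SELECT foo FROM bar ORDER BY foo LIMIT 1)
  UNION ALL
    SELECT (SELECT b.foo FROM bar b
            WHERE b.foo > d.foo
            ORDER BY b.foo LIMIT 1)
    FROM d
    WHERE d.foo IS NOT NULL
)
SELECT foo FROM d WHERE foo IS NOT NULL;
```

This pays off when the table has many rows but few distinct values of foo.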
Hi all, good morning.
How do I look up triggers from psql?
Thanks!
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
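A couple of ways to inspect triggers from psql; a sketch, where "mytable" is a hypothetical table name:

```sql
-- In psql, \d mytable lists the table's triggers at the bottom of the output.
-- To query them directly from the catalog (tgisinternal, available in 8.3+,
-- filters out the internal triggers that implement foreign keys):
SELECT tgname,
       tgrelid::regclass AS table_name
FROM pg_trigger
WHERE NOT tgisinternal;
```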
On Thu, Apr 16, 2009 at 11:22:46AM +0200, Albe Laurenz wrote:
Joshua D. Drake wrote:
Question 1 is wrong, because Npgsql is not a commercial .NET data provider.
That's the main advantage: it is open source.
This is actually a misconception. Open Source doesn't disqualify it as
commercial.
Hi,
Thank you for all the suggestions. I fixed this problem by changing the owner of
the /tmp/.s.PGSQL.5432.lock file:
# chown postgres /tmp/.s.PGSQL.5432.lock
and it is done.
Best wishes,
Dongyan
Adrian Klaver wrote:
On Thursday 16 April 2009 2:04:19 am SongDongyan wrote:
Hi,
Thank you for
On Apr 15, 5:50 pm, ray ray.jos...@cdicorp.com wrote:
On Apr 15, 4:17 pm, ray ray.jos...@cdicorp.com wrote:
I am trying to re-install PostgreSQL. I have chosen a different
process. I am installing it as a service. After much time at the
'Starting Services' window, it says that I have
What are folks doing to protect sensitive data in their databases?
We're running on the assumption that the _really_ sensitive data
is too sensitive for us to just trust the front-end programs that
connect to it.
The decision coming down from on-high is that we need to encrypt
certain fields.
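One common building block for field-level encryption in PostgreSQL is the contrib pgcrypto module. A minimal sketch, assuming pgcrypto is installed and using a hypothetical employees table with a bytea column ssn_enc:

```sql
-- Encrypt on write with a passphrase supplied by the application
-- (table and column names here are hypothetical).
UPDATE employees
SET ssn_enc = pgp_sym_encrypt(ssn, 'user-supplied-passphrase');

-- Decrypt on read; fails without the correct passphrase.
SELECT pgp_sym_decrypt(ssn_enc, 'user-supplied-passphrase')
FROM employees;
```

Note that pgp_sym_encrypt salts its output, so the same plaintext encrypts to different ciphertexts each time, which matters for the indexing discussion later in this thread.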
Bill Moran wrote on 16.04.2009 21:40:
The goal here is that if we're going to encrypt the data, it should
be encrypted in such a way that if an attacker gets ahold of a dump
of the database, they still can't access the data without the
passphrases of the individuals who entered the data.
I'm
Bill Moran wrote:
What are folks doing to protect sensitive data in their databases?
I would probably do my encryption in the application layer, and only
encrypt the sensitive fields. fields used as indexes probably should
not be encrypted, unless the only index operation is EQ/NE, then
In response to Thomas Kellerer spam_ea...@gmx.net:
Bill Moran wrote on 16.04.2009 21:40:
The goal here is that if we're going to encrypt the data, it should
be encrypted in such a way that if an attacker gets ahold of a dump
of the database, they still can't access the data without the
I need to normalize a column for search purposes by stripping all
non-alphanumeric characters:
UPDATE my-table SET id_stripped = ??? id;
I have been playing with regexp_replace( id, ,'');
UPDATE my-table
SET id_stripped = regexp_replace( id, various,'');id;
Without much luck.
On Apr 16, 2009, at 12:40 PM, Bill Moran wrote:
(This is the traditional you're asking the wrong question response).
What are folks doing to protect sensitive data in their databases?
I don't think that's a useful way to look at it. Protecting sensitive
data in the entire system, where the
I wrote:
=?ISO-8859-1?Q?Christian_Schr=F6der?= c...@deriva.de writes:
This is the query:
select isin from ts_frontend.attachment_isins full OUTER JOIN
ts_frontend.rec_isins using (attachment,isin) WHERE attachment=2698120
GROUP BY isin limit 1000;
Hmm. It seems 8.3 is failing to push
On Thu, April 16, 2009 13:20, Bill Moran wrote:
In response to Thomas Kellerer spam_ea...@gmx.net:
Bill Moran wrote on 16.04.2009 21:40:
The goal here is that if we're going to encrypt the data, it should
be encrypted in such a way that if an attacker gets ahold of a dump
of the
Bill Moran wrote on 16.04.2009 22:20:
I'm by far not an expert, but my naive attempt would be to store the
database files on an encrypted filesystem.
That was the first suggestion when we started brainstorming ideas.
Unfortunately, it fails to protect us from the most likely attack
vector:
Kenneth Tilton kentil...@gmail.com writes:
I need to normalize a column for search purposes by stripping all
non-alphanumeric characters:
Without much luck. Can this even be done with regex, or should I just
write a custom sql function?
Perhaps along the lines of
regression=# select
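The example above is cut off in the archive; the general shape of such a query (my sketch, not necessarily the elided one) is a negated character class with the global flag:

```sql
-- Strip everything except ASCII letters and digits.
-- The 'g' flag replaces all matches, not just the first.
SELECT regexp_replace('AB-12.cd/34', '[^a-zA-Z0-9]', '', 'g');
-- → AB12cd34
```

The corresponding UPDATE would be `SET id_stripped = regexp_replace(id, '[^a-zA-Z0-9]', '', 'g')`.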
In response to Thomas Kellerer spam_ea...@gmx.net:
Bill Moran wrote on 16.04.2009 22:20:
I'm by far not an expert, but my naive attempt would be to store the
database files on an encrypted filesystem.
That was the first suggestion when we started brainstorming ideas.
In response to Steve Atkins st...@blighty.com:
On Apr 16, 2009, at 12:40 PM, Bill Moran wrote:
(This is the traditional you're asking the wrong question response).
What are folks doing to protect sensitive data in their databases?
I don't think that's a useful way to look at it.
Bill Moran wrote on 16.04.2009 23:06:
which only talks about someone getting hold of the contents of the server's
harddisk.
Not really. You're making an assumption that a pg_dump can only be
run on the server itself.
Right, I forgot that.
But then it's similar to the situation where the
Couldn't you just add a PGP based column (or similar encryption
protocol) for authentication? This would protect you against injection
attacks, would it not?
You could also use PGP or similar for key management if I'm not
mistaken.
-Will
-Original Message-
In response to Thomas
Bill Moran wrote:
The problem comes when the company head wants to search through the
database to find out which employee has a specific SSN. He should
be able to do so, since he has access to everything, but the logistics of
doing so in a reasonable amount of time are rather complex and very
If the purpose of encrypting the data is just to keep prying eyes from
discerning what that data is, then a simple encryption can be coded. Something
like adding 128 or 256, depending on the character set, to each
chr(value) for each of the characters in the string should work just fine.
i have table, like so:
group.group_name (varchar) | group.group_array (varchar[])
---------------------------+------------------------------
west coast                 | {CA,WA}
east coast                 | {NY,MA}
i can do this:
select group_name from group where 'CA' = any(array['CA','WA']);
but i need to select group_name where state_abbreviation is in
group_array, something
zach cruise zachc1...@gmail.com writes:
but i need to select group_name where state_abbreviation is in
group_array, something like:
select group_name from group where 'CA' = any(group_array);
Yeah? What's the problem?
regression=# create table g (group_name text, group_array text[]);
CREATE
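The demo is cut off in the archive; filling out the same idea as a self-contained sketch (table name g as in the reply, since "group" is a reserved word and would need quoting):

```sql
-- Reproduce the example data, then match on array membership.
CREATE TABLE g (group_name text, group_array text[]);

INSERT INTO g VALUES ('west coast', '{CA,WA}'),
                     ('east coast', '{NY,MA}');

SELECT group_name FROM g WHERE 'CA' = ANY (group_array);
-- → west coast
```

The same `= ANY(column)` form works directly against the array column; no array[] literal is needed on the right-hand side.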
The LFNW is coming up next weekend.
Are there any Seattle/Bellingham-area folks attending the conference?
Could you spare a few hours to staff the booth? We're looking for people
to staff the booth all day Saturday (9:30-5pm) and Sunday (9:30-4pm).
You'll get a PostgreSQL T-shirt for your
On Thu Apr 16 05:06 PM, Bill Moran wrote:
The problem comes when the company head wants to search through the
database to find out which employee has a specific SSN. He should be
able to do so, since he has access to everything, but the logistics of
doing so in a reasonable amount of
I'm attempting to remove constraints (primary/foreign keys) ahead of a
massive import. Once the data has been imported, I would like to
regenerate the constraints I removed. This process is recommended in
PostgreSQL's documentation and incidentally would make import much
more tenable.
However,
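The drop-then-recreate cycle can be scripted with plain DDL; a sketch with hypothetical table and constraint names:

```sql
-- Before the bulk load: drop FK first, then the PK it depends on
-- (all names here are placeholders).
ALTER TABLE child  DROP CONSTRAINT child_parent_fk;
ALTER TABLE parent DROP CONSTRAINT parent_pkey;

-- ... bulk COPY / INSERT here ...

-- After the load: recreate them; this rescans and revalidates the data.
ALTER TABLE parent ADD CONSTRAINT parent_pkey PRIMARY KEY (id);
ALTER TABLE child  ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id);
```

The exact existing definitions can be captured before dropping with `pg_get_constraintdef()` against pg_constraint, so the recreate script need not be written by hand.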
On Thu, Apr 16, 2009 at 02:12, Peeyush peeyush_j...@persistent.co.in wrote:
Sorry, missed the attachment.
No problem.
Thanks for the document with feature list of dotConnect.
I'll create one for Npgsql, which can give more information. Npgsql has
been missing a list like that for a long time.
Eric Soroos wrote:
an index on the encrypted SSN field would do this just fine. if
authorized person needs to find the record with a specific SSN, they
encrypt that SSN and then look up the ciphertext in the database...
done.
This will only work for electronic-code-book (ECB) style ciphers,
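An alternative that supports equality search without a deterministic cipher is to store a keyed hash alongside the (salted) ciphertext; a sketch using pgcrypto's hmac, with hypothetical table and column names:

```sql
-- Equality lookups go against a keyed digest, not the ciphertext itself;
-- pgp_sym_encrypt output is salted and cannot be matched directly.
-- "employees" and "ssn_hmac" are placeholder names.
UPDATE employees
SET ssn_hmac = hmac(ssn, 'server-side-hmac-key', 'sha256');

CREATE INDEX employees_ssn_hmac_idx ON employees (ssn_hmac);

-- An authorized search recomputes the digest and uses the index:
SELECT * FROM employees
WHERE ssn_hmac = hmac('123-45-6789', 'server-side-hmac-key', 'sha256');
```

This still leaks equality of values (identical SSNs share a digest), and the HMAC key becomes another secret to protect, so it fits the threat model only partially.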
Thomas Kellerer spam_ea...@gmx.net wrote:
Bill Moran wrote on 16.04.2009 23:06:
which only talks about someone getting hold of the contents of the
server's
harddisk.
Not really. You're making an assumption that a pg_dump can only be
run on the server itself.
Right, I forgot
That's where we're having difficulty. Our requirements are that the
data must be strongly protected, but the appropriate people must be
able to do (often complex) searches on it that complete in record
time.
an index on the encrypted SSN field would do this just fine. if
authorized
Will Rutherdale (rutherw) ruth...@cisco.com wrote:
Couldn't you just add a PGP based column (or similar encryption
protocol) for authentication? This would protect you against injection
attacks, would it not?
You could also use PGP or similar for key management if I'm not
mistaken.
Michael Black michaelblack75...@hotmail.com wrote:
If the purpose of encryption is for financial or medical data transmission
security, or something of a higher order, you may want to implement a
stronger type of security such as SSL or PGP or some other type of
public/private key process.
John R Pierce pie...@hogranch.com wrote:
Eric Soroos wrote:
an index on the encrypted SSN field would do this just fine. if
authorized person needs to find the record with a specific SSN, they
encrypt that SSN and then look up the ciphertext in the database...
done.
This will
Jonathan Bond-Caron jbo...@openmv.com wrote:
On Thu Apr 16 05:06 PM, Bill Moran wrote:
The problem comes when the company head wants to search through the
database to find out which employee has a specific SSN. He should be
able to do so, since he has access to everything, but the
I want to be able to search a list of articles for title words as well as
author names. I understand how to do the title words with the full text
searching. But I'm not sure the best strategy for the names. The full text
parser parses the names giving undesirable results.
For example,
select
Hi
I have a table A whose pk is referenced by 5 other tables as a foreign
key.
I have put all those table references ON DELETE CASCADE.
However, for some reason, when I tried to delete one of the rows in table A,
out of the 5 tables, 1 of them didn't actually get its rows deleted,
but the transaction is
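One way to verify which of the five foreign keys actually carries CASCADE is to inspect the pg_constraint catalog; a sketch where "table_a" stands in for the real table name:

```sql
-- List the FK constraints referencing table_a and their delete rules:
-- 'a' = NO ACTION, 'r' = RESTRICT, 'c' = CASCADE,
-- 'n' = SET NULL, 'd' = SET DEFAULT.
SELECT conrelid::regclass AS referencing_table,
       conname,
       confdeltype
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'table_a'::regclass;
```

A constraint created before the ON DELETE CASCADE change was applied would show up here with confdeltype 'a'.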
Hello ,
Does anyone use PostgreSQL(windows version)+Hibernate with middlegen to
generate *.hbm.xml?
I used to use OracleXE as my application database. For some reason, I will
try to use PostgreSQL instead.
I have installed PostgreSQL 8.3.7, Windows version.
But I found middlegen isn't able to generate
rolandpeng wrote:
How do you resolve this problem? Or, if you use PostgreSQL + Hibernate 3, how do
you maintain your *.hbm.xml files and entities?
I use Hibernate 3 with PostgreSQL via JPA (Hibernate EntityManager). I
maintain my entity definitions by hand.
I find generated solutions to generally be
Yes, I totally agree with these comments.
But as you mentioned, I'd like to build all the hbm.xml files on the first run.
I have built more than 40 tables with complicated relations among them,
so it would be very convenient if middlegen could help me complete the
template work at first generation.
Rick Schumeyer wrote:
I want to be able to search a list of articles for title words as well
as author names I'm not sure the best strategy for the names. The
full text parser parses the names giving undesirable results.
For example,
select to_tsvector('claude Jones');
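The undesirable parsing usually comes from the stemming and stopword dictionaries in the default text search configuration; for names, the built-in 'simple' configuration just lowercases tokens without stemming. A sketch:

```sql
-- The 'simple' configuration keeps name tokens intact (only lowercased),
-- avoiding stemming artifacts from the default language configuration.
SELECT to_tsvector('simple', 'Claude Jones');
-- → 'claude':1 'jones':2

-- Matching queries should use the same configuration:
SELECT to_tsvector('simple', 'Claude Jones')
       @@ plainto_tsquery('simple', 'jones');
-- → t
```

A separate tsvector column (or expression index) using 'simple' for the author field can then coexist with a stemmed one for titles.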
rolandpeng wrote:
Yes, I totally agree with these comments.
But as you mentioned, I'd like to build all the hbm.xml files on the first run.
I have built more than 40 tables with complicated relations among them,
so it would be very convenient if middlegen could help me complete the
template work at first generation would
Hello
I am using pgplus 8.3 on Linux (CentOS) and am trying to back up the database
from a Windows machine using pgAdmin 1.8.4, but I get an error while backing
up. It says the versions are mismatched and to use the -i option if I want to proceed.
If anybody knows what's going on, please email me or reply to this post.
Rick Schumeyer rschume...@gmail.com writes:
I want to be able to search a list of articles for title words as well as
author names. I understand how to do the title words with the full text
searching. But I'm not sure the best strategy for the names. The full text
parser parses the names