Re: [GENERAL] delete a file everytime pg server starts/crashes

2017-10-06 Thread vinny
That seems like an odd requirement, and I don't think PostgreSQL can do 
it itself:
if PostgreSQL crashes hard, the process that is supposed to 
write/remove that file crashes along with it.


The simplest way would be to write a cronjob that connects to the 
database, runs a simple query to see if things are OK,
and acts accordingly. But cronjobs cannot be executed more than once a 
minute, so there would be a considerable delay.


If you need faster responses you may need to write a custom daemon, or 
use something like supervisord to manage a long-polling script.


I get the feeling there must be a more elegant solution to whatever your 
problem is though...


On 2017-10-05 16:04, athinivas wrote:

Hi,

I'm having a requirement to delete a file in system whenever pg server 
is

started/crashed. Any idea?

Thanks,
Athi






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multicolumn Index on OR conditions

2017-09-19 Thread vinny

On 2017-09-19 10:08, Job wrote:

Hi,

within a query with two or more conditions combined with "OR", for example:

"where a = 2 or b < 3"

could a multi-column index (a,b) be useful to speed up the query, even
though the two conditions are combined with "OR" and not "AND"?

Thank you!
F


Having any kind of index is usually better than none, because it may prevent 
a seqscan,

but an OR probably works faster if you have a separate index on b.

But, EXPLAIN can show you what the database actually does, and give some 
indication

of why it thinks it should do it that way.
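
For illustration (table and index names invented), two single-column 
indexes give the planner the option of combining them:

```sql
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- EXPLAIN shows whether the planner combines them; for an OR
-- condition you will often see a plan shaped something like:
EXPLAIN SELECT * FROM t WHERE a = 2 OR b < 3;
--   Bitmap Heap Scan on t
--     ->  BitmapOr
--           ->  Bitmap Index Scan on t_a_idx
--           ->  Bitmap Index Scan on t_b_idx
```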

regards, Vincent.




Re: [GENERAL] looking for a globally unique row ID

2017-09-14 Thread vinny

On 2017-09-14 15:06, Rafal Pietrak wrote:

W dniu 14.09.2017 o 10:57, George Neuner pisze:

On Thu, 14 Sep 2017 09:45:59 +0200, Rafal Pietrak 
wrote:


Hello everybody,

Can anybody help me find a way to implement an ID which:

1. guarantees being unique across multiple tables.

2. guarantees its uniqueness not only during INSERT, but also during 
the

lifetime of the database/application (i.e. during future UPDATES).

3. guarantees persistence of value across database 
backup/restore/upgrade.


Seeing the answers, I feel I should probably have added:

4. not necessarily guaranteeing "planetary-wide" uniqueness. Meaning: a
backup/restore may duplicate those IDs on a second
instance of the database.



UUID is the obvious choice, but it does take a lot of space.


I was hoping for something like a database-scoped "primary key" - which
in particular does not need to be anything big provided the dataset
is small.

As far as I can tell, a UUID is an ID that is "simple/fast" to generate,
and has an "extremely low" probability of collisions.

Instead I was looking for mechanisms/program-SQL-idioms which don't
have to be particularly efficient, but where, once an ID is generated, no
matter what, its uniqueness is assured by the database. Including UPDATEs, i.e.
assignment of a completely new ID for a particular row.

But I understand I may quit searching - there is nothing "so simple".



If it is only one database, on one server, then couldn't you just use 
one sequence?
If you prefix the value with some identifier of the current table, then 
you cannot get duplicates
across tables, even if you reset the sequence.
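
A sketch of that idea (table names and prefixes invented):

```sql
-- One shared sequence for the whole database.
CREATE SEQUENCE global_id_seq;

-- Each table prefixes the sequence value with its own tag,
-- so IDs from different tables can never collide, even if
-- the sequence is reset, as long as the prefixes differ.
CREATE TABLE customers (
    id text PRIMARY KEY DEFAULT 'cust-' || nextval('global_id_seq')
);
CREATE TABLE orders (
    id text PRIMARY KEY DEFAULT 'ord-' || nextval('global_id_seq')
);
```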




Re: [GENERAL] optimize pg_tables query ( text vs varchar ) ...why ?

2017-08-16 Thread vinny

On 2017-08-16 14:41, gmb wrote:

Hi
For DDL purposes we make significant use of pg_catalog tables/views.
We're investigating performance issues in a typical function:

CREATE FUNCTION tableexists( s TEXT , t TEXT ) returns boolean as
$$
  SELECT count(tablename) = 1 FROM pg_tables WHERE schemaname=$1 and
tablename=$2;
$$
language sql

When we changed the params of the above function to VARCHAR (instead of TEXT),
performance improved dramatically.
We then changed params to NAME ( as per pg_tables column type ) , but 
the

performance stayed more or less the same.

Can somebody explain this to me? Is there a better way to handle
these?
(This will be implemented on most objects in the catalog, e.g. columns,
sequences, functions, etc.)

Regards
gmb






A wild stab in the dark: typecasting?
pg_tables exposes the 'name' type, not TEXT, so some sort of transformation 
has to be done, and that takes time.
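
One way to test that guess is to cast the parameters to name explicitly 
and compare plans; a sketch:

```sql
-- pg_tables exposes schemaname/tablename as type "name";
-- casting the text parameters up front avoids a per-row
-- conversion when they are compared.
CREATE FUNCTION tableexists(s text, t text) RETURNS boolean AS
$$
  SELECT count(tablename) = 1
    FROM pg_tables
   WHERE schemaname = $1::name
     AND tablename  = $2::name;
$$ LANGUAGE sql STABLE;
```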





Re: [GENERAL] Question about loading up a table

2017-07-31 Thread vinny

On 2017-07-31 11:02, Alex Samad wrote:

Hi

I am using pg_dump | psql to transfer data from my old 9.2 psql into a
9.6 psql.

The new DB server is setup as master replicating to a hot standby
server.

What I have noticed is that the rows don't get replicated over until
the copy from stdin is finished...  hard to test when you have M+ lines
of rows.


If you are "just testing" then you could use the COPY command 
https://www.postgresql.org/docs/9.2/static/sql-copy.html

to generate a smaller dataset.
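
For example, something like this could produce a 100k-row sample (table 
name and file path invented; server-side COPY reads and writes files 
with the postgres server's own permissions):

```sql
-- On the old 9.2 server: export a subset instead of the full table.
COPY (SELECT * FROM big_table LIMIT 100000)
  TO '/tmp/big_table_sample.csv' WITH (FORMAT csv);

-- On the new server: load the sample for testing replication.
COPY big_table FROM '/tmp/big_table_sample.csv' WITH (FORMAT csv);
```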




Is there a way to tell the master to replicate earlier


I highly doubt it, because the master cannot know what to replicate 
until
your transaction is ended with a COMMIT. If you end with ROLLBACK,
or your last query is DELETE FROM <your_table>;
then there isn't even anything to replicate at all...



or is there a
way to get pg_dump to bundle into say 100K rows at a time ?


I'm not aware of such a feature; it would be quite tricky because
of dependencies between records. You cannot simply dump the first 100k 
rows
from table A and the first 100k from table B, because row #9 from table A
may have a relation to row #100,001 from table B.




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-28 Thread vinny

On 2017-07-28 06:31, Tim Uckun wrote:

I think it's funny that after all these years pgadmin3 is still the
most comprehensive GUI for postgres.


Have you looked at EMS SQL Manager? I don't remember PgAdmin having 
anywhere near the features that it has :-P



Even though it's prone to
crashing on my machine and I have paid for datagrip I still reach for
it first.




It depends entirely on your personal preference, not so much on the 
features of the tool;
there are whole subcultures in the IT world who swear by VIM. I tend to 
swear *at* VIM.


But in the end it's personal preference and requirements that decide 
which is the best tool.
I use DbSchema because of how quickly and visually I can create tables, 
but I use DataGrip to execute
queries and to actually manage a database: create functions, views, etc.

Most tools have free preview licences, so download them, try them out, and see 
what feels good to you.
Just remember that a tool is not a substitute for knowledge; knowing 
where to click in a GUI
is not the same as knowing how to maintain a database.




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-27 Thread vinny

On 2017-07-27 00:41, Tiffany Thang wrote:

Hi,
I'm new to PostgreSQL. I'm looking for a developer tool that works
similarly to TOAD or SQL Developer for Oracle/MySQL which would allow
me to view and make DDL changes to database objects and create data
models. It would be a plus if I can use the same tool to perform some
database administration tasks.

So far, I've found TOra and pgAdmin 4. Are there any other popular GUI
tools?

Thanks in advance.


There is also DbSchema, for getting a good overview of your tables.
I use JetBrains DataGrip for maintenance etc. I find that using a GUI 
for things like creating functions is nice,
but you never really learn what's going on, so you are in trouble when 
you have to do something and your GUI is not available.





Re: [GENERAL] Row based permissions: at DB or at Application level?

2017-07-27 Thread vinny

On 2017-07-27 10:27, Thomas Güttler wrote:

Am 25.07.2017 um 12:59 schrieb vinny:

On 2017-07-25 11:40, Thomas Güttler wrote:

I would like to reduce the "ifing and elsing" in my python code (less
conditions, less bugs, more SQL, more performance)







When I look at an example policy from the manual:

CREATE POLICY fp_u ON information FOR UPDATE
   USING (group_id <= (SELECT group_id FROM users WHERE user_name = 
current_user));


I'm not sure if this is any less bug-sensitive than an IF in Python...


Somehow I trust set operations more then "if" and "else" in a
programming language.


I understand the feeling, but realistically, you're doing exactly the 
same thing, just in a different syntax.




And don't forget you have to interpret any error-response from the 
database into

something that Django can make understandable to the end-user.


But maybe I  misunderstood what you mean with "error-response from the
database".


Indeed you did :-)

row-level security will cause the database to start returning 
permission-denied messages,

for example:
(taken from the manual)

postgres=> update passwd set shell = '/bin/xx';
ERROR:  new row violates WITH CHECK OPTION for "passwd"

Your application will have to be able to translate that error into 
something that the user can understand.
In this case it should be something like "Sorry, this shell is not 
allowed".





My current concern: I want a SELECT statement which returns all rows a
user is allowed to see.


Sure, but the permissions in your application are not just "John 
can see rows 1-4 and Pete can see rows that have isVisible=True".
In a forum you may have a rule that says that posting new topics is only 
allowed if you have posted more than ten replies,
the forum allows new posts, and the user is not banned. So the 
row-level permission has to check the user and the forum
to decide what to do, and those rulings cannot be done using row-level 
security alone, so you will have to write pl/pgsql functions
that do those checks on those records, and, well, the whole row-level 
thing turns into a group of functions very quickly.
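
A hedged sketch of where that tends to end up (all table, column, and 
setting names here are invented for illustration):

```sql
-- The ruling logic ends up in a function...
CREATE FUNCTION can_post(p_user_id int, p_forum_id int) RETURNS boolean AS
$$
  SELECT NOT u.banned
     AND f.allows_new_posts
     AND (SELECT count(*) FROM replies r WHERE r.user_id = u.id) > 10
    FROM users u, forums f
   WHERE u.id = p_user_id
     AND f.id = p_forum_id;
$$ LANGUAGE sql STABLE;

-- ...and the row-level policy just calls it.
CREATE POLICY topics_insert ON topics FOR INSERT
  WITH CHECK (can_post(current_setting('app.user_id')::int, forum_id));
```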





This is brainstorming and I am just trying to widen my horizont.
Feedback welcome!


Ditto, I'd love to hear input from others!



Regards,
  Thomas Güttler






--
Thomas Guettler http://www.thomas-guettler.de/





Re: [GENERAL] Row based permissions: at DB or at Application level?

2017-07-25 Thread vinny

On 2017-07-25 11:40, Thomas Güttler wrote:

I would like to reduce the "ifing and elsing" in my python code (less
conditions, less bugs, more SQL, more performance)

Regards,
  Thomas Güttler



A quick brainstorm:

You could, probably...
but you'd have to create a separate database user for every Django user,
get Django to connect to the database as that user,
and set up policies for each of those users, for every use-case.

When I look at an example policy from the manual:

CREATE POLICY fp_u ON information FOR UPDATE
  USING (group_id <= (SELECT group_id FROM users WHERE user_name = 
current_user));


I'm not sure if this is any less bug-sensitive than an IF in Python...
And don't forget you have to interpret any error-response from the 
database into

something that Django can make understandable to the end-user.

I'm not saying row-level security is bad, far from it, but I doubt that 
using it
to replace Django's own security is going to magically make life much 
easier.





Re: [GENERAL] Is it possible to define a constraint based on the values in other rows in the current table?

2017-07-19 Thread vinny

On 2017-07-19 13:37, Glen Huang wrote:

Hi,

I'd like to enforce that in a transaction, after a couple inserts &
updates, a particular column has continuous values like 1, 2, 3, and
never any gaps. Is it possible to do?

I gave a concrete example here:
https://stackoverflow.com/questions/45187113 didn't get any answers
yet.

Am I looking at the wrong direction? Should such feature be
implemented with constraints?

Thanks.


If the value is only used for sorting then the exact value doesn't 
matter,
only that there are no duplicates. 4,5,6 orders the exact same way as 
1,2,3 or 500,540,615.


You are guaranteed to get gaps anyway when you remove a record.

Personally I'd sooner create a trigger that generates a new value on 
INSERT, and that
(if you *really* feel the need to) can fix gaps on DELETE and UPDATE.
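
A minimal sketch of such an INSERT trigger (the items table and position 
column are invented, and this naive max()+1 approach is not safe under 
concurrent inserts without extra locking):

```sql
-- Assign the next position automatically on INSERT;
-- gaps left by DELETEs can simply be ignored, since only
-- the relative order matters for sorting.
CREATE FUNCTION set_position() RETURNS trigger AS
$$
BEGIN
  SELECT coalesce(max(position), 0) + 1
    INTO NEW.position
    FROM items;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_position
  BEFORE INSERT ON items
  FOR EACH ROW EXECUTE PROCEDURE set_position();
```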




Re: [GENERAL] dump to pg

2017-06-01 Thread vinny

On 2017-05-31 16:43, Nicolas Paris wrote:

Hi,

I have dumps from oracle and microsoft sql server (no more details).
Is it possible to load them "directly" into postgres  (without
oracle/mssql license)?

dump -> csv -> postgres

or something ?

Thanks a lot


A very, *very* short trip to Google shows that "Intelligent Converters" 
has tools that can do both:


https://www.convert-in.com/ora2pgs.htm

There is a trial version with limited options, and the full version 
seems to be priced at $49 to $100.


Disclaimer: I have no experience with this company or its products; I'm 
not affiliated, I just googled
and copy/pasted the result.




Re: [GENERAL] Error that shouldn't happen?

2017-05-19 Thread vinny

On 2017-05-18 21:48, Rob Brucks wrote:

Hello Everyone,

I am unable to figure out how the trigger was able to successfully
create the table, but then fail creating the index.  I would have
expected one thread to "win" and create both the table and index, but
other threads would fail when creating the table… but NOT when
creating the index.


First, I agree wholeheartedly with the others' suggestions to "not do 
this".
Create a cronjob or whatever that prepares the required tables before 
you need them; empty tables are cheap.


Second: IF EXISTS only tells you whether an object exists, not whether it 
is ready for use.
So what happens when a process is in the middle of creating that object? 
Does IF EXISTS report it as existing or not?



What you need (accepting that this whole trigger-based approach is 
probably not the best option)
is a proper locking mechanism; a "thundering herd" protection. The first 
time the trigger fires
it should take a lock (an advisory lock, for example) that subsequent calls 
to the same trigger
can look at to see if the table they need is being created at that time, 
so they will skip the create commands
and *WAIT* for the first process to complete before using the table.

That *WAIT* is important, and also something you probably don't want, 
especially if you have a busy database.
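
A sketch of that locking idea using an advisory lock (the key 12345 and 
all object names are invented; pg_advisory_xact_lock is released 
automatically when the transaction ends):

```sql
-- Inside the trigger (or wherever the table is created):
-- the first caller takes the lock and creates the objects;
-- every other caller blocks on this line and *waits* until
-- the first transaction commits, then finds the table ready.
SELECT pg_advisory_xact_lock(12345);

CREATE TABLE IF NOT EXISTS log_2017_05 (LIKE log_master INCLUDING ALL);
CREATE INDEX IF NOT EXISTS log_2017_05_ts_idx ON log_2017_05 (created_at);
```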





Re: [GENERAL] Python versus Other Languages using PostgreSQL

2017-05-09 Thread vinny

On 2017-05-09 11:26, Francisco Olarte wrote:

Paul:

On Tue, May 9, 2017 at 2:45 AM, Paul Hughes  wrote:
My question still remains though - why is it that all the largest 
web platforms that have used PostgreSQL *specifically* choose Python 
as their back-end language?


Do you have any data supporting that? AFAIK people tend to choose the
language first, database second, not the other way round, and many
times the platform language is nailed, but the db can be changed.


In fact, I don't think many companies/developers even choose a language
or database, but rather just use whatever they have experience in.



Why are Postgres and Python so married, in the same way that Node.js 
is largely married to MondogDB?


I do not think either of these is true.



Perhaps not in real-world applications, but the vast majority of 
tutorials etc. certainly
connect PHP to MySQL, Node to Mongo, and Python to PostgreSQL.




Re: [GENERAL] [OT] Help: stories of database security and privacy

2017-04-26 Thread vinny

On 2017-04-26 11:47, Lifepillar wrote:

On 12/04/2017 10:57, vinny wrote:

On 2017-04-12 09:09, Lifepillar wrote:

So, I am here to ask if you have
interesting/(in)famous stories to share on database security/privacy
"gone wrong" or "done right"(tm), possibly with technical details


One case that I remember from an ancient version of the book "hacking
exposed"
was about a MySQL server that was running under the root user. A badly
written
application allowed some SQL injection that let a hacker issue a 
SELECT

INTO OUTFILE
query that "selected" a bash script into the .login file of the root 
user,
and the next time the root user logged in, the script would create a 
new

superuser account
for the hacker.


After tweaking MySQL to be really insecure by unsetting
secure_file_priv, using GRANT FILE, etc., I am indeed able to write

MySQL used to be "really insecure"; I'm glad to see they have taken 
measures
to prevent this attack. (Now let's just hope that you cannot use SQL
to change those security settings :-)



Correct me if I am wrong, in PostgreSQL something similar can be
achieved using lo_export(), although you must connect as a superuser to
do that (while in MySQL you may grant file system access to any user).


Technically, yes, but you cannot supply a path as easily as in MySQL.

The moral of the story is not so much that MySQL is unsafe, but that 
attacks
can come from the most unexpected places. Even from things you did not 
even know
to be possible. Again: if something is not required to be possible,
then measures should be taken to make it impossible.





Re: [GENERAL] Not sure this should be asked here but...

2017-04-24 Thread vinny

On 2017-04-23 12:31, Ron Ben wrote:

A simple open-source forum system can be enough, something like phpBB
(example: warez-bb.org).

The installation of such a system is like 1 hour of work.

In my point of view something like Stack Overflow is the best, but I'm
not sure if it's open source.




Setting up a forum is not the problem.
The problem is that the value of the mailinglists is in the contributors 
that use them,
and they, for the most part, really do not want to stop using them.

This is why the original thread was about expanding the web-interface to 
the mailinglist:
that would allow the current mailinglist users to keep working the way 
they like to,
while opening the system up to the rest of the world in a more 
millennial-friendly way.



Did I just use the word "millennial"?

Sorry... I'll just go and be ashamed in a corner for a while...




Re: [GENERAL] Large data and slow queries

2017-04-19 Thread vinny

On 2017-04-19 13:25, Martijn Tonies (Upscene Productions) wrote:

Samuel, others,

Perhaps I'm missing something, but I'd be interested in the reasoning
behind this.

For column 'what', it seems you have no index on all values, only
indices with specific values for 'what'.

How does this speed up the search? Will PostgreSQL use those indices,
instead of using a generic index on 'what' and optionally other
columns?


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com



That's a "partial index", it only contains records that meet the 
requirements of the index definition.


https://www.postgresql.org/docs/9.5/static/indexes-partial.html

Basically: if you create an index on records where name = 'kees', then if 
your query contains "where name='kees'"
the planner can just load that index and know that the records in that 
index will not contain
any other names, saving the need to filter for name='kees'.
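
In SQL that looks roughly like this (table and column names invented):

```sql
-- The index stores only the rows where name = 'kees'.
CREATE INDEX idx_events_kees ON events (created_at)
  WHERE name = 'kees';

-- A query whose WHERE clause implies name = 'kees' can use it,
-- and no extra filtering on name is needed for the indexed rows:
SELECT *
  FROM events
 WHERE name = 'kees'
   AND created_at > now() - interval '1 day';
```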




Re: [GENERAL] Large data and slow queries

2017-04-19 Thread vinny

On 2017-04-19 09:48, John R Pierce wrote:

On 4/19/2017 12:31 AM, vinny wrote:
Given the number of records, my first thought was either partitioning 
or partial-indexes.

The fewer rows are in the index, the quicker it will be to check,
and it's not a lot of work to create separate indexes for lat/long 
ranges or dates.


that only works if the planner can figure out which partitions to use
in advance, otherwise it ends up having to scan all the partitions.


--
john r pierce, recycling bits in santa cruz



True, but then again, the proposed queries are quite straight-forward
so I don't expect that to be a problem, really.

Worth a test, if only to see if it helps.




Re: [GENERAL] Large data and slow queries

2017-04-19 Thread vinny

On 2017-04-19 07:04, Samuel Williams wrote:

Thanks John. Yes, you are absolutely right, you want the index to be
bottom heavy so you can cull as much as possible at the top. I'm
familiar with that, once implementing a brute-force sudoku solver, it
has the same principle.

I've been working on this all afternoon. By reducing the longitude,
latitude columns to float4, in my test cases, I found about 50%
improvement in performance. It may also use less space. So part of the
problem was my choice of data type. We've computed that float4 has a
worst case precision of about 1.6m which we are okay with for
analytics data.

Another option we may consider is using a (signed) integer - e.g.
longitude = 180*(v/2^31) and latitude = 180*(v/2^31) as this has a
uniform error across all points, but it's a bit more cumbersome to
handle. Is there a rational datatype in postgres which works like
this?



On 19 April 2017 at 16:42, John R Pierce  wrote:

On 4/18/2017 9:01 PM, Samuel Williams wrote:


We want the following kinds of query to be fast:

SELECT ... AND (latitude > -37.03079375089291 AND latitude <
-36.67086424910709 AND longitude > 174.6307139779924 AND longitude <
175.0805140220076);




I wonder if GIST would work better if you use the native POINT type, 
and

compared it like

mypoint <@ BOX
'((174.6307139779924,-37.03079375089291),(175.0805140220076,-36.67086424910709
))'

with a gist index on mypoint...

but, it all hinges on which clauses in your query are most selective, 
thats

where you want an index.

--
john r pierce, recycling bits in santa cruz





Did that 50% performance gain come from just the datatype, or from the fact 
that the index became smaller?


Given the number of records, my first thought was either partitioning or 
partial-indexes.

The fewer rows are in the index, the quicker it will be to check,
and it's not a lot of work to create separate indexes for lat/long 
ranges or dates.






Re: [GENERAL] [OT] Help: stories of database security and privacy

2017-04-12 Thread vinny

On 2017-04-12 09:09, Lifepillar wrote:

Hi folks,
in a few weeks I will start a short course on the basics of database
security for a group of high-school students with a background in
elementary relational theory and SQL. I plan to discuss the usage of
grant/revoke, RBAC, DAC, and inference in statistical databases.

I'd like to take the opportunity to also engage students about the 
topic

of privacy (or lack thereof). So, I am here to ask if you have
interesting/(in)famous stories to share on database security/privacy
"gone wrong" or "done right"(tm), possibly with technical details (not
necessarily to share with the students, but for me to understand the
problems). I am asking to this list because I will use PostgreSQL, so
maybe I can collect ideas that I can implement or demonstrate in
practice, or use as case studies.

Thanks in advance,
Life.


One case that I remember from an ancient version of the book "hacking 
exposed"
was about a MySQL server that was running under the root user. A badly 
written
application allowed some SQL injection that let a hacker issue a SELECT 
INTO OUTFILE
query that "selected" a bash script into the .login file of the root 
user,
and the next time the root user logged in, the script would create a new 
superuser account

for the hacker.

I remember this particular example mainly because of the way that people 
I told it to reacted;
some were of the opinion that the application was at fault for allowing 
injection,

some thought the DBA was to blame for running as root,
but the vast majority did not know that MySQL could write files, let 
alone overwrite system files.


Their responses really made it clear that hackers generally know a lot 
more about
how a setup works than its maintainer does. Just because you cannot 
think of a way that a right can be exploited does not mean that nobody 
else can.


Ever since then I live by the motto; "If it's not absolutely required to 
be possible,

then it should be made absolutely impossible.".


As for privacy, the same applies; if a website doesn't have to print the 
real last name of a user,
then the JSON API should not send it to the client. In fact, the API 
should refuse to send it, even when asked,
unless the user who is asking has the rights to do so. Again: denied unless 
specifically allowed.





Re: [GENERAL] browser interface to forums please?

2017-04-05 Thread vinny

On 2017-04-05 15:11, Vincent Veyron wrote:

On Tue, 04 Apr 2017 12:01:24 +0200
vinny <vi...@xs4all.nl> wrote:


Every time I tell someone about the mailinglists I then have to 
explain

how they can subscribe, how to create folders, filters etc. And more
often than not
they just say forget it and go to some forum.


On forums, all you see is the header for the discussion, and the
number of messages attached to it.

It makes it much more difficult to follow discussions, because you
don't know if there are new messages or not, unless you memorized how
many were there the last time you looked at it. And even then, you
can't tell whether you even read them previously or not, which a
mailing list will tell you, because the messages are marked.


It depends entirely on which forum software you use.
If keeping track of read messages is a requirement then you would 
obviously

use a forum that does that for you.

But again, I'm not saying the mailinglist should be replaced by a forum.
What I'm saying is that many users find forums a lot easier to use and, 
given the choice,
they will opt for the forum. Hence it makes sense to provide something 
for those users,
if there is the manpower to do so.


Can you expect Joe
Average to do something like that
if they want to get more involved in PgSQL?



How hard is it to subscribe, create a folder and a filter? If that is
too involved, I don't see how they can get involved in postgres
anyway.


That might be true if you are talking about contributors, sure, but 
we're not.
Or at least, I'm not, and I guess that's where I'm mistaken. Perhaps 
the mailinglists
are the way they are to encourage the more serious users to use them,
and to keep everyday questions out a little.

That would be fine too, but don't put it like "if this is too much 
work, you shouldn't be using PostgreSQL".





Re: [GENERAL] browser interface to forums please?

2017-04-04 Thread vinny

On 2017-04-04 15:04, Stephen Frost wrote:

Greetings,

* vinny (vi...@xs4all.nl) wrote:


And yes, I can probably setup my email to do something like that,
the point is that I shouldn't have to.


I'm all for improving things and adding automation where it'll help, 
but

the infrastructure is basically run by volunteers.  Making statements
like "I shouldn't have to" isn't the best approach to getting the
changes you'd like to see happen done.


I meant it as "in an ideal world". It's a bit like buying a car
and finding out that they have not put the wheels on. It's not difficult 
to put them on yourself,
but you kind of expect that the people who want you to use their car 
would do that for you.


Anyway, thanks for the response!



Thanks!

Stephen





Re: [GENERAL] browser interface to forums please?

2017-04-04 Thread vinny

On 2017-03-27 23:23, Steve Litt wrote:

On Mon, 27 Mar 2017 11:31:02 +0900
Michael Paquier  wrote:


If you have subscribed to more mailing lists than -general, having one
subfolder per list can also help a lot, grouping as well some of those
having a low activity, for example:
- one folder for -hackers and -hackers-cluster.
- one folder for -general.
- one folder for -jdbc and -odbc.
- one for -bugs and -docs.
- one for -jobs and -announce, etc.
Something like that will make your hacking activity way easier to
handle. I would bet that a lot of people around here do that.


I sure do. I have a heck of a lot of email in a heck of a lot of
folders, all stored in a nice, easy to drill down hierarchy. That
hierarchy is maintained by the Dovecot IMAP server that runs on my
desktop computer.


I'm not against mailinglists at all, but I am for ease of use, 
especially for newcomers.


Every time I tell someone about the mailinglists I then have to explain
how they can subscribe, how to create folders, filters etc. And more 
often than not

they just say forget it and go to some forum.


When it comes to having a
lively group discussion that focuses all minds into a supermind greater
than the sum of the parts, a mailing list is the best tool.


Well, in the end, it's not the fact that it's a mailinglist that makes 
the community great,
it's the fact that the active members share a method of 
communication that they all like to use.
Getting notifications of new messages is probably the single most 
important feature to keep discussions going,
and email provides that.

The thing is: mailinglists are far from user-friendly if you are not used
to them.
Even in this thread several people have explained how much work they 
have done to get it
into a state where they can easily work with it. Can you expect Joe 
Average to do something like that

if they want to get more involved in PgSQL?

Now, I'm not saying the mailinglists should go, I'm saying there should 
be an easier way
to access them. It should be possible to register on the site, post a 
message and read replies,
without having to subscribe to the list and setup a way of dealing with 
the influx of messages

that are, for the most part, simply not interesting to the average user.

I'd love to have an RSS feed that contains only new questions, so I can 
just watch the popup
on my screen the way I do with the rest of the world, and not have to 
deal with replies to topics that I don't care about anyway.


And yes, I can probably set up my email to do something like that; the
point is that I shouldn't have to.









Re: [GENERAL] "Reverse" inheritance?

2017-04-04 Thread vinny

On 2017-04-04 09:12, Tim Uckun wrote:

I agree with the barking up the wrong tree, building a physical tree

in tables doesn't sound right
given that you will have to create a new branch in the tree when a new
version/variation of ubuntu comes out.

This doesn't bother me that much.


It should. You are using table names as data, which is pretty much always a
smell of bad design.
I could be pedantic and ask how you would store unix version "14.5 
\%funky penguin%/ rev 1,5"
given that most of that name consists of characters that are not allowed 
in a table name.





Also think about how you are going to do basic queries like listing

all known unix variants; if that is hidden in the table names then
you'll have to issue DDL queries to do the work of SELECT queries,
which just sounds wrong to me.

Yes this might be a problem but one I could easily overcome.


Sure, but why would you though? You already have everything in place for 
creating records,

why bother creating a different system just for the unix versions?





I'd go for a tree, possibly using recursive CTE's to dig it.


I was thinking a window function but yea I am sure there is a way to
do it with a flat table.


I'm not sure you can do it with windowing actually,
given that you'd have to sort every record based on a match with the 
previous record.

But I've never tried it because CTE's make it so easy :-)
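A minimal sketch of the flat-table-plus-recursive-CTE approach; the table
and column names (os_variant, parent_id) are hypothetical:

```sql
-- Hypothetical adjacency-list table: every variant points at its parent.
CREATE TABLE os_variant (
    id        serial PRIMARY KEY,
    parent_id integer REFERENCES os_variant(id),
    name      text NOT NULL
);

-- Walk the tree top-down, building the full path along the way.
WITH RECURSIVE tree AS (
    SELECT id, name, name AS path
    FROM os_variant
    WHERE parent_id IS NULL            -- the roots
    UNION ALL
    SELECT c.id, c.name, t.path || ' > ' || c.name
    FROM os_variant c
    JOIN tree t ON c.parent_id = t.id
)
SELECT path FROM tree ORDER BY path;
```

With this layout a new version or variation is a plain INSERT, not DDL.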



On Tue, Apr 4, 2017 at 6:43 PM, vinny <vi...@xs4all.nl> wrote:


I agree with the barking up the wrong tree, building a physical tree
in tables doesn't sound right
given that you will have to create a new branch in the tree when a
new version/variation of ubuntu comes out.

Also think about how you are going to do basic queries like listing
all known unix variants; if that is hidden in the table names
then you'll have to issue DDL queries to do the work of SELECT
queries, which just sounds wrong to me.

I'd go for a tree, possibly using recursive CTE's to dig it.

On 2017-04-04 05:19, Tim Uckun wrote:
I have thought of doing something like a single table inheritance
and it
could be done but I thought this might be a little more elegant.

On Tue, Apr 4, 2017 at 2:15 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

On Mon, Apr 3, 2017 at 7:07 PM, Tim Uckun <timuc...@gmail.com>
wrote:

I am trying to make postgres tables work like an object hierarchy.
As an
example I have done this.

​I suspect you are barking up the wrong tree ;)

You are probably better off incorporating something like the "ltree"
type
to encode the taxonomy.

https://www.postgresql.org/docs/current/static/ltree.html [1]

I haven't had a chance to leverage it myself but the concept it
embodies
is solid.

David J.
​




Links:
--
[1] https://www.postgresql.org/docs/current/static/ltree.html





Re: [GENERAL] "Reverse" inheritance?

2017-04-04 Thread vinny
I agree with the barking up the wrong tree, building a physical tree in 
tables doesn't sound right
given that you will have to create a new branch in the tree when a new 
version/variation of ubuntu comes out.


Also think about how you are going to do basic queries like listing all 
known unix variants; if that is hidden in the table names
then you'll have to issue DDL queries to do the work of SELECT queries, 
which just sounds wrong to me.


I'd go for a tree, possibly using recursive CTE's to dig it.

On 2017-04-04 05:19, Tim Uckun wrote:
I have thought of doing something like a single table inheritance and 
it

could be done but I thought this might be a little more elegant.

On Tue, Apr 4, 2017 at 2:15 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:


On Mon, Apr 3, 2017 at 7:07 PM, Tim Uckun  wrote:

I am trying to make postgres tables work like an object hierarchy. As 
an

example I have done this.



​I suspect you are barking up the wrong tree ;)

You are probably better off incorporating something like the "ltree" 
type

to encode the taxonomy.

https://www.postgresql.org/docs/current/static/ltree.html

I haven't had a chance to leverage it myself but the concept it 
embodies

is solid.

David J.
​






Re: [GENERAL] count case when - PG 9.2

2017-03-09 Thread vinny

On 2017-03-09 05:27, Patrick B wrote:

Hi guys. How can I count using 'CASE WHEN'?

Example:


SELECT
    CASE
        WHEN c.regdate > EXTRACT(epoch FROM (now() - INTERVAL '14 day'))
            THEN 'trial'
        WHEN last_pay > EXTRACT(epoch FROM (now() - INTERVAL '37 day'))
            THEN 'paying'
    END AS account_status,
    c.id
FROM public.clients c
WHERE (last_pay > EXTRACT('epoch' FROM now() - '12 Months'::INTERVAL))
ORDER BY 1

 I wanna know how many of 'trial' and 'paying' customers the query
returns. Can you guys please advise how to do it?

Thanks
Patrick



comparisons like "A>B" return a boolean. Booleans can be cast to 
integers, and integers can be summed.


SUM((A>B)::int)

But depending on the situation, indexes etc. it could be faster to run a
separate count query; you'll have to test that.
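Applied to the conditions from the question, that trick counts both groups
in one pass (a sketch, reusing the clients table and the epoch-based
columns as they appear above):

```sql
SELECT
    SUM((regdate  > EXTRACT(epoch FROM now() - INTERVAL '14 day'))::int) AS trial_count,
    SUM((last_pay > EXTRACT(epoch FROM now() - INTERVAL '37 day'))::int) AS paying_count
FROM public.clients
WHERE last_pay > EXTRACT(epoch FROM now() - INTERVAL '12 months');
```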





Re: [GENERAL] import CSV file to a table

2017-03-08 Thread vinny

On 2017-03-08 10:13, Günce Kaya wrote:

Hi all,

I want to import content of CSV file to a table via bash script
without creating temporary table and I also want to skip some columns
in CSV file (for instance, CSV file has 12 column and main table has
only 2 column, If possible I would use only 2 column in CSV file) Is
there any way to do it?

Regards,

--

Gunce Kaya


This is more a programming question than a database question, and there 
are many possible solutions.
Do *not*, whatever you do, try to write your own piece of code to read 
the CSV. There are lots of unexpected
ways that the CSV file can be slightly different from what you expect, 
and figuring all those out is a waste of time.
The example of embedded commas is just one way; there could also be
newlines, linebreaks, utf8-escape characters etc.


Personally I'd go the python route because it's simple and 
straightforward, but anything you are comfortable with will do.
If you are going to install additional software to do this then remember 
that you'll need that same software again if
you need to do this again, or when you need to move this code to a 
different server.





Re: [GENERAL] Postgres, apps, special characters and UTF-8 encoding

2017-03-07 Thread vinny

On 2017-03-08 00:20, Ken Tanzer wrote:

Hi.  I've got a recurring problem with character encoding for a
Postgres-based web PHP app, and am hoping someone can clue me in or at
least point me in the right direction.  I'll confess upfront my
understanding of encoding issues is extremely limited.  Here goes.




  And that one way or another, the encoding
needs to be translated before it can be placed into the database.



Ken

--



You don't really have to translate the encoding, because all parts of 
the system

are capable of dealing with all encodings.

What you have to make sure is that they are indeed all working in the
same encoding.
You have to set the encoding of the HTML document, the database, and the 
database connection
to the same encoding, like utf8. People tend to forget the "set names" 
on the database connection,
which can make the database think you are sending latin1, but you are 
really sending utf-8, and presto problemo.


Then the only problem left is that PHP doesn't handle UTF-8 very well
internally, so if you receive data from a UTF-8 page and want to substring
etc. then you have to use the multibyte variants
then you have to use the multibyte variants
of those functions. You could convert everything back to latin1 first, 
but then

you might as well just do everything in latin1 in the first place.
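For the database side, the session encoding can be set and checked
explicitly; SET NAMES is the SQL-standard spelling that PostgreSQL also
accepts:

```sql
-- Tell the server which encoding this client sends and expects:
SET client_encoding TO 'UTF8';
-- SQL-standard equivalent:
SET NAMES 'UTF8';
-- Check what the session is currently using:
SHOW client_encoding;
```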




Re: [GENERAL] Trim performance on 9.5

2016-11-18 Thread vinny

On 2016-11-18 15:06, William Ivanski wrote:

Hi,

I recently did major improvements on perfomance on our routines by
simply removing the call for trim functions on specific bottlenecks.
Please see images attached for a simple example.

I'm using PostgreSQL version 9.5.5-1.pgdg80+1 on Debian 8.6. Someone
knows if it's a bug on trim function? Thanks in advance.

--

William Ivanski


Did you run EXPLAIN on these queries?

I'm guessing that you have an index on the field, but not on 
TRIM(field),
which would mean that the database is forced to seqscan to fetch every 
row value, trim it and then compare it.
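If that is the case, an index on the expression itself usually fixes it.
A sketch with hypothetical table and column names:

```sql
-- An index on the raw column does not help a query that filters on
-- TRIM(field); an expression index on TRIM(field) can be used directly.
CREATE INDEX mytable_field_trim_idx ON mytable (TRIM(field));

-- EXPLAIN should now show an index scan instead of a seq scan:
EXPLAIN SELECT * FROM mytable WHERE TRIM(field) = 'somevalue';
```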





Re: [GENERAL] Strange? BETWEEN behaviour.

2016-10-20 Thread vinny

On 2016-10-20 14:27, Bjørn T Johansen wrote:

On Thu, 20 Oct 2016 14:04:51 +0200
vinny <vi...@xs4all.nl> wrote:


On 2016-10-20 13:51, Bjørn T Johansen wrote:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM.YYYY HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.YYYY HH24:MI:SS')
>
> date is of type timestamp.
>
> I was expecting to get all the records that had datepart = 20.10.2016
> but I am not getting that..
>
> What am I missing?
>
>
> Regards,
>
> BTJ
>

What are you getting?




The sql returns 5 of the expected 72 rows...

BTJ


Sure, but what I meant was more like: what data do you have in the 
records, which type, and which values are not getting through?


Can you reproduce the problem in a simple example?
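One thing worth checking regardless: with timestamps, a half-open range is
safer than BETWEEN with a 23:59:59 end point, because it cannot miss rows
with fractional seconds. A sketch, assuming a hypothetical mytable with
the date column from the question:

```sql
SELECT *
FROM mytable
WHERE date >= to_timestamp('20.10.2016 00:00:00', 'DD.MM.YYYY HH24:MI:SS')
  AND date <  to_timestamp('21.10.2016 00:00:00', 'DD.MM.YYYY HH24:MI:SS');
```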




Re: [GENERAL] Strange? BETWEEN behaviour.

2016-10-20 Thread vinny

On 2016-10-20 13:51, Bjørn T Johansen wrote:

I have the following SQL:

SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
00:00:00','DD.MM.YYYY HH24:MI:SS') AND to_timestamp('20.10.2016
23:59:59','DD.MM.YYYY HH24:MI:SS')

date is of type timestamp.

I was expecting to get all the records that had datepart = 20.10.2016
but I am not getting that..

What am I missing?


Regards,

BTJ



What are you getting?




Re: [GENERAL] Performance question: Commit or rollback?

2011-12-24 Thread vinny
On Sun, 2011-12-18 at 14:05 +1100, Chris Angelico wrote:
 On Sun, Dec 18, 2011 at 2:02 PM, Chris Travers chris.trav...@gmail.com 
 wrote:
  I do not believe there are performance penalties for either.  All
  commit or rollback does is determine visibility of changes made.
 
 Thanks. (And thanks for the incredibly quick response!)
 
 My framework has a read-only mode (determined by user-level access),
 in which it begins a read-only transaction. At the end of it, I
 currently have it rolling the transaction back (to make absolutely
 sure that no changes will be made), but was concerned that this might
 place unnecessary load on the system. I'll stick with rolling back,
 since it's not going to hurt!
 
 Chris Angelico
 

The actual rollback won't hurt as long as you have not made any
modifications to any records. But opening the transaction could have side
effects for other processes that want to modify the records that you want
to protect in your read-only transaction.

How about using a database user that has its create/update/delete rights
revoked? That will cause an error if the supposedly read-only routine
does try to change data.
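Alternatively, the transaction itself can be declared read-only, which
makes the server reject any write without needing a second set of
credentials. A sketch; some_table, report_user and the GRANTs are
hypothetical:

```sql
-- Option 1: a read-only transaction; INSERT/UPDATE/DELETE fail with an error.
BEGIN READ ONLY;
SELECT count(*) FROM some_table;
COMMIT;

-- Option 2: a separate role that only holds SELECT rights.
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM report_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO report_user;
```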




Re: [GENERAL] Performance question: Commit or rollback?

2011-12-24 Thread vinny
On Sat, 2011-12-24 at 23:49 +1100, Chris Angelico wrote:
 On Sat, Dec 24, 2011 at 11:46 PM, vinny vi...@xs4all.nl wrote:
  The actual rollback won't hurt as long as you have not made any
  modifications to any records. But opening the transaction could have side
  effects for other processes that want to modify the records that you want
  to protect in your read-only transaction.
 
  How about using a database user that has its create/update/delete rights
  revoked? That will cause an error if the supposedly read-only routine
  does try to change data.
 
 The readonly-ness of the session is defined based on information
 stored in the database, so that would entail the cost of
 re-authenticating.

Yes, you would have to re-authenticate; you'd have to weigh the time-cost
of that against any performance hits the transaction might cause.

 Also, we want to minimize debugging time by having
 both read-only and read-write access use almost exactly the same code
 and DB access, meaning that we should not need to test every module in
 every mode.

So, your read-only mode is basically a flag that forces your code to
always issue a rollback at the end, instead of a commit for read/write
mode.

I find that a bit scary. :-)

regards,
Vincent.






Re: R: [GENERAL] Field's position in Table

2009-08-24 Thread vinny
On Thu, 2009-08-20 at 12:16 +0100, Sam Mason wrote:
 On Thu, Aug 20, 2009 at 11:24:49AM +0200, vinny wrote:
  I can't really think of any real reason to put the field at a
  particular position, applications don't really care about the order
  of fields.
 
 Because it's very convenient for ad-hoc queries!  PG currently assumes
 that the column order is the same as when it was created but there are
 (unimplemented) suggestions about how to fix this.  See for example:
 
   http://archives.postgresql.org/pgsql-hackers/2006-12/msg00983.php
 
 -- 
   Sam  http://samason.me.uk/
 

But how is it convenient exactly, is it just a timesaver so you can
SELECT * instead of having to type SELECT firstname, lastname, email?




Re: R: [GENERAL] Field's position in Table

2009-08-20 Thread vinny
On Thu, 20 Aug 2009 09:21:25 + (GMT), Scara Maccai m_li...@yahoo.it
wrote:
 When adding a new field in the existing table, I
 want to add the field in a particular position. 
 
 I'm afraid the only way would be re-writing the whole table (pseudo
sql):
 
 BEGIN;
 create table newtable as select field1, 'newfield default value', field2
 from old_table;
 create_all_indexes on newtable;
 drop old_table;
 commit;
 
 things get complicated if you have foreign keys pointing to old_table...

Which is why you might be better off putting the new field at the end of
the table
and using an administrative view to make your viewing easier.

I can't really think of any real reason to put the field at a particular
position; applications don't really care about the order of fields.
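For example (table and column names are hypothetical):

```sql
-- The new column simply goes at the end of the table...
ALTER TABLE person ADD COLUMN middle_name text;

-- ...and a view presents the columns in whatever order is convenient:
CREATE VIEW person_v AS
    SELECT firstname, middle_name, lastname, email
    FROM person;
```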



Re: [GENERAL] backups

2004-07-01 Thread vinny
On Wed, 30 Jun 2004 22:32:26 -0500, [EMAIL PROTECTED] (Bruno Wolff III)
wrote:

On Wed, Jun 30, 2004 at 18:23:08 -0500,
  [EMAIL PROTECTED] wrote:
 
 What do other sites with mondo databases do?

There have been comments from people using storage systems that they
can freeze the storage system and get a consistent snapshot of the
file system. This can be used to do a restore. It will look just like
postgres crashed when coming back up.
If you find one of the posts about this in the archives the poster may
have more details on their storage systems.

---(end of broadcast)---
TIP 8: explain analyze is your friend

I've been playing around with something like that.
On my test server I have put the postgresql directory (including the
config files) onto a software raid-1 array. This array starts off as
just one disk, but when the time comes to create a backup, you can add
a secondary disk to the array, on-the-fly, so the database does not
have to stop for this. The recovery-synchronising of the disk consumes
a few % of the CPU, but nothing too bad (it's disk-to-disk copying)

When syncing is complete I shut down the database, remove the secondary
disk from the array and start the database up again. Ofcourse this is
in a test environment so this operation takes a few seconds, I have
yet to test what this will do with a normal production load.

Now the secondary disk is an exact copy of the datafiles as they were
when the database was offline, and because it is software-raid, the
secondary disk can now be mounted and backed-up. And because the files
were in an offline state at backup, they can be restored without the
database server having to recover at startup.

It seems to work ok in the test, but of course this has to be tested on
a much larger scale.
