Re: [GENERAL] world meaningful date time input

2010-06-17 Thread Dennis Gearon
Thanks Scott

That's basically what I'm planning on doing, and hopefully what I described. The
server will only be in 1-10 locations around the world, and I can't use the
timezone of the servers anyway, nor the user's input device/browser/phone. The
offset/timezone has to be the one for the geographical location of the datum.

But the process you described went one step further than I knew: the output in
the local tz. Thanks for that.


 Dennis Gearon gear...@sbcglobal.net
 wrote:
 
  I've got an application brewing that gathers the
 following data:
   location (lat/lon)
   time (no time zone)
   date. (no time zone)
 
  The eventual goal is to be able to search
 chronologically using timestamps for the data anywhere in
 the world, from any location, using local time as a
 reference for any future date and time.
 
  From the (lat/lon) it's possible to get:
   named time zone
   standard time zone offset (non-DST)
   and, via some special web services, the dates and amounts of daylight
 saving time
 
  From there, it could be possible to combine all the
 datums and create a timestamp with time zone (i.e. it's
 stored in absolute time (in seconds) relative to GMT)
 
  Any easier way to do this?
 
 Why not set the tz to the one the date / time came from,
 insert into
 timestamptz, then use timestamptz at timezone to retrieve
 it?
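
A minimal sketch of that suggestion (table name and time zone are
illustrative; in practice the zone would come from the lat/lon lookup).
AT TIME ZONE interprets the local wall-clock time in the datum's zone on
the way in, and renders the stored absolute instant back in that zone on
the way out:

CREATE TABLE event (
    id serial PRIMARY KEY,
    ts timestamptz          -- stored internally as an absolute instant
);

-- interpret the incoming local time in the zone derived from the lat/lon:
INSERT INTO event (ts)
VALUES (TIMESTAMP '2010-06-17 09:30:00' AT TIME ZONE 'America/Los_Angeles');

-- render it back in that location's local time:
SELECT ts AT TIME ZONE 'America/Los_Angeles' AS local_time FROM event;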
 



[GENERAL] Given N, finding the interval of N hours with max(sum(..))

2010-06-17 Thread stefano bonnin
Hi all,
I'm trying to make a query that, given N and a date, gives me the interval
of N hours with the max(sum(...)). In other words, given the following
table:

2010-06-16 00:00:00 | 0
2010-06-16 01:00:00 | 2
2010-06-16 02:00:00 | 1
2010-06-16 03:00:00 | 5
2010-06-16 04:00:00 | 7
2010-06-16 05:00:00 | 9
2010-06-16 06:00:00 | 2
2010-06-16 07:00:00 | 0
2010-06-16 08:00:00 | 0
2010-06-16 09:00:00 | 0
2010-06-16 10:00:00 | 1
2010-06-16 11:00:00 | 2
2010-06-16 12:00:00 | 1
2010-06-16 13:00:00 | 1
2010-06-16 14:00:00 | 1
2010-06-16 15:00:00 | 1
2010-06-16 16:00:00 | 1
2010-06-16 17:00:00 | 1
2010-06-16 18:00:00 | 1
2010-06-16 19:00:00 | 1
2010-06-16 20:00:00 | 1
2010-06-16 21:00:00 | 0
2010-06-16 22:00:00 | 3
2010-06-16 23:00:00 | 1

With N = 3 and date = '2010-06-16', for example, I'd like the query to
give me as a response: the interval from 3AM to 5AM.
Any solutions and/or hints?

Thanks in advance.
stefano


Re: [GENERAL] Question about indexes on tables with inherited children

2010-06-17 Thread Sergey Konoplev
On 16 June 2010 19:07, Bryan Montgomery mo...@english.net wrote:
 If I do select * from Table_A a, Table_X x where a.id = x.id
 and it only returns 1 or a few rows, should it use an index if I have an
 index built on Table_A using id?

It should because it scans Table_A too.

 Or do I need an index individually on B and C?

Yes. Indexes are not inherited.
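
An illustrative sketch (hypothetical table names): an index on the parent
covers only the rows stored in the parent itself, so each child needs its
own index.

CREATE TABLE table_a (id integer);
CREATE TABLE table_b () INHERITS (table_a);
CREATE TABLE table_c () INHERITS (table_a);

CREATE INDEX table_a_id_idx ON table_a (id);  -- parent rows only
CREATE INDEX table_b_id_idx ON table_b (id);  -- each child needs its own
CREATE INDEX table_c_id_idx ON table_c (id);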


-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802



[GENERAL] Wrong number of empty toast pages

2010-06-17 Thread Jacek Trzmiel

Hi,

I have a problem with a growing toast table in pgsql 8.3.5.  Probably
raising max_fsm_pages and running autovacuum more aggressively will fix
it, but during investigation I've got this puzzling output from VACUUM
ANALYZE VERBOSE:

INFO:  pg_toast_16505: found 756385 removable, 23003704 nonremovable row 
versions in 28446864 pages
DETAIL:  14586 dead row versions cannot be removed yet.
There were 92404502 unused item pointers.
22769942 pages contain useful free space.
42 pages are entirely empty.


So there are 23M rows in 28.5M pages.  Assuming a worst case of 1 row per
page, there should be 5.5M completely free pages.  But vacuum says that
only 42 are entirely empty.  Am I missing something obvious here?


Best regards,
Jacek.



Re: [GENERAL] hi,i write a function in postgresql source code, how to register this function?

2010-06-17 Thread Dimitri Fontaine
sunpeng blueva...@gmail.com writes:

 hi,i write a function in postgresql source code, how to register this 
 function?

See src/include/catalog/pg_proc.h

But you should *really* consider making it a loadable module. That's the
approach that makes sense for any code you want to add to the server, unless
you're preparing a patch for PostgreSQL itself, or you're writing a new
Index Access Method that you want to be crash safe.
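
For reference, a minimal sketch of the loadable-module route (library and
function names are hypothetical): compile the code into a shared library
and register it with CREATE FUNCTION instead of editing pg_proc.h.

-- 'my_module' is resolved along dynamic_library_path ($libdir by default)
CREATE FUNCTION my_func(integer) RETURNS integer
    AS 'my_module', 'my_func'
    LANGUAGE C STRICT;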

Regards,
-- 
dim



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Felde Norbert
Before I began I made a filesystem level backup after I stopped the
postgres service.
I have the original 0003 file; its size is 204800. The size of the
other files in this dir is 262144.

I corrected the permissions of the whole data dir, but the error
message is the same.
The exact messages:
The message is the same for the original pg_clog/0003, for the 0003
containing binary 0, and after pg_resetxlog:
pg_dump: Error message from server: ERROR:  could not access status of
transaction 3974799
DETAIL:  Could not read from file pg_clog/0003 at offset 204800: No error.
pg_dump: The command was: COPY public.active_sessions_split (ct_sid,
ct_name, ct_pos, ct_val, ct_changed) TO stdout;
pg_dump: *** aborted because of error


If I create the bigger 0003 containing 0, then I get this:
pg_dump: Error message from server: ERROR:  xlog flush request
0/A19F5BF8 is not satisfied --- flushed only to 0/A02A1AC8
CONTEXT:  writing block 1149 of relation 1663/4192208/4192508
pg_dump: The command was: COPY public.history (historyid, adatkod,
elemid, userid, ido, actionid, targyid, szuloid, opvalue, longfield,
longtext) TO stdout;
pg_dump: *** aborted because of error

Thanks,
fenor

2010/6/17 Scott Marlowe scott.marl...@gmail.com:
 On Wed, Jun 16, 2010 at 5:55 PM, Felde Norbert feno...@gmail.com wrote:
 Hi all,

 I use 8.2 on a Windows Server 2008.
 Suddenly postgres crashed and I can not do anything. The message is
 Could not access status of transaction 3982736.
 DETAIL: Could not read from file pg_clog/0003 at offset 204800: No error.

 The only thing I found to correct this is to create a file filled
 with binary 0 and replace clog/0003. Both file sizes are the same. I
 tried that but I am still not able to run vacuum.

 Was the original 0003 file there, and if so what size was it?  Did you
 try to copy it out of the way first?  Did it belong to the pgsql user?
  Did it have any attributes set that would make it impossible to read?
  Are you running anti-virus on this machine? It's known to get in the
 way and cause these kinds of problems...

  I tried even with a
 bigger empty clog/0003 file, but nothing.
 I tried to drop the last few transactions with pg_resetxlog and hoped I
 could save some data, but the service won't start.
 The last error message is:
 ERROR: could not create relation 1663/0/1224: File exists. I looked
 for base/1224/0/1663, base/1663/1224/0 and base/1663/0/1224 file but
 there is no such file.

 Can anyone suggest something?
 A lot of data would be lost if I can not repair this, so please help!

 Thanks,
 fenor





 --
 When fascism comes to America, it will be intolerance sold as diversity.




Re: [GENERAL] Re: Monitoring activities of PostgreSQL (Everlasting function execution)

2010-06-17 Thread Allan Kamau
On Wed, Jun 16, 2010 at 1:15 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 On 16/06/10 16:56, Allan Kamau wrote:

 The function I have mentioned above seems to hang indefinitely (now
 12 hours and counting) while it took only 2 secs for the many times it
 was previously invoked from the client application (with a slightly
 different parameter value each time).

 Is there anything in pg_catalog.pg_locks that relates to that query?

 If you attach a debugger to the stalled backend and get a backtrace,
 what's the output? See:

 http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

 If you strace the problem backend, does it appear to be doing anything?

 --
 Craig Ringer

 Tech-related writing: http://soapyfrogs.blogspot.com/


Thank you Craig, this could be what I am looking for. I am following
the HOWTO in the resource you've sent and will attempt to capture the
trace the next time the backend stalls.

Allan.



Re: [GENERAL] Monitoring activities of PostgreSQL

2010-06-17 Thread Allan Kamau
2010/6/16 Filip Rembiałkowski filip.rembialkow...@gmail.com:

 2010/6/15 Allan Kamau kamaual...@gmail.com

 I do have a PL/SQL function that gets called many times, but
 with different parameter values each time. Most
 invocations of this function run in a couple of seconds; however, some
 invocations of the same function run (on the same dataset) for hours
 with very little disk activity but high CPU.



 How can I monitor the actual DB activities during such times so I may
 better understand what the situation truly is.

 You can monitor system parameters (CPU, disk IO, memory) with standard
 OS-specific tools (on Unix: top, ps, iostat, vmstat)

 I have seen some users
 on this list posting some complex log outputs; these are the kind of
 outputs I would like to capture and view. Where are they?


 they are mostly in log files or in the output of the EXPLAIN ANALYZE command


 for general info see

 http://www.postgresql.org/docs/current/static/runtime-config-logging.html
 http://www.postgresql.org/docs/8.4/static/using-explain.html
 http://wiki.postgresql.org/wiki/Performance_Optimization

 and this list archives.


 Filip




Thank you Filip, Craig and Dimitri, I will implement the trace tools as advised.


Allan.



Re: [GENERAL] Re: Monitoring activities of PostgreSQL (Everlasting function execution)

2010-06-17 Thread Allan Kamau
On Wed, Jun 16, 2010 at 1:15 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 On 16/06/10 16:56, Allan Kamau wrote:

 The function I have mentioned above seems to hang indefinitely (now
 12 hours and counting) while it took only 2 secs for the many times it
 was previously invoked from the client application (with a slightly
 different parameter value each time).

 Is there anything in pg_catalog.pg_locks that relates to that query?

 If you attach a debugger to the stalled backend and get a backtrace,
 what's the output? See:

 http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

 If you strace the problem backend, does it appear to be doing anything?

 --
 Craig Ringer

 Tech-related writing: http://soapyfrogs.blogspot.com/



The pg_catalog.pg_locks table does indeed have 20 records pertaining
to the everlasting function
(mining.perform_itemset_fragment_mining()) execution. Please have a
look at the query output below.


po12=> SELECT a.* FROM pg_stat_activity a;

  datid  | datname | procpid | usesysid | usename | current_query                                                           | waiting | xact_start                    | query_start                   | backend_start                 | client_addr | client_port
---------+---------+---------+----------+---------+-------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------
 1936449 | po12    |   14277 |    16392 | test    | SELECT a.* FROM pg_stat_activity a;                                     | f       | 2010-06-17 13:56:50.129498+03 | 2010-06-17 13:56:50.129498+03 | 2010-06-17 13:15:23.018817+03 |             |          -1
 1936449 | po12    |   14657 |    16392 | test    | SELECT mining.perform_itemset_fragment_mining($1,$2,$3,$4,$5,$6,$7,$8)  | f       | 2010-06-17 13:33:13.315609+03 | 2010-06-17 13:33:13.319398+03 | 2010-06-17 13:22:33.606603+03 | 127.0.0.1   |       36211
(2 rows)

po12=> SELECT a.* FROM pg_catalog.pg_locks a;

    locktype   | database | relation | page | tuple | virtualxid | transactionid | classid |  objid  | objsubid | virtualtransaction |  pid  |        mode         | granted
---------------+----------+----------+------+-------+------------+---------------+---------+---------+----------+--------------------+-------+---------------------+---------
 object        |  1936449 |          |      |       |            |               |    1247 | 3546410 |        0 | 1/6500             | 14657 | AccessExclusiveLock | t
 virtualxid    |          |          |      |       | 1/6500     |               |         |         |          | 1/6500             | 14657 | ExclusiveLock       | t
 object        |        0 |          |      |       |            |               |    1260 |   16392 |        0 | 1/6500             | 14657 | AccessShareLock     | t
 relation      |  1936449 |  3529708 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessShareLock     | t
 relation      |  1936449 |  3546485 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessExclusiveLock | t
 transactionid |          |          |      |       |            |       1548543 |         |         |          | 1/6500             | 14657 | ExclusiveLock       | t
 relation      |  1936449 |  3546409 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessExclusiveLock | t
 relation      |  1936449 |  3529699 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessShareLock     | t
 relation      |  1936449 |  3546477 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessExclusiveLock | t
 relation      |  1936449 |  3529709 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessShareLock     | t
 virtualxid    |          |          |      |       | 2/271      |               |         |         |          | 2/271              | 14277 | ExclusiveLock       | t
 relation      |  1936449 |  3546483 |      |       |            |               |         |         |          | 1/6500             | 14657 | ShareLock           | t
 relation      |  1936449 |    10969 |      |       |            |               |         |         |          | 2/271              | 14277 | AccessShareLock     | t
 relation      |  1936449 |  3529706 |      |       |            |               |         |         |          | 1/6500             | 14657 | AccessShareLock     | t
 relation      |  1936449 |  3546486 |      |       |            |               |         |         |          | 1/6500             | 14657 | RowExclusiveLock    | t
 relation      |  1936449 |  3546486 |      |       |            |               |         |         |          | 1/6500

Re: [GENERAL] Given N, finding the interval of N hours with max(sum(..))

2010-06-17 Thread stefano bonnin
I have found an initial solution. For an interval N of 3 hours the query can
be:

select distinct poi,
       first_value(start_date) OVER w as start_date,
       first_value(end_date) OVER w as end_date,
       first_value(the_sum) OVER w
from
  (select poi,
          t1.ts as start_date,
          t1.ts + '3 hour'::interval as end_date,
          (select sum(apcpsfc)
           from event_forecast_data t2
           where t2.ts >= t1.ts
             and t2.ts <= t1.ts + '3 hour'::interval
             and poi = t1.poi
           group by poi) as the_sum
   from event_forecast_data t1
   where ts >= '2010-06-18 00:00:00'
     and ts + interval '3 hours' <= '2010-06-19 00:00:00'
   group by poi, ts) as stats
WINDOW w AS (partition by poi order by the_sum desc)
order by poi, start_date

where I initially get the sum for all 3-hour intervals of the day, and then,
with the window functions, I get the row with the highest value ...
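
A more compact alternative sketch using a window frame (assuming a table
hourly(ts timestamp, val integer) shaped like the sample data; needs 8.4+):
sum each row together with the next N-1 rows, then keep the best window.
On the sample data it returns the 03:00 to 05:00 window (5 + 7 + 9 = 21).

select ts as start_ts,
       ts + interval '2 hours' as end_ts,
       sum(val) OVER (ORDER BY ts
                      ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING) as sum_3h
from hourly
where ts >= '2010-06-16' and ts < '2010-06-17'
order by sum_3h desc
limit 1;  -- note: windows starting in the last N-1 hours cover fewer rows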

2010/6/17 stefano bonnin stefano.bon...@gmail.com

 Hi all,
 I'm trying to make a query that, given N and a date, gives me the interval
 of N hours with the max(sum(...)). In other words, given the following
 table:

 2010-06-16 00:00:00 | 0
 2010-06-16 01:00:00 | 2
 2010-06-16 02:00:00 | 1
 2010-06-16 03:00:00 | 5
 2010-06-16 04:00:00 | 7
 2010-06-16 05:00:00 | 9
 2010-06-16 06:00:00 | 2
 2010-06-16 07:00:00 | 0
 2010-06-16 08:00:00 | 0
 2010-06-16 09:00:00 | 0
 2010-06-16 10:00:00 | 1
 2010-06-16 11:00:00 | 2
 2010-06-16 12:00:00 | 1
 2010-06-16 13:00:00 | 1
 2010-06-16 14:00:00 | 1
 2010-06-16 15:00:00 | 1
 2010-06-16 16:00:00 | 1
 2010-06-16 17:00:00 | 1
 2010-06-16 18:00:00 | 1
 2010-06-16 19:00:00 | 1
 2010-06-16 20:00:00 | 1
 2010-06-16 21:00:00 | 0
 2010-06-16 22:00:00 | 3
 2010-06-16 23:00:00 | 1

 With N = 3 and date = '2010-06-16', for example, I'd like the query to
 give me as a response: the interval from 3AM to 5AM.
 Any solutions and/or hints?

 Thanks in advance.
 stefano



[GENERAL] [Blatant Alias Abuse] Anyone want to help out with a coding problem?

2010-06-17 Thread Mike Christensen
Hi all – I apologize for the misuse of this alias, but I figured I
have the audience that might be interested in helping me out with a
rather difficult coding problem.  In fact, if I can find someone who’s
looking for some part-time work and is interested in a bit of extra
cash, then it’ll be worth it.

First off, a friend and I are starting a little dot-com here in
Seattle that focuses on recipes and meal planning.  We’re building on
a typical LAMP stack (Linux, Apache, Mono, Postgres – why what did you
think I meant?) so we need someone with those skills as well as pretty
serious computer science skills (probably a Masters or PhD).

Here’s the problem.  Imagine you have a set of one or more recipes.
Each recipe requires one or more ingredients along with an amount of
that ingredient.  You also have a given user, who has an available set
of ingredients and amounts in their inventory.  The job is to find the
most efficient set of n recipes you can make with the given
ingredients, resulting in the fewest number of leftover amounts.
Sounds like your typical NP-Complete “multiple-knapsack” problem,
right?  I’ll make it a bit more complicated now.  This user also has a
set of “ratings” associated with zero or more recipes (0 stars, user
hates the recipe, 5 stars, they love it).  One of the parameters of
the function is to take a sliding scale (between 0 and 10) that
indicates how much their ratings will play into the recipes the
function chooses.  A scale of zero will tell the algorithm to ignore
their ratings and pick the most efficient set.  A value of ten will
try to use their available ingredients if possible, but worry more
about picking recipes they're likely to enjoy.  A default setting of
5 would pick some sort of even balance.

Here’s the data you have to work with:

-   All recipes will be in RAM, no database access required.  The
recipes contain only a guid, a list of required ingredients and
amounts, and a public average rating.
-   There might be 100,000 recipes for all you know, so a brute-force
“try every combination” approach is out of the question.  Scalability
is key.
-   I’d prefer an algorithm that results in more accurate results the
longer it runs, as the available system resources and system load
could dictate how long to run the algorithm before telling it to stop.
-   The engine is given a user profile (their inventory, all their
ratings, and how many recipes they want to cook)

Here’s my requirements for a solution:

-   Scalable of course.  Needs to be fast with lots of recipes, lots of
ratings, and able to work with multiple users at once.  RAM is cheap.
-   Algorithm functions as both a “suggestions” engine (think of the
Netflix algorithm for movie recommendations) and a “what can I make?”
algorithm, and can trend towards one side or the other.
-   All code needs to be written in C#.  I have a basic outline and all
data structures prepared, I just need to fill in the main function.
-   Code needs to have unit tests, proving it works.

If this sounds like something you’d want to spend a weekend or two on,
lemme know.  I’m willing to pay a grand or so.  I’m also willing to
donate that grand to the PostgreSQL USA organization.  Please reply to
me privately (mike at kitchenpc dot com) so we don’t clutter up this
list with an off-topic thread.  Thanks!!

Mike



Re: [GENERAL] [Blatant Alias Abuse] Anyone want to help out with a coding problem?

2010-06-17 Thread Martin Gainty

keeping anything in RAM means your data will be lost forever when the session 
is lost..(then again if the data is garbage to begin with then throwing away 
garbage is no problem)

if your data is NOT garbage then you're going to want to persist the data 
..look at oracle/mysql and yes even SQL-server if you have to use ODBC instead 
of JDBC

 

Mono is a wreck..i've written to the mormons at novell multiple times to
try to get a build that works and have yet to hear any semicompetent/coherent
response from any of them

be prepared to dedicate yourself to a minimum of 6 months of effort.. just to 
get mono working 

 

C# only works on MS .NOTYET frameworks..Microsofts proprietary container called 
the GAC accepts MSIL assemblies created by a .NOTYET language such as C#/VB.Yet

but wait a minute you're only half way there..to make the assembly run on 
anything other than .NET framework you'll need to compile into a standalone 
native dll..another step in the process

 

apache is opensource and C# is not.. so if your objective is to shoehorn your 
.NOTYET native dlls into apache you will need to craft a mod_DOTNOTYET 
module..this in itself is not trivial as you'll have to cast away any 
MS/DOTNOTYET specific datatypes/methods/namespaces/MS_garbage and put 
everything into 

primitive datatype string format..implementing simple methods without using 
complex collections/ or complex-structures and namespaces AND XSDs that apache 
can resolve

 

finally deploying to linux,solaris,aix,hpux or bsd which I believe is your 
final goal will be out of reach until you have successfully:

1)untangled the hopelessly entangled and non-operational mono environment into 
a working building environment to provide an operational .NET framework on the 
nix box
2)shoehorn your native dlls into a mod_DOTNETYET module configuration to be
accepted by Apache..BTW this is not trivial..if apache sees any .NOTYET
specific structs 
it will gpf the server

 

FUNDING: perhaps writing recipes to feed the contractors working for BP will 
make available millions of dollars you will need to fund this effort?

 

FYI: accomplishing this in either a native Apache or J2EE environment could be
accomplished in 1/4 the time and budget that your proposed architecture will use

 

*your call*

Martin 

 

 Date: Thu, 17 Jun 2010 04:49:14 -0700
 Subject: [GENERAL] [Blatant Alias Abuse] Anyone want to help out with a 
 coding problem?
 From: m...@kitchenpc.com
 To: pgsql-general@postgresql.org
 
 Hi all – I apologize for the misuse of this alias, but I figured I
 have the audience that might be interested in helping me out with a
 rather difficult coding problem. In fact, if I can find someone who’s
 looking for some part-time work and is interested in a bit of extra
 cash, then it’ll be worth it.
 
 First off, a friend and I are starting a little dot-com here in
 Seattle that focuses on recipes and meal planning. We’re building on
 a typical LAMP stack (Linux, Apache, Mono, Postgres – why what did you
 think I meant?) so we need someone with those skills as well as pretty
 serious computer science skills (probably a Masters or PhD).
 
 Here’s the problem. Imagine you have a set of one or more recipes.
 Each recipe requires one or more ingredients along with an amount of
 that ingredient. You also have a given user, who has an available set
 of ingredients and amounts in their inventory. The job is to find the
 most efficient set of n recipes you can make with the given
 ingredients, resulting in the fewest number of leftover amounts.
 Sounds like your typical NP-Complete “multiple-knapsack” problem,
 right? I’ll make it a bit more complicated now. This user also has a
 set of “ratings” associated with zero or more recipes (0 stars, user
 hates the recipe, 5 stars, they love it). One of the parameters of
 the function is to take a sliding scale (between 0 and 10) that
 indicates how much their ratings will play into the recipes the
 function chooses. A scale of zero will tell the algorithm to ignore
 their ratings and pick the most efficient set. A value of ten will
 try to use their available ingredients if possible, but worry more
 about picking recipes they're likely to enjoy. A default setting of
 5 would pick some sort of even balance.
 
 Here’s the data you have to work with:
 
 - All recipes will be in RAM, no database access required. The
 recipes contain only a guid, a list of required ingredients and
 amounts, and a public 

Re: [GENERAL] [Blatant Alias Abuse] Anyone want to help out with a coding problem?

2010-06-17 Thread Mike Christensen
Regarding the RAM thing, all the recipes will be in the database but
when the engine loads, it will create a graph in memory of all the
recipes.  They'll actually be linked by similarity, recipe A will have
an array of pointers to other recipes that someone who likes recipe A
might also like.  Though this design may be rethought if it doesn't
work out.  The data in RAM is read-only and will be updated with
fresh DB data once every day or so.  The reason I need to keep the
data in RAM is I need to be able to very quickly walk this graph and a
relational DB doesn't really fit for this.  I've done some testing and
if I store about 100,000 recipes in memory, it's only a few hundred
megs of RAM.  This is nothing for a web server, and even if I need
more, this facility could be moved over to a dedicated server that the
main website just queues up requests on.

Regarding Mono, I'm not sure it's a wreck.  It's come a long way and
several sites use it in production.  However, I'm keeping my code
runnable on both .NET/Windows and Mono/Apache because I haven't yet
reached any definitive conclusions on what it will be run on.  My code
is 100% managed, there are no native DLLs as you mention.  I've
actually ported my site over to Mono as a proof of concept and it took
one evening, not 6 months.  I don't really think your rant on MS/.NET is
warranted here.  To each his own; if you're a Java guy then
that's fantastic.  I'm not much into it and prefer C#, especially
since I have 10 years of experience with it and zero with Java.  I'm
well aware of the requirements of .NET, Mono, Apache, etc.  Thanks for
your feedback.

Mike

On Thu, Jun 17, 2010 at 5:57 AM, Martin Gainty mgai...@hotmail.com wrote:
 keeping anything in RAM means your data will be lost forever when the
 session is lost..(then again if the data is garbage to begin with then
 throwing away garbage is no problem)
 if your data is NOT garbage then you're going to want to persist the data
 ..look at oracle/mysql and yes even SQL-server if you have to use ODBC
 instead of JDBC

 Mono is a wreck..i've written to the mormons at novell multiple times
 to try to get a build that works and have yet to hear
 any semicompetent/coherent response from any of them
 be prepared to dedicate yourself to a minimum of 6 months of effort.. just
 to get mono working

 C# only works on MS .NOTYET frameworks..Microsofts proprietary container
 called the GAC accepts MSIL assemblies created by a .NOTYET language such as
 C#/VB.Yet
 but wait a minute you're only half way there..to make the assembly run on
 anything other than .NET framework you'll need to compile into a standalone
 native dll..another step in the process

 apache is opensource and C# is not.. so if your objective is to
 shoehorn your .NOTYET native dlls into apache you will need to craft a
 mod_DOTNOTYET module..this in itself is not trivial as you'll have to cast
 away any MS/DOTNOTYET specific datatypes/methods/namespaces/MS_garbage and
 put everything into
 primitive datatype string format..implementing simple methods
 without using complex collections/ or complex-structures and namespaces AND
 XSDs that apache can resolve

 finally deploying to linux,solaris,aix,hpux or bsd which I believe is your
 final goal will be out of reach until you have successfully:
 1)untangled the hopelessly entangled and non-operational mono environment
 into a working building environment to provide an operational .NET framework
 on the nix box
 2)shoehorn your native dlls into a mod_DOTNETYET module configuration
 to be accepted by Apache..BTW this is not trivial..if apache sees any
 .NOTYET specific structs
 it will gpf the server

 FUNDING: perhaps writing recipes to feed the contractors working for BP will
 make available millions of dollars you will need to fund this effort?

 FYI: accomplishing this in either a native Apache or J2EE environment could
 be accomplished in 1/4 the time and budget that your proposed architecture will
 use

 *your call*
 Martin


 Date: Thu, 17 Jun 2010 04:49:14 -0700
 Subject: [GENERAL] [Blatant Alias Abuse] Anyone want to help out with a
 coding problem?
 From: m...@kitchenpc.com
 To: pgsql-general@postgresql.org

 Hi all – I apologize for the misuse of this alias, but I figured I
 have the audience that might be interested in helping me out with a
 rather difficult coding problem. In fact, if I can find someone who’s
 looking for some part-time work and is interested in a bit of extra
 

Re: [GENERAL] Given N, finding the interval of N hours with max(sum(..))

2010-06-17 Thread Leonardo F
I'm trying to make a query that, given N and a date, gives me the interval of 
N hours with the max(sum(...)). 

select sum(i) as s,
       timestamp '2010-06-16 00:00:00'
         + extract(hour from d)::integer/3*3 * interval '1 hour' as sumd
from p
where d >= '2010-06-16 00:00:00' and d < '2010-06-17 00:00:00'
group by extract(hour from d)::integer/3
order by s desc limit 1;

is this what you're looking for?






Re: [GENERAL] oid data types mapping in libpq functions

2010-06-17 Thread Adrian Klaver
On Wednesday 16 June 2010 7:42:02 pm zhong ming wu wrote:
 Dear List

 Where can I find this mapping of oid to pg data types mentioned in
 libpq documentation?
 Why is such information not mentioned in the documentation?  Is it general
 knowledge?

 Thanks

 Mr Wi

I think you are looking for the table pg_type.

SELECT oid,typname from pg_type;

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Merlin Moncure
On Thu, Jun 17, 2010 at 4:47 AM, Felde Norbert feno...@gmail.com wrote:
 Before I began I made a filesystem level backup after I stopped the
 postgres service.
 I have the original 0003 file, the size is 204800, The size of the
 other files in this dir is 262144.

hm...any indication of why the file is small? run out of space?
filesystem/os issues?

try appending hex 0x55 to the end of the file until it's the right
length and starting up the database. disable everything (don't vacuum)
and immediately try to pull off a dump.

what's the last good backup you have?

merlin



Re: [GENERAL] oid data types mapping in libpq functions

2010-06-17 Thread Merlin Moncure
On Wed, Jun 16, 2010 at 10:42 PM, zhong ming wu mr.z.m...@gmail.com wrote:
 Dear List

 Where can I find this mapping of oid to pg data types mentioned in
 libpq documentation?
 Why is such information not mentioned in the documentation?  Is it general
 knowledge?

curious: what do you need the oids for?

built-in type oids are defined in pg_type.h:
cat src/include/catalog/pg_type.h | grep OID | grep define

built-in type oids don't change. you can pretty much copy/paste the
output of the above into an app...just watch out for some types that may
not be in older versions.

user defined type oids (tables, views, composite types, enums, and
domains) have an oid generated when the type is created.  since that oid
can change via DDL, you should look it up by name at appropriate times.
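
For example (a sketch; 'my_enum' is a hypothetical type name), the lookup
can be done against pg_type directly or with a regtype cast:

SELECT oid FROM pg_type WHERE typname = 'my_enum';

SELECT 'public.my_enum'::regtype::oid;  -- schema-qualified variant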

if you want to be completely abstracted from the type oids, look here:
http://libpqtypes.esilo.com/

merlin



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Merlin Moncure
On Thu, Jun 17, 2010 at 10:35 AM, Merlin Moncure mmonc...@gmail.com wrote:
 On Thu, Jun 17, 2010 at 4:47 AM, Felde Norbert feno...@gmail.com wrote:
 Before I began I made a filesystem level backup after I stopped the
 postgres service.
 I have the original 0003 file, the size is 204800, The size of the
 other files in this dir is 262144.

 hm...any indication of why the file is small? run out of space?
 filesystem/os issues?

 try appending hex 0x55 to the end of the file until it's the right
 length and starting up the database. disable everything (don't vacuum)
 and immediately try to pull off a dump.

 what's the last good backup you have?

if you want to move this along faster and get some more minds on the
problem, hop onto postgresql irc channel:
http://www.postgresql.org/community/irc

also make sure not to modify your original filesystem snapshot;
always modify the copy first.

merlin



[GENERAL] weird sortorder issue

2010-06-17 Thread Andreas

Hi,
I've got a productive pg-server V8.4 on SuSE and my dev-server on winxp
localhost.


The 2 servers produce different results in text sorting.
The local server sorts as expected but the productive db produces 
unexpected orders.
They have identical data, as the testbox has daily backups of the
productive db.


E.g. there is a table that holds projects.

A   select project from projects order by project;   delivers on the
productive-db:

StatStuff
ST Evo
STP
ST Polithings
Syback

How can the STP be between the two STs?
Actually one should expect the two STs to appear as the first
projects with S, because ' ' < 'A'.


Another example is our " Sales" project. I made the blank in front to
get it always on the top of the list.
This works with PG on windows, but PG on SUSE ignores the leading blank
and puts " Sales" among the other projects that start with S.


The DB on WinXP is  ENCODING = 'UTF8'
                    LC_COLLATE = 'German, Germany'
                    LC_CTYPE = 'German, Germany'

The DB on SUSE has  ENCODING = 'UTF8'
                    LC_COLLATE = 'de_DE.UTF-8'
                    LC_CTYPE = 'de_DE.UTF-8'

There is no de_DE.UTF-8 on Windows and no 'German, Germany' on SUSE, so
should I expect them to be equivalent?


Can someone point me to the right server option that controls the sort
order, to get the SUSE server right?
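
For what it's worth, the difference is likely the collation itself: glibc's
de_DE collation (used on the SUSE box) largely ignores spaces and punctuation
at the first comparison level, while the Windows locale does not, which would
explain both orderings above. A quick way to confirm what each server is
actually using:

SHOW lc_collate;
SELECT datname, datcollate FROM pg_database;  -- datcollate exists from 8.4 on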




Re: [GENERAL] Wrong number of empty toast pages

2010-06-17 Thread Tom Lane
Jacek Trzmiel sc...@hot.pl writes:
 I have a problem with growing toast table in pgsql 8.3.5.  Probably 
 raising max_fsm_pages and running autovacuum more aggressively will fix
 it, but during investigation I've got this puzzling output from VACUUM 
 ANALYZE VERBOSE:
 INFO:  pg_toast_16505: found 756385 removable, 23003704 nonremovable row 
 versions in 28446864 pages
 DETAIL:  14586 dead row versions cannot be removed yet.
 There were 92404502 unused item pointers.
 22769942 pages contain useful free space.
 42 pages are entirely empty.

 So there are 23M rows in 28.5M pages.  Assuming worst case of 1 row per 
 page, there should be 5.5M completely free pages.  But vacuum says that 
 only 42 are entirely empty.  Am I missing something obvious here?

In this context "entirely empty" suggests that the page not only doesn't
contain any rows, but never has contained any rows --- because it hasn't
got any unused item pointers either.  This is an unusual situation,
which is why VACUUM troubles to point it out.

You've definitely got a bad bloat problem there, but the numbers don't
seem inherently inconsistent.  Adding removable+nonremovable+unused
gives 116164591 which is just about 4 times the number of pages, so
it appears that the toast stuff is packing 4 rows per page as expected.
But an awful lot of them have been deleted over time without the space
getting re-used, which implies that the FSM is too small to remember
the space.  Also, the fact that so many rows were removable this time
suggests it's been too long since the last vacuum.  So your plan sounds
about right.
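
A sketch of that plan (the table name and setting value are illustrative;
changing max_fsm_pages requires a server restart):

SHOW max_fsm_pages;      -- 8.3: must be large enough to track the free pages
-- in postgresql.conf:  max_fsm_pages = 30000000
VACUUM VERBOSE mytable;  -- vacuuming a table also vacuums its toast table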

regards, tom lane



Re: [GENERAL] Question about weird COPY syntax

2010-06-17 Thread Evan Carroll
While asking questions about the grammar of COPY: why do we have a
bracket of `[ [ WITH ] ...` rather than `[ WITH ] ...`?

COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]   (why is this opening bracket, which is never closed, here?)
[ BINARY ]
[ OIDS ]
[ DELIMITER [ AS ] 'delimiter' ]
[ NULL [ AS ] 'null string' ]
[ -PERL_key = value ]
[ -PERL_key ]
[ CSV [ HEADER ]
[ QUOTE [ AS ] 'quote' ]
[ ESCAPE [ AS ] 'escape' ]
[ FORCE NOT NULL column [, ...] ]
[ -TCSV_key ]
[ -TCSV_key = value ]

-- 
Evan Carroll
System Lord of the Internets
http://www.evancarroll.com



[GENERAL] Question about weird COPY syntax

2010-06-17 Thread Evan Carroll
Why is it that the DELIMITER and NULL options are under [WITH] but
not under [CSV]? This seems at least slightly awkward.

Command: COPY
Description: copy data between a file and a table
Syntax:
COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]
  [ BINARY ]
  [ OIDS ]
  [ DELIMITER [ AS ] 'delimiter' ]
  [ NULL [ AS ] 'null string' ]
  [ CSV [ HEADER ]   <- Why not put them under this?
[ QUOTE [ AS ] 'quote' ]
[ ESCAPE [ AS ] 'escape' ]
[ FORCE NOT NULL column [, ...] ]

-- 
Evan Carroll
System Lord of the Internets
http://www.evancarroll.com



Re: [GENERAL] Question about weird COPY syntax

2010-06-17 Thread Tom Lane
Evan Carroll m...@evancarroll.com writes:
 While asking questions about the grammar of COPY why do we have a
 bracket of `[ [ WITH ] ...` rather than `[ WITH ]` ...

Somebody left out the closing brackets in the documentation, evidently.
Will fix.

regards, tom lane



Re: [GENERAL] Question about weird COPY syntax

2010-06-17 Thread Tom Lane
Evan Carroll m...@evancarroll.com writes:
 Why is it that DELIMITER, and NULL options are under [WITH] but
 not under [CSV]. This seems at least slightly awkward.

They're not specific to CSV.
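
For example (8.x syntax, hypothetical table name), both of these are valid;
DELIMITER and NULL apply to the plain text format as well as to CSV:

COPY mytable FROM STDIN WITH DELIMITER '|' NULL 'NULL';
COPY mytable FROM STDIN WITH DELIMITER ';' NULL '' CSV HEADER;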

regards, tom lane



Re: [GENERAL] Working with pages of data (LIMIT/OFFSET keyword)

2010-06-17 Thread Francisco Figueiredo Jr.
I would go with 2).

Npgsql supports multiple resultsets. You can pass queries separated by
a semicolon ';', or you can use a procedure call which returns a setof
refcursor.

Either way, you will need to call NextResult on your DataReader just
like with SQLServer.

You can check our user manual: http://manual.npgsql.org to get more
info about how to use refcursors

I hope it helps.

Please, let me know if you need more info.
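
For example, a single semicolon-separated batch (one round trip, one
implicit transaction; 'recipes' is a hypothetical table) that the data
reader walks with NextResult:

SELECT count(*) FROM recipes WHERE title ILIKE '%soup%';
SELECT * FROM recipes WHERE title ILIKE '%soup%'
ORDER BY title LIMIT 30 OFFSET 0;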



On Wed, Jun 16, 2010 at 16:44, Szymon Guz mabew...@gmail.com wrote:


 2010/6/16 Mike Christensen m...@kitchenpc.com

 I'm generating a query on the fly to return a set of data, however I
 only want to display 30 rows at a time to the user.  For this reason,
 I use the LIMIT 30 OFFSET x clause on the select statement.  However,
 I also want to know the total rows that match this query had there
 been no limit, that way I can display to the user the total count and
 the number of pages, and have Next/Prev buttons in my UI.  I can think
 of the following ways to do this:

 1) When the page loads, execute two totally separate queries.  One
 that does the COUNT, and then another query immediately after to get
 the desired page of data.  I don't like this as much because the two
 queries will execute in separate transactions and it'd be nice if I
 could just perform a single SQL query and get all this information at
 once.  However, I will use this if there's no other way.


 Just run them in one transaction.
 You can also just show the Next/Prev buttons and then do something just for
 the case where there is no data.
 Or use LIMIT 31 so you always know that there is the next page with at least
 one record.
 regards
 Szymon Guz
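
Another one-query variant (a sketch; needs 8.4+ window functions): the
window aggregate reports the total match count on every row of the
requested page, so no separate COUNT query is needed.

SELECT r.*, count(*) OVER () AS total_rows
FROM recipes r
ORDER BY title
LIMIT 30 OFFSET 60;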



-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://fxjr.blogspot.com
http://twitter.com/franciscojunior



[GENERAL] XML - DOCTYPE element - documentation suggestion

2010-06-17 Thread Craig Ringer
Hi all

I've been working with XML storage in Pg and was puzzled by the fact
that Pg appears to refuse to store a document with a DOCTYPE declaration
- it was interpreting it as a regular element and rejecting it.

This turns out to be because Pg parses XML as a fragment (ie option
CONTENT) when casting, and XML fragments cannot have a doctype.
Unfortunately the error is ... unhelpful ... and the documentation
neglects to mention this issue. Hence my post.

I didn't see anything about this in the FAQ or in the docs for the XML
datatype
(http://www.postgresql.org/docs/current/interactive/datatype-xml.html)
and was half-way through writing this post when I found a helpful
message on the list:

 http://www.mail-archive.com/pgsql-general@postgresql.org/msg119713.html

that hinted the way. Even then it took me a while to figure out that you
can't specify DOCUMENT or CONTENT on the XML type itself, but must
specify it while parsing instead, and use a CHECK constraint if you want
to require storage of whole documents in a field.
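
A sketch of that workaround: parse explicitly as a DOCUMENT, and let a
CHECK constraint require whole documents in the column.

CREATE TABLE test_xml (
    doc xml CHECK (doc IS DOCUMENT)
);

INSERT INTO test_xml (doc) VALUES (
    XMLPARSE(DOCUMENT '<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE test SYSTEM ''test.dtd''><test>dummy content</test>')
);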

The xml datatype documentation should probably mention that whole
documents must be loaded with XMLPARSE(DOCUMENT 'doc_text_here'); they
cannot just be cast from text to xml, as happens when you pass an xml
document as text to a parameter during an INSERT. This should probably
appear under "Creating XML Values" in:

  http://www.postgresql.org/docs/current/static/datatype-xml.html

... and probably deserves mention in a new CAVEATS or NOTES section
too, as it *will* catch people out even if they read TFM.



I'd expect this to work:

CREATE TABLE test_xml ( doc xml );

INSERT INTO test_xml ( doc ) VALUES (
$$<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE test SYSTEM 'test.dtd'><test>dummy content</test>$$
);

... but it fails with:


ERROR:  invalid XML content
LINE 2: $$<?xml version="1.0" encoding="utf-8"?>
        ^
DETAIL:  Entity: line 2: parser error : StartTag: invalid element name
<!DOCTYPE test SYSTEM 'test.dtd'><test>dummy content</test>
 ^


though xmllint (from libxml) is quite happy with the document. This had
me quite confused for a while.

--
Craig Ringer



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Tom Lane
Felde Norbert feno...@gmail.com writes:
 The message is the same for the original pg_clog/0003, the 0003
 containing binary 0 and after pg_resetxlog:
 pg_dump: Error message from server: ERROR:  could not access status of
 transaction 3974799
 DETAIL:  Could not read from file pg_clog/0003 at offset 204800: No error.
 pg_dump: The command was: COPY public.active_sessions_split (ct_sid,
 ct_name, ct_pos, ct_val, ct_changed) TO stdout;
 pg_dump: *** aborted because of error

 If I create the bigger 0003 containing 0, then I get this:
 pg_dump: Error message from server: ERROR:  xlog flush request
 0/A19F5BF8 is not satisfied --- flushed only to 0/A02A1AC8
 CONTEXT:  writing block 1149 of relation 1663/4192208/4192508
 pg_dump: The command was: COPY public.history (historyid, adatkod,
 elemid, userid, ido, actionid, targyid, szuloid, opvalue, longfield,
 longtext) TO stdout;
 pg_dump: *** aborted because of error

I'm afraid this means you're screwed :-(.  Both of those symptoms imply
that one part of the database storage is out of sync with another part:
the first error says there are transaction IDs in the
active_sessions_split table that don't exist in pg_clog, and the second
error says that there are pages in the history table that were last
updated by WAL records that don't exist in pg_xlog.  If there are two
such errors, there are probably more.

You weren't too specific about how you got into this state, but I
suppose that it must have been a system crash or power failure.  Even
then, you would not have gotten burnt if the filesystem and hardware
did what they're supposed to do.  I suspect you have a setup wherein
fsync() calls aren't being honored properly.  You may need to disable
write caching on your disks, and/or switch to another filesystem or OS.
(Personally I'd never run a database I cared about on Windows.)

regards, tom lane



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Dann Corbit
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Tom Lane
 Sent: Thursday, June 17, 2010 11:54 AM
 To: Felde Norbert
 Cc: pgsql-general@postgresql.org; Scott Marlowe
 Subject: Re: [GENERAL] postgres crash SOS
 
 Felde Norbert feno...@gmail.com writes:
  The message is the same for the original pg_clog/0003, the 0003
  containing binary 0 and after pg_resetxlog:
  pg_dump: Error message from server: ERROR:  could not access status
 of
  transaction 3974799
  DETAIL:  Could not read from file pg_clog/0003 at offset 204800: No
 error.
  pg_dump: The command was: COPY public.active_sessions_split (ct_sid,
  ct_name, ct_pos, ct_val, ct_changed) TO stdout;
  pg_dump: *** aborted because of error
 
  If I create the bigger 0003 containing 0, then I get this:
  pg_dump: Error message from server: ERROR:  xlog flush request
  0/A19F5BF8 is not satisfied --- flushed only to 0/A02A1AC8
  CONTEXT:  writing block 1149 of relation 1663/4192208/4192508
  pg_dump: The command was: COPY public.history (historyid, adatkod,
  elemid, userid, ido, actionid, targyid, szuloid, opvalue, longfield,
  longtext) TO stdout;
  pg_dump: *** aborted because of error
 
 I'm afraid this means you're screwed :-(.  Both of those symptoms imply
 that one part of the database storage is out of sync with another part:
 the first error says there are transaction IDs in the
 active_sessions_split table that don't exist in pg_clog, and the second
 error says that there are pages in the history table that were last
 updated by WAL records that don't exist in pg_xlog.  If there are two
 such errors, there are probably more.
 
 You weren't too specific about how you got into this state, but I
 suppose that it must have been a system crash or power failure.  Even
 then, you would not have gotten burnt if the filesystem and hardware
 did what they're supposed to do.  I suspect you have a setup wherein
 fsync() calls aren't being honored properly.  You may need to disable
 write caching on your disks, and/or switch to another filesystem or OS.
 (Personally I'd never run a database I cared about on Windows.)

Somehow, I doubt that Windows is to blame.  For instance, Oracle and SQL*Server 
seem to run fine on Windows without this sort of problem.




Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Tom Lane
Dann Corbit dcor...@connx.com writes:
 (Personally I'd never run a database I cared about on Windows.)

 Somehow, I doubt that Windows is to blame.  For instance, Oracle and 
 SQL*Server seem to run fine on Windows without this sort of problem.

Really?  Are you front-line support for either, so that you can say
that they never have this sort of problem on the basis of any actual
evidence?  Our failure rate on Windows is probably pretty low too,
it's just that we only hear about non-working cases here.

(Perhaps more to the point, if they don't have problems, it's likely
because they tell their customers how to configure Windows boxes safely
before the fact.  And people who are spending the money for an Oracle
license will heed that advice.)

regards, tom lane



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Scott Marlowe
On Thu, Jun 17, 2010 at 1:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 (Perhaps more to the point, if they don't have problems, it's likely
 because they tell their customers how to configure Windows boxes safely
 before the fact.  And people who are spending the money for an Oracle
 license will heed that advice.)

The only advice I've ever gotten from Oracle regarding running it on
Windows was "don't do that."



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Dann Corbit
 -Original Message-
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]
 Sent: Thursday, June 17, 2010 12:20 PM
 To: Dann Corbit
 Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
 Subject: Re: [GENERAL] postgres crash SOS
 
 Dann Corbit dcor...@connx.com writes:
  (Personally I'd never run a database I cared about on Windows.)
 
  Somehow, I doubt that Windows is to blame.  For instance, Oracle and
 SQL*Server seem to run fine on Windows without this sort of problem.
 
 Really?  Are you front-line support for either, so that you can say
 that they never have this sort of problem on the basis of any actual
 evidence?  Our failure rate on Windows is probably pretty low too,
 it's just that we only hear about non-working cases here.

I have worked as a DBA for large farms of database systems.
 
 (Perhaps more to the point, if they don't have problems, it's likely
 because they tell their customers how to configure Windows boxes safely
 before the fact.  And people who are spending the money for an Oracle
 license will heed that advice.)
 

Care was taken in selection of hardware and configuration.  Is this different 
for any database system?




Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Joshua D. Drake
On Thu, 2010-06-17 at 19:27 +, Dann Corbit wrote:
  -Original Message-
  From: Tom Lane [mailto:t...@sss.pgh.pa.us]
  Sent: Thursday, June 17, 2010 12:20 PM
  To: Dann Corbit
  Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
  Subject: Re: [GENERAL] postgres crash SOS
  
  Dann Corbit dcor...@connx.com writes:
   (Personally I'd never run a database I cared about on Windows.)
  
   Somehow, I doubt that Windows is to blame.  For instance, Oracle and
  SQL*Server seem to run fine on Windows without this sort of problem.
  
  Really?  Are you front-line support for either, so that you can say
  that they never have this sort of problem on the basis of any actual
  evidence?  Our failure rate on Windows is probably pretty low too,
  it's just that we only hear about non-working cases here.
 
 I have worked as a DBA for large farms of database systems.

I think this argument as a whole represents so many "what ifs" that it is not
even worth having.

Fact: Windows Server is a decent OS for databases
Caveat: If properly managed
Wish: A decent shell to work in (although powershell is interesting)

Big fark of a caveat there. Of course the same caveat applies to Linux
or FreeBSD too.

  
  (Perhaps more to the point, if they don't have problems, it's likely
  because they tell their customers how to configure Windows boxes safely
  before the fact.  And people who are spending the money for an Oracle
  license will heed that advice.)
  
 
 Care was taken in selection of hardware and configuration.  Is this different 
 for any database system?

Yes :) but does it matter?

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering




Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Tom Lane
Joshua D. Drake j...@commandprompt.com writes:
 On Thu, 2010-06-17 at 19:27 +, Dann Corbit wrote:
 (Perhaps more to the point, if they don't have problems, it's likely
 because they tell their customers how to configure Windows boxes safely
 before the fact.  And people who are spending the money for an Oracle
 license will heed that advice.)
 
 Care was taken in selection of hardware and configuration.  Is this 
 different for any database system?

 Yes :) but does it matter?

Sure it matters.  The people who are complaining are those who dropped
PG onto some el-cheapo generic PC or other and didn't spend any time
inquiring into things like write cache settings, if indeed they even
know what those are.

It's possible that SQL Server can survive in that kind of environment
because it knows about undocumented Windows hooks that allow it to force
the right things to happen even in a badly-configured machine.  It's
also possible that SQL Server gets just as badly hosed as we do.

regards, tom lane



Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Merlin Moncure
On Thu, Jun 17, 2010 at 2:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 You weren't too specific about how you got into this state, but I
 suppose that it must have been a system crash or power failure.  Even
 then, you would not have gotten burnt if the filesystem and hardware
 did what they're supposed to do.  I suspect you have a setup wherein
 fsync() calls aren't being honored properly.  You may need to disable
 write caching on your disks, and/or switch to another filesystem or OS.
 (Personally I'd never run a database I cared about on Windows.)

Although I don't run pg/windows anymore, I did for years and I think
you're right -- I bet there was a hard reset event and either fsync
was off or not being honored properly.  I was communicating with the
OP off list to see if there was any evidence of a disk-full condition
(no response yet).  I never saw any data corruption in line with this
without some external trigger.

In regards to windows/pg generally, I don't think it's all that bad.
The windows NT kernel is absolutely rock solid stable (quite frankly,
more so than linux IMO) and very well supported in terms of hardware.
otoh, windows kernel configuration is wacky, the shell sucks, the security
model is insanely overdesigned, posix is not really supported, etc.  It's a
mixed bag for sure.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Dann Corbit
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Merlin Moncure
 Sent: Thursday, June 17, 2010 12:56 PM
 To: Tom Lane
 Cc: Felde Norbert; pgsql-general@postgresql.org; Scott Marlowe
 Subject: Re: [GENERAL] postgres crash SOS
 
 On Thu, Jun 17, 2010 at 2:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  You weren't too specific about how you got into this state, but I
  suppose that it must have been a system crash or power failure.  Even
  then, you would not have gotten burnt if the filesystem and hardware
  did what they're supposed to do.  I suspect you have a setup wherein
  fsync() calls aren't being honored properly.  You may need to disable
  write caching on your disks, and/or switch to another filesystem or
 OS.
  (Personally I'd never run a database I cared about on Windows.)
 
 Although I don't run pg/windows anymore, I did for years and I think
 you're right -- I bet there was a hard reset event and either fsync
 was off or not being honored properly.  I was communicating with the
 OP off list to see if there was any evidence of disk full condition
 (no response yet).   I never saw any data corruption in line with this
 w/o some external trigger.
 
 In regards to windows/pg generally, I don't think it's all that bad.
 The windows NT kernel is absolutely rock solid stable (quite frankly,
 more so than Linux IMO) and very well supported in terms of hardware.
 otoh, windows kernel configuration is wacky, the shell sucks, insanely
 overdesigned security model, posix not really supported, etc.  It's a
 mixed bag for sure.

I think that the bottom line is that (like any DBMS) effort is needed to 
make PostgreSQL run smoothly on any platform, but Windows support is still 
fairly new, while on POSIX systems all the kinks have had plenty of time to get 
worked out.  So it should not be unexpected that there will be more problems 
seen on Windows platforms than on POSIX type platforms.  Even so, I think that 
the progress on Windows has been excellent and that PostgreSQL is definitely a 
viable alternative to traditional commercial systems.  In all cases (Windows or 
other alternatives) we should use thoroughly tested and carefully configured 
systems to store our business data.

Nothing new or remarkable here.  Just common sense.
IMO-YMMV

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Felde Norbert
The first error message was what I got after postgres crashed and I
tried to make a dump, run vacuum, or do something else.
The second message I got when I tried to repair the problem, so it
does not matter, because I did something wrong, I see.

If I could choose I would use a Linux server too, but if the partner
says there is a Windows server and you have to use that, then there is
no discussion.

As for why I was not specific about how this state came about: I do not
know.  I could not find anything about a power failure, and disk space
seemed to be more than needed.  There were entries in the log about full
virtual memory.



2010/6/17 Tom Lane t...@sss.pgh.pa.us:
 Felde Norbert feno...@gmail.com writes:
 The message is the same for the original pg_clog/0003, for a 0003
 filled with binary 0, and after pg_resetxlog:
 pg_dump: Error message from server: ERROR:  could not access status of
 transaction 3974799
 DETAIL:  Could not read from file pg_clog/0003 at offset 204800: No error.
 pg_dump: The command was: COPY public.active_sessions_split (ct_sid,
 ct_name, ct_pos, ct_val, ct_changed) TO stdout;
 pg_dump: *** aborted because of error

 If I create a bigger 0003 filled with 0, then I get this:
 pg_dump: Error message from server: ERROR:  xlog flush request
 0/A19F5BF8 is not satisfied --- flushed only to 0/A02A1AC8
 CONTEXT:  writing block 1149 of relation 1663/4192208/4192508
 pg_dump: The command was: COPY public.history (historyid, adatkod,
 elemid, userid, ido, actionid, targyid, szuloid, opvalue, longfield,
 longtext) TO stdout;
 pg_dump: *** aborted because of error

 I'm afraid this means you're screwed :-(.  Both of those symptoms imply
 that one part of the database storage is out of sync with another part:
 the first error says there are transaction IDs in the
 active_sessions_split table that don't exist in pg_clog, and the second
 error says that there are pages in the history table that were last
 updated by WAL records that don't exist in pg_xlog.  If there are two
 such errors, there are probably more.

 You weren't too specific about how you got into this state, but I
 suppose that it must have been a system crash or power failure.  Even
 then, you would not have gotten burnt if the filesystem and hardware
 did what they're supposed to do.  I suspect you have a setup wherein
 fsync() calls aren't being honored properly.  You may need to disable
 write caching on your disks, and/or switch to another filesystem or OS.
 (Personally I'd never run a database I cared about on Windows.)

                        regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres crash SOS

2010-06-17 Thread Merlin Moncure
On Thu, Jun 17, 2010 at 4:51 PM, Felde Norbert feno...@gmail.com wrote:
 The first error message was what I got after postgres crashed and I
 tried to make a dump, run vacuum, or do something else.
 The second message I got when I tried to repair the problem, so it
 does not matter, because I did something wrong, I see.

 If I could choose I would use a Linux server too, but if the partner
 says there is a Windows server and you have to use that, then there is
 no discussion.

 As for why I was not specific about how this state came about: I do not
 know.  I could not find anything about a power failure, and disk space
 seemed to be more than needed.  There were entries in the log about full
 virtual memory.

This came before the crash?  Are you sure the server didn't reset
following the virtual-memory-full condition?

Memory full is a very dangerous condition for a database server and
may have contributed to your problem or been a symptom of another
problem.  The main things we need to know (any data corruption issue
is worth trying to diagnose after the fact) are:

*) what is the setting for fsync?
*) Are you using a raid controller?  how is the cache configured?
*) If not, is your drive configured to buffer writes?
*) How much free space is left on your various volumes on the computer?

Did you check the system event log for interesting events at or around
the time you saw virtual memory full?  Can we see the log message
reporting the memory-full condition, as well as the surrounding messages?
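
For reference, the first couple of these can be checked straight from
psql (a minimal sketch; fsync, full_page_writes and wal_sync_method are
standard parameters exposed through pg_settings):

SHOW fsync;

SELECT name, setting, source
  FROM pg_settings
 WHERE name IN ('fsync', 'full_page_writes', 'wal_sync_method');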

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Working with pages of data (LIMIT/OFFSET keyword)

2010-06-17 Thread Mike Christensen
Sweet!  I knew there was a way to do this, I'll mess around with it
more in a bit.
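
For the archives, here is roughly what I plan to try, based on the
suggestions below (a sketch only; the recipes table and its columns are
invented for illustration).  On 8.4 and later, a window aggregate
returns the page and the total match count in a single statement:

SELECT r.*, count(*) OVER () AS total_rows
  FROM recipes r
 WHERE r.title ILIKE '%soup%'
 ORDER BY r.title
 LIMIT 30 OFFSET 60;

And on any version, a plpgsql function can hand back both result sets
as refcursors (they must be fetched inside the same transaction, e.g.
with FETCH ALL IN c_page):

CREATE OR REPLACE FUNCTION recipe_page(p_limit int, p_offset int)
RETURNS SETOF refcursor AS $$
DECLARE
  c_page  refcursor := 'c_page';
  c_count refcursor := 'c_count';
BEGIN
  -- first cursor: the requested page of rows
  OPEN c_page FOR
    SELECT * FROM recipes ORDER BY title LIMIT p_limit OFFSET p_offset;
  RETURN NEXT c_page;
  -- second cursor: the total number of matching rows
  OPEN c_count FOR SELECT count(*) FROM recipes;
  RETURN NEXT c_count;
END;
$$ LANGUAGE plpgsql;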

On Thu, Jun 17, 2010 at 10:38 AM, Francisco Figueiredo Jr.
franci...@npgsql.org wrote:
 I would go with 2).

 Npgsql supports multiple resultsets. You can pass queries separated by
 semicolons (';'), or you can use a procedure call which returns a setof
 refcursor.

 Either way, you will need to call NextResult on your DataReader just
 like with SQL Server.

 You can check our user manual: http://manual.npgsql.org to get more
 info about how to use refcursors

 I hope it helps.

 Please, let me know if you need more info.



 On Wed, Jun 16, 2010 at 16:44, Szymon Guz mabew...@gmail.com wrote:


 2010/6/16 Mike Christensen m...@kitchenpc.com

 I'm generating a query on the fly to return a set of data, however I
 only want to display 30 rows at a time to the user.  For this reason,
 I use the LIMIT 30 OFFSET x clause on the select statement.  However,
 I also want to know the total rows that match this query had there
 been no limit, that way I can display to the user the total count and
 the number of pages, and have Next/Prev buttons in my UI.  I can think
 of the following ways to do this:

 1) When the page loads, execute two totally separate queries.  One
 that does the COUNT, and then another query immediately after to get
 the desired page of data.  I don't like this as much because the two
 queries will execute in separate transactions and it'd be nice if I
 could just perform a single SQL query and get all this information at
 once.  However, I will use this if there's no other way.


 Just run them in one transaction.
 You can also always show the Next/Prev buttons and handle the case where
 there is no data.
 Or use LIMIT 31, so you always know whether there is a next page with at
 least one record.
 regards
 Szymon Guz



 --
 Regards,

 Francisco Figueiredo Jr.
 Npgsql Lead Developer
 http://www.npgsql.org
 http://fxjr.blogspot.com
 http://twitter.com/franciscojunior


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Excessive Deadlocks On Concurrent Inserts to Shared Parent Row

2010-06-17 Thread Marvin S. Addison
We're using PostgreSQL to back tickets in CAS, http://www.jasig.org/cas, 
in a large university environment where the ephemeral nature of CAS 
tickets places a high concurrent read/write workload on the database. 
We recently switched platforms from Oracle to PG and saw a dramatic 
increase in deadlocks for the following workflow executed concurrently:


 1. Begin transaction
 2. Create ticket (INSERT INTO SERVICETICKET...)
 3. Update parent ticket-granting ticket (UPDATE TICKETGRANTINGTICKET...)
 4. Commit
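
A plausible mechanism, for what it's worth: the foreign key check in
step 2 takes a shared lock on the parent TICKETGRANTINGTICKET row, two
concurrent transactions can both acquire that share lock, and each then
blocks the other when step 3 tries to upgrade to the exclusive row lock
for the UPDATE.  One common mitigation is to take the exclusive row lock
up front, so children of the same parent serialize.  A sketch (column
names beyond those visible in the logs below are guesses):

BEGIN;
-- lock the parent row first, before the FK check takes its share lock
SELECT 1 FROM ticketgrantingticket WHERE id = 'TGT-1' FOR UPDATE;
INSERT INTO serviceticket (id, ticketgrantingticket_id, number_of_times_used)
  VALUES ('ST-1', 'TGT-1', 0);
UPDATE ticketgrantingticket SET last_time_used = 1276614383240
 WHERE id = 'TGT-1';
COMMIT;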

The following postgres logs provide a detailed example of two deadlocked 
transactions:


=
BLOCKED Transaction
=
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_creat (OID 957)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_open (OID 952)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lowrite (OID 955)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_close (OID 953)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_creat (OID 957)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_open (OID 952)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lowrite (OID 955)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_close (OID 953)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 PARSE LOG:  duration: 0.000 ms  parse unnamed: insert 
into SERVICETICKET (NUMBER_OF_TIMES_USED, CREATION_TIME, 
EXPIRATION_POLICY, LAST_TIME_USED, PREVIOUS_LAST_TIME_USED, 
ticketGrantingTicket_ID, FROM_NEW_LOGIN, TICKET_ALREADY_GRANTED, 
SERVICE, ID) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 BIND LOG:  duration: 0.000 ms  bind unnamed: insert 
into SERVICETICKET (NUMBER_OF_TIMES_USED, CREATION_TIME, 
EXPIRATION_POLICY, LAST_TIME_USED, PREVIOUS_LAST_TIME_USED, 
ticketGrantingTicket_ID, FROM_NEW_LOGIN, TICKET_ALREADY_GRANTED, 
SERVICE, ID) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 BIND DETAIL:  parameters: $1 = '0', $2 = 
'1276614383240', $3 = '3535872', $4 = '1276614383240', $5 = '0', $6 = 
'TGT-198041-jErDe9IINT5toBTOKtgNt1EQ9JUk6pcsvQDgO93UkidCj7WIpp-auth-2', 
$7 = 'f', $8 = 'f', $9 = '3535873', $10 = 
'ST-205046-96V10UhgRMePGqkCAEOo-auth-2'
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 INSERT LOG:  duration: 0.000 ms  execute unnamed: 
insert into SERVICETICKET (NUMBER_OF_TIMES_USED, CREATION_TIME, 
EXPIRATION_POLICY, LAST_TIME_USED, PREVIOUS_LAST_TIME_USED, 
ticketGrantingTicket_ID, FROM_NEW_LOGIN, TICKET_ALREADY_GRANTED, 
SERVICE, ID) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 INSERT DETAIL:  parameters: $1 = '0', $2 = 
'1276614383240', $3 = '3535872', $4 = '1276614383240', $5 = '0', $6 = 
'TGT-198041-jErDe9IINT5toBTOKtgNt1EQ9JUk6pcsvQDgO93UkidCj7WIpp-auth-2', 
$7 = 'f', $8 = 'f', $9 = '3535873', $10 = 
'ST-205046-96V10UhgRMePGqkCAEOo-auth-2'
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_creat (OID 957)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_open (OID 952)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lowrite (OID 955)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_close (OID 953)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_creat (OID 957)
2010-06-15 11:06:23 EDT cas 198.82.164.234 cas_user 4c17957b.5133 
5/696029 3335052 idle in transaction LOG:  duration: 0.000 ms  fastpath 
function call: lo_open (OID 952)

Re: [GENERAL] An Enigma of a weird query problem

2010-06-17 Thread pythonista
Sorry, never mind: the second query returns results that are correct as
written.  It first resolves the inner query to a single date, and then
gets all records matching that date.
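
In other words, the date filter alone lets in every row carrying that
timestamp, including the sent = True one; repeating the predicates in
the outer query returns only the intended rows (a sketch against the
table quoted below):

select str_key, sent, date_to_resend
  from rssitem
 where sent = False
   and user_id = 58
   and date_to_resend = (select min(date_to_resend)
                           from rssitem
                          where sent = False
                            and user_id = 58);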

On Jun 17, 6:25 am, pythonista sitecontac...@gmail.com wrote:
 Hi, I'm new:

 This table is called rssitem.   Given the fields:

 str_key (varchar)   sent (bool)   date_to_resend (timestamp)   user_id
 
 45rtuu3433ff        False         2010-06-17 01:50:00-05       58   -- first 2 recs
 trtwweddasa         True          2010-06-17 01:50:00-05       58   -- have same datetime
 gtterere            False         2010-06-18 01:50:00-05       58
 dd                  False         2010-06-19 01:50:00-05       58

 (The str_key is the primary key. It's actually a unique hash value,
 and it's unrelated to the issue, just explaining.)

 Sooo... First, a query that returns the correct result:

     select min(date_to_resend) from rssitem where sent = False and
 user_id = 58

     Returns:  2010-06-17 01:50:00-05, successfully matching the
  very first record.

 However, this query (which uses the first query in a subquery)

 select str_key, sent,date_to_resend from rssitem where date_to_resend
 IN
       (select min(date_to_resend) from rssitem where sent = False and
 user_id = 58)

 Returns the first record, which is correct, but also the second record,
 which does not belong, due to sent = True:

 45rtuu3433ff    FALSE   2010-06-17 01:50:00-05    58
 trtwweddasa   TRUE      2010-06-17 01:50:00-05    58

 Might this be a bug?  I can't quite wrap my brain around it.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UUID/GUID

2010-06-17 Thread Ozz Nixon
I found a page:

http://www.postgresql.org/docs/8.3/static/uuid-ossp.html

Is that the only way to get UUIDs from PostgreSQL?

* We are in the midst of porting a replication product to support PostgreSQL, 
which relies on UUIDs/GUIDs for uniqueness instead of sequences and site IDs.

Thanks,
O.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID/GUID

2010-06-17 Thread Joshua D. Drake
On Thu, 2010-06-17 at 20:56 -0400, Ozz Nixon wrote:
 I found a page:
 
   http://www.postgresql.org/docs/8.3/static/uuid-ossp.html
 
   Is that the only way to get UUIDs from PostgreSQL?
 
 * We are in the midst of porting a replication product to support PostgreSQL, 
 which relies on UUIDs/GUIDs for uniqueness instead of sequences and site IDs.

No, you can use your own library as well (or your language's library).
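
For example, once the uuid-ossp contrib module from that page is
installed, generation can happen server-side (a sketch; the table is
invented, but uuid_generate_v4() is the module's random-UUID function):

SELECT uuid_generate_v4();

CREATE TABLE replicated_item (
    id   uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    body text
);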

Joshua D. Drake


 
 Thanks,
 O.

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Misunderstanding transactions and locks

2010-06-17 Thread Michael P. Soulier
On 16/06/10 Tom Lane did say:

  row exclusive mode does not block itself.
 
 Specifically, see the table in
 http://www.postgresql.org/docs/8.3/static/explicit-locking.html#LOCKING-TABLES
 
 Exclusive lock modes are those that conflict with themselves; to wit,
 SHARE ROW EXCLUSIVE mode or higher.  The mode names are, um, historical.

That explains much.
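
To make the self-conflict rule concrete for my own notes, a two-session
sketch (t is a placeholder table):

-- session 1
BEGIN;
LOCK TABLE t IN ROW EXCLUSIVE MODE;        -- granted

-- session 2, concurrently
BEGIN;
LOCK TABLE t IN ROW EXCLUSIVE MODE;        -- also granted: no self-conflict
LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;  -- blocks until session 1 commits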

Thanks,
Mike
-- 
Michael P. Soulier michael_soul...@mitel.com, 613-592-2122 x2522
Any intelligent fool can make things bigger and more complex... It takes a
touch of genius - and a lot of courage to move in the opposite direction.
--Albert Einstein

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general