Re: [HACKERS] missing data/global

2004-08-25 Thread Daniel Kalchev
Tom Lane said:
  Daniel Kalchev [EMAIL PROTECTED] writes:
   (found out 7.2.3 does not have pg_database)
  
  You think not?

Not as a file similar to pg_control. pg_database is indeed a table in the
system catalog.

   By the way, I had to copy over the 'new' files from pg_clog and pg_xlog (this
   is the second possible error) to get the postmaster running.
  
  That was *not* part of the recipe, and is guaranteed *not* to work.

I know that, but wondered if it would help in any way. By the way, what would
be the proper way to bring the WAL back in sync with the pg_control contents?

  
  It seems likely though that you are wasting your time --- the index
  failure suggests strongly that you have more corruption than just the
  loss of the /global subdirectory :-(

After spending some time looking for ways to adjust the pointers (which could
eventually save part of the data), I decided to move to plan B, which is to
have a few people manually re-enter the data. Wasting my own time would have
been more acceptable - but not if it takes days and the result is still not
guaranteed to be consistent.

Does such a tool exist that could dump data (records?) from the heap files,
given the table structure?
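
For the record, pg_filedump (mentioned elsewhere in these threads) can do part
of this. A hedged sketch, assuming a pg_filedump build matching 7.2 and its
-i/-f options; the relfilenode 12345 below is purely illustrative:

# -i interprets item (tuple) headers, -f adds formatted hex dumps of the data;
# 77573557 is the database directory, 12345 a hypothetical table's file
pg_filedump -i -f $PGDATA/base/77573557/12345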

Regards,
Daniel




Re: [HACKERS] missing data/global

2004-08-24 Thread Daniel Kalchev
data/global is missing, and this is where postgres gets all its startup
data from (database OIDs, next OID, transaction ID etc.).

Let's see how easy it turns out to be to recover from this.

Daniel

Christopher Kings-Lynne said:
  If you're not missing your data dir, clog or xlog then what's the problem?
  
  Daniel Kalchev wrote:
   Hello,
   
   Is there ANY chance to recover data from a database system that suffered disk
   crash, and is now missing the data/global directory?
   
   Version is 7.2.4. Database files seem to be intact as well as pg_clog and 
   pg_xlog directories.
   
   Thanks in advance for any ideas.
   
   Daniel
   
   





Re: [HACKERS] missing data/global

2004-08-24 Thread Daniel Kalchev
Tom, I did the following:

(found out 7.2.3 does not have pg_database)

1. saved old data etc.

2. created a new database system, and then the database; the new database OID was 16556;

3. moved the new data/global into the old data directory.

4. thought that postmaster would actually use the database OID to locate the
directory and then load everything from there. The old database OID was
77573557, so I just linked it to 16556 in the data/base directory. (this might
be the first possible error)
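
That link step, as a sketch (assuming the new OID name should resolve to the
old files):

cd $PGDATA/base
ln -s 77573557 16556    # the directory named after the new OID points at the old data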

Now I can connect to the 'old' database, but get the error:

FATAL 1:  Index pg_operator_oid_index is not a btree

(if I run postmaster with -P I get no errors, but no tables either).

By the way, I had to copy over the 'new' files from pg_clog and pg_xlog (this
is the second possible error) to get the postmaster running. Perhaps it would
be better to use pg_resetxlog or similar?
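
A hedged sketch of that alternative, assuming the 7.2-era contrib/pg_resetxlog
(option spellings may differ by version - check its README before trusting this):

pg_resetxlog -n $PGDATA    # dry run: print the pg_control values, change nothing
pg_resetxlog -f $PGDATA    # force a WAL reset so the postmaster can start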

Daniel

Tom Lane said:
  Daniel Kalchev [EMAIL PROTECTED] writes:
   Is there ANY chance to recover data from a database system that suffered disk
   crash, and is now missing the data/global directory?
   Version is 7.2.4. Database files seem to be intact as well as pg_clog and 
   pg_xlog directories.
  
  The hard part I think would be reconstructing pg_database, because you'd
  need to get the database OIDs right.  I can't think of any way to do
  that that doesn't involve poking at the file with a hex editor.
  
  Here's a sketch of how I'd proceed:
  
  1. Make a tar backup of what you have!  That way you can start over
  after you screw up ;-)
  
  2. I assume you know the names and properties of your databases, users,
  and groups if any; also the SYSID numbers for the users and groups.
  A recent pg_dumpall script would be a good place to get this info.
  
  3. You're also going to need to figure out the OIDs of your databases
  (the OIDs are the same as the names of their subdirectories under
  $PGDATA/base).  Possibly you can do this just from directory/file sizes.
  Note that template1 should be OID 1, and template0 will have the next
  lowest number (probably 16555, in 7.2).
  
  4. Initdb a scratch database in some other place (or move aside your
  existing files, if that seems safer).  In this scratch DB, create
  databases, users, and groups to match your old setup.  You should be
  able to duplicate everything except the database OIDs using standard
  SQL commands.
  
  5. Shut down scratch postmaster, then hex-edit pg_database to insert the
  correct OIDs.  Use pg_filedump or a similar tool to verify that you did
  this properly.
  
  6. Restart scratch postmaster, and VACUUM FREEZE pg_database, pg_shadow,
  and pg_group (from any database).  This will make the next step safe.
  
  7. Stop scratch postmaster, and then copy over its $PGDATA/global
  directory into the old DB.
  
  8. Cross your fingers and start postmaster ...
  
  This will probably *not* work if you had been doing anything to
  pg_database, pg_shadow, or pg_group between your last checkpoint and the
  crash, because the reconstructed tables are not going to be physically
  identical to what they were before, so any actions replayed from WAL
  against those tables will be wrong.  Hopefully you won't have that
  problem.  If you do, it might work to shut down the postmaster and again
  copy the scratch $PGDATA/global directory into the old DB, thereby
  overwriting what the WAL replay did.  This is getting into the realm of
  speculation though.
  
   regards, tom lane
  





[HACKERS] missing data/global

2004-08-23 Thread Daniel Kalchev
Hello,

Is there ANY chance to recover data from a database system that suffered disk
crash, and is now missing the data/global directory?

Version is 7.2.4. Database files seem to be intact as well as pg_clog and 
pg_xlog directories.

Thanks in advance for any ideas.

Daniel




Re: [HACKERS] missing data/global

2004-08-23 Thread Daniel Kalchev
Tom,

This is basically what I had in mind, but you described it better than I ever 
could :)

What I need from this database system is just one database, and probably not
all of its tables anyway (but some do seem valuable). This database happens to
be second in the pg_dumpall script. The other databases are rather big (and I
actually have a more recent backup of them and could eventually recreate the
data). The valuable database hasn't had significant structure changes since
the backup.

Looking at the files, I am confident which is the proper database OID - if
this cannot be properly fixed, is there a reasonable way to dump data from
the (heap) files?

Here is what I have:

su-2.02# du
1747        ./base/1
1693        ./base/16555
1           ./base/77573557/pgsql_tmp
127036      ./base/77573557
1           ./base/13255137/pgsql_tmp
1379190     ./base/13255137
11246       ./base/95521309
1781        ./base/96388007
1           ./base/133512058/pgsql_tmp
11933861    ./base/133512058
13456555    ./base
98209       ./pg_xlog
41315       ./pg_clog
13596100    .

My database should be the one with OID 77573557; template0 is apparently 16555.

Let's see how all this works.

Daniel

Tom Lane said:
  Daniel Kalchev [EMAIL PROTECTED] writes:
   Is there ANY chance to recover data from a database system that suffered disk
   crash, and is now missing the data/global directory?
   Version is 7.2.4. Database files seem to be intact as well as pg_clog and 
   pg_xlog directories.
  
  The hard part I think would be reconstructing pg_database, because you'd
  need to get the database OIDs right.  I can't think of any way to do
  that that doesn't involve poking at the file with a hex editor.
  
  Here's a sketch of how I'd proceed:
  
  1. Make a tar backup of what you have!  That way you can start over
  after you screw up ;-)
  
  2. I assume you know the names and properties of your databases, users,
  and groups if any; also the SYSID numbers for the users and groups.
  A recent pg_dumpall script would be a good place to get this info.
  
  3. You're also going to need to figure out the OIDs of your databases
  (the OIDs are the same as the names of their subdirectories under
  $PGDATA/base).  Possibly you can do this just from directory/file sizes.
  Note that template1 should be OID 1, and template0 will have the next
  lowest number (probably 16555, in 7.2).
  
  4. Initdb a scratch database in some other place (or move aside your
  existing files, if that seems safer).  In this scratch DB, create
  databases, users, and groups to match your old setup.  You should be
  able to duplicate everything except the database OIDs using standard
  SQL commands.
  
  5. Shut down scratch postmaster, then hex-edit pg_database to insert the
  correct OIDs.  Use pg_filedump or a similar tool to verify that you did
  this properly.
  
  6. Restart scratch postmaster, and VACUUM FREEZE pg_database, pg_shadow,
  and pg_group (from any database).  This will make the next step safe.
  
  7. Stop scratch postmaster, and then copy over its $PGDATA/global
  directory into the old DB.
  
  8. Cross your fingers and start postmaster ...
  
  This will probably *not* work if you had been doing anything to
  pg_database, pg_shadow, or pg_group between your last checkpoint and the
  crash, because the reconstructed tables are not going to be physically
  identical to what they were before, so any actions replayed from WAL
  against those tables will be wrong.  Hopefully you won't have that
  problem.  If you do, it might work to shut down the postmaster and again
  copy the scratch $PGDATA/global directory into the old DB, thereby
  overwriting what the WAL replay did.  This is getting into the realm of
  speculation though.
  
   regards, tom lane
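
A minimal shell sketch of steps 1-7 above, with the OIDs from my du listing
(every path and name here is illustrative, and the pg_database file name 1262
is an assumption carried over from later releases - verify with pg_filedump
before hex-editing anything):

# 1. safety copy of the damaged cluster
tar cf /backup/pgdata.tar $PGDATA

# 4. scratch cluster; recreate databases/users/groups from the old dumpall script
initdb -D /scratch/pgdata
pg_ctl -D /scratch/pgdata start
psql template1 -c "CREATE DATABASE valuable"     # placeholder database name
# ... CREATE USER / CREATE GROUP with the old SYSIDs ...
pg_ctl -D /scratch/pgdata stop

# 5. hex-edit /scratch/pgdata/global/1262 (pg_database) so the OID reads 77573557,
#    then verify the edit with pg_filedump

# 6. freeze the shared tables so they are safe to copy
pg_ctl -D /scratch/pgdata start
psql template1 -c "VACUUM FREEZE pg_database; VACUUM FREEZE pg_shadow; VACUUM FREEZE pg_group"
pg_ctl -D /scratch/pgdata stop

# 7. move the reconstructed global directory into the old cluster
cp -Rp /scratch/pgdata/global $PGDATA/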





[HACKERS] protocol compatibility between 7.2 and 7.4

2003-11-26 Thread Daniel Kalchev
I know this is an attempt to save myself reading the mailing list, but still 
the issue remains:

the psql from version 7.4 does not talk to a 7.2.4 database.

The CHANGELOG indicates that both the server and the libraries keep
compatibility with versions after 6.3 - still, there is no switch in psql to
force use of the pre-7.3 protocol, and the 7.2 server and the 7.4 psql
apparently do not negotiate one.
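
A workaround sketch, assuming both releases are installed under versioned
prefixes (the paths are illustrative):

# pick the client that matches the server until everything is upgraded
/usr/local/pgsql-7.2/bin/psql -h old-server mydb
/usr/local/pgsql-7.4/bin/psql -h new-server mydb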

Daniel




Re: [HACKERS] Questions about indexes?

2003-02-17 Thread Daniel Kalchev
Ryan Bradetich said:
  the table would look like:
  1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.
  1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.
  1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has expired password.
  2 | Mon Feb 17 00:34:24 MST 2003 | f101 | file /foo has improper owner.
  etc...
  
  So I do not need the anomaly to be part of the index, I only need it to 
  
  I agree with you, that I would not normally add the anomaly to the
  index, except for the unique row requirement.  Thinking about it now,
  maybe I should guarantee unique rows via a check constraint...
  
  Thanks for making me think about this in a different way!

(sorry this is a bit long)

Ryan,

I use somewhat similarly structured data (archived records of various events),
and when the database was set up (back when this baby was called postgres95),
I too used indexes on all possible fields.

My database consists of an 'operations' table, which holds data for the last x
days (for example), and several tables with archived records (per month or per
year - see below). The operations table gets frequent updates, which add new
data. Data is never modified, but lookups are frequent. The archived tables
are generated once and forever from the operations table (possibly merging
them in the future, but I haven't yet made up my mind on this) - then access
is read-only, although sufficiently frequent.

What I have found over the many years of operating this database on different
PostgreSQL versions and hardware is that indexes have considerable cost. :)
So does the need to not miss anything from the operations table (that is, to
collect data from many places and have it all in there).

I ended up with only a few indexes on the operations table, because the
processes that fill it up do minimal lookups to see if data is already in the
table, and insert it if not. Then, at regular intervals, the table is cleaned
up - that is, a process is run to remove the duplicates. This unfortunately
costs OIDs, but I found no other reasonable way to do the fast inserts.
Perhaps the best way is to create the table WITHOUT OIDS (but wouldn't this
still waste OIDs?), use COPY, and then clean up afterwards?
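
A hedged sketch of that duplicate-removal step - the classic OID-based recipe,
which is exactly what costs OID space (table and column names are illustrative):

-- keep the lowest-OID copy of each logical row
DELETE FROM operations
WHERE oid NOT IN (SELECT min(oid)
                  FROM operations
                  GROUP BY host_id, cat, anomaly);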

The archived tables are generated, then cleaned up. Then, as Tom suggested,
indexes are put on the archived tables, only for the fields that are used in
queries. Once such a table is created, duplicated data can no longer appear,
as nothing is inserted into it - therefore there is no need for UNIQUE index
enforcement.

If you need to have one large 'history' table, then perhaps you will just have 
to do (slow :) selects for each record before each insert, or just insert the 
data and then run the cleanup process.
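
A sketch of the select-before-insert variant (same illustrative names as above):

-- insert the row only if an identical one is not already present
INSERT INTO history (host_id, cat, anomaly)
SELECT 1, 'p101', 'user x has an invalid shell.'
WHERE NOT EXISTS (SELECT 1 FROM history
                  WHERE host_id = 1
                    AND cat     = 'p101'
                    AND anomaly = 'user x has an invalid shell.');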

Daniel





Re: [pgsql-advocacy] [HACKERS] Changing the default configuration

2003-02-14 Thread Daniel Kalchev
Jason Hihn said:
  Pardon my ignorance, but there's no way to auto-tune? Ship it with a thread
  that gathers statistics and periodically re-tunes the database parameters.
  Of course, be able to turn it off. People that actually take the time to run
  tune manually will turn it off as to not have the overhead or interruption.
  Those that don't care about pg_tune shouldn't care about having a thread
  around retuning. Those that will care will tune manually.

This is related to my proposition, but the trouble is, there is no such thing
as a 'well tuned database' that suits all queries. You can tune the database
to the hardware, for example (I still remember that old argument about random
access and fast disks).

It seems the system could 'self-tune' itself on minor choices. I believe it
does this today for a number of things already. More significant changes
require the DBA's consent and choice - but the DBA needs to be well informed
of the current usage statistics when making that choice.

Daniel





Re: [pgsql-advocacy] [HACKERS] Changing the default configuration

2003-02-14 Thread Daniel Kalchev
Josh Berkus said:
  How about we take this discussion to the Performance List, where it belongs?

I believe the design and addition of code that collects and outputs the usage patterns 
of the database (statistics) belongs here.

If we take the approach of providing information to tune PostgreSQL based on
real-world usage, I guess we need at least the following:

- Usage statistics on a per-database or even per-table level. I believe we
already collect some (see the query sketch after this list);
- Statistics analysis tool/routine/process to produce suggestions;
- Of course lots of real-world data to justify the suggestions;

- Can we provide more knobs for tunable parameters that can be applied on a
per-database or even per-table level? One first candidate might be the FSM;
- Can some of these parameters (when available) be made to auto-tune?
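
A hedged sketch of what is already collected, assuming the 7.2/7.3-era
statistics views (the stats collector has to be enabled in postgresql.conf):

-- per-table scan and tuple activity
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- per-database activity
SELECT datname, numbackends, xact_commit, xact_rollback
FROM pg_stat_database;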

Of course, this could move out of pgsql-hackers :)

Daniel





Re: [HACKERS] Offering tuned config files

2003-02-14 Thread Daniel Kalchev
Manfred Koizar said:
  effective_cache_size = 20000 (~ 160 MB) should be more adequate for a
  256 MB machine than the extremely conservative default of 1000.  I
  admit that the effect of this change is hard to benchmark.  A way too
  low (or too high) setting may lead the planner to wrong conclusions.

The default buffer cache on BSD systems is 10% of the total RAM, so on a 256 MB
machine this would be ~26 MB, or effective_cache_size = 3200 (26 MB divided by
the 8 kB page size).

One could always modify the kernel to support a much larger value, but I doubt
this is done in many cases, and the usefulness of a larger buffer cache is not
obvious in the presence of many fsync calls (which might be typical). I could
be wrong, of course :)

In any case, the default is indeed low, and can keep the planner from choosing
index scans on larger tables, where they are most useful.
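
The corresponding setting, as a sketch (the unit in this era is 8 kB pages):

# postgresql.conf - 256 MB machine, BSD's ~10%-of-RAM buffer cache
effective_cache_size = 3200    # ~26 MB / 8 kB per page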

Daniel





Re: [HACKERS] Brain dump: btree collapsing

2003-02-13 Thread Daniel Kalchev
Tom,

Sounds excellent. Index growth has been something that has always bothered me
(not the disk space usage, but the slow searches).

I believe it's best to have pages marked dead at the time the last key
contained in the page is deleted (you didn't discuss how efficient this is),
because this will somewhat improve the tree depth. The same functionality
should be available in VACUUM (just in case). Thus we could 'free' the index
pages with one VACUUM run, instead of two.

In the spirit of my ramblings about automatic statistics/suggestions by 
PostgreSQL for optimizations, could you also implement a NOTICE when the index 
becomes too 'thin'? I believe this will help avoid severe performance 
degradation if the process of removing the dead tuples becomes automatic.

It also occurs to me that, if such statistics are available, PostgreSQL might
run VACUUM automatically on specific tables/indexes - all this controlled by
a GUC variable.

Daniel





Re: [HACKERS] Brain dump: btree collapsing

2003-02-13 Thread Daniel Kalchev
Justin Clift said:
  snip
   In theory, if we find recyclable page(s) at the physical end of the index,
   we could truncate the file (ie, give the space back to the filesystem)
   instead of reporting these pages to FSM.  I am not sure if this is worth
   doing --- in most cases it's likely that little space can be released this
   way, and there may be some tricky locking issues.
  
  Sounds like this would be beneficial for environments with high 
  update/delete transaction volumes, perhaps on smaller amounts of 
  live/valid data.

But if dead pages are removed (returned to the FSM?) as soon as the last item
is removed from the page, the number of dead pages will remain small. Or
perhaps not?

That is, it's unlikely to collect a large number of dead/free pages at the end
of the physical storage, except if doing all this in a single VACUUM session.

Daniel





Re: [HACKERS] Brain dump: btree collapsing

2003-02-13 Thread Daniel Kalchev
Bruce Momjian said:
  
  This brings up one item it would be nice to address at the same time. 
  It would be nice if VACUUM FULL would be able to compress the actual
  index file and return unused space to the operating system.  REINDEX
  does this, but I was thinking of something a little lighter that could
  be done automatically as part of VACUUM FULL.  If we can do that, it
  would make consistent behavior for vacuum and heap/index files.

Since lazy VACUUM exists, I have always wondered why VACUUM FULL doesn't run
REINDEX on a table with a significant number of deleted tuples. VACUUM knows
those numbers - I always run REINDEX on larger tables that had a huge number
of index entries deleted during VACUUM...
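
A sketch of that manual routine (table name illustrative; note that REINDEX
takes an exclusive lock on the table):

VACUUM FULL VERBOSE bigtable;   -- reports how many index entries were removed
REINDEX TABLE bigtable;         -- rebuilds the indexes, returning the space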

Daniel





Re: [HACKERS] Changing the default configuration

2003-02-13 Thread Daniel Kalchev
Bruce Momjian said:
  
  I imagined they could run pgtune anytime after install to update those
  performance parameters.  It gives them a one-stop location to at least
  do minimal tuning, and as their load changes, they can run it again.

True, but to make a reasonably good choice, they will need to base the
parameter selection on some statistics data. Not many people do thorough
testing before tuning their system, and in many cases the tests one does
rarely resemble the real-world usage of their database(s).

I agree that pgtune would be a wonderful tool in this case, but they first
need to get some idea of what parameters should be given to it.

This process is further complicated by the fact that we can tune PostgreSQL
only on a per-installation basis, instead of on a per-database basis - many of
the parameters, for example the FSM and sort memory, are database related. We
usually split data into databases to put related data together, or data with
similar usage patterns, etc.

Daniel





Re: [HACKERS] Changing the default configuration (was Re:

2003-02-12 Thread Daniel Kalchev
scott.marlowe said:
  On 11 Feb 2003, Greg Copeland wrote:
   Besides, I'm not sure that it makes sense to let other product needs
   dictate the default configurations for this one.  It would be one thing
   if the vast majority of people only used PostgreSQL with Apache.  I know
   I'm using it in environments in which no way relate to the web.  I'm
   thinking I'm not alone.
[...]
  You don't have to be using apache to need more than 32 simo connections.
  Heck, how many postgresql databases do you figure are in production with
  that setting still in there?  My guess is not many.
  

I would second this. One of my larger PostgreSQL applications uses Apache,
although it's not a typical web server. Apache is restricted to a particular
number of processes and rarely uses too many backends (but one should consider
the possible N databases x M Apache processes when using persistent database
connections).

The main connection load on that system, however, comes from lots of scripts
that run asynchronously and access the same data (collect, analyze, archive
data). Even if database access is serialized as much as possible (at the cost
of performance or wasted wall-clock time), this load sometimes represents
hundreds of backends.

My opinion too is that increasing the default number of connections will
benefit the first-time usability experience more than it will provide any
performance boost.

The main trouble is that more connections require not only more semaphores,
but also more shared memory.

If we are toying with the 'performance' idea, we should definitely increase
the sort memory default as well :-) ... and this means LOTS of memory for many
processes.
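
A back-of-the-envelope sketch of that interaction, using 7.2/7.3-era parameter
names (the values are illustrative only):

# postgresql.conf
max_connections = 128     # each backend slot costs semaphores and shared memory
shared_buffers  = 4096    # in 8 kB pages -> 32 MB shared memory, sized once for all
sort_mem        = 4096    # in kB, per sort, per backend: 128 backends each running
                          # one sort could claim ~512 MB on top of the shared segment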

Is it possible to have some useful connections/memory usage statistics - run
this code on different installations and collect sufficient data to make a
better choice?

Daniel





Re: [HACKERS] PostgreSQL Windows port strategy

2003-02-12 Thread Daniel Kalchev
Vatamanescu Victor said:
  I don't really much care what's the OS our product is running on. I care much
  about our product's high availability, speed, scalability etc. In the last
  month I saw on this list a lot of opinions regarding the differences between
  various operating systems. I haven't seen opinions regarding the
  functionalities exposed by our direct competitors: other DBMSs. Do we want to
  transform PostgreSQL into the next generation's OS and I am not aware of it?

PostgreSQL is what it is, which certainly is not an OS.

If PostgreSQL were an 'Operating System', we wouldn't care much what
'Operating System' it runs on, right?

But most of the things related to performance AND functionality of PostgreSQL
depend very much on what OS you use to build and run it on (otherwise it could
well contain portions of the OS, much like Oracle does :)

While I agree that (any) Windows platform may be useful for a lot of things,
it's true that one should be wary of Windows for things like software
development (due to its unstable API), platform stability (I have yet to meet
someone who didn't have to reinstall Windows every month or so), and
performance (you can hardly explain to customers why their new personal
Windows workstation requires a >1 GHz processor, >256 MB RAM etc., while
showing them how a moderate PostgreSQL database server that could serve their
entire company can run on much older/slower/cheaper hardware).

There is nothing wrong in having a nice Windows-based GUI for accessing
databases, but turning a graphical workstation into a database server is
something that has never worked in computing history (the opposite has
happened from time to time).

Windows lost the game when Microsoft decided to abandon support for
non-Intel CPUs in Windows NT (VMS actually).

  I can guarantee you good software can be written on Windows too. I can
  guarantee you that moving PostgreSQL to Windows is the best move for
  PostgreSQL. I can guarantee you an MMC Snap-In for PostgreSQL and a Visual
  Studio .NET add-in for PostgreSQL can help PostgreSQL become a leader.

While enough was said on this list regarding the Windows port - nothing stops
anyone from porting an open-source database to whatever platform, including
Windows. The better the porting team's knowledge of the platform, the better
the result. But 'moving' PostgreSQL to Windows won't happen.

  But discussions like Windows is a B category platform and Windows will die
  tomorrow and Linux is the best won't lead us to a top position.

My personal favorite is BSD/OS, and for what it's worth, Windows cannot solve
the kinds of tasks that BSD/OS solves for me. Period.

Daniel

PS: I probably sound too anti-Microsoft biased, which is not true. I just
don't care about Microsoft - I use Windows from time to time, when someone
sends me a $@^#$@*#*$-Microsoft-Office-formatted document that will not open
anywhere else. Windows also makes a great launcher for my father's Heroes III
game. ;-)





Re: [HACKERS] PostgreSQL Windows port strategy

2003-02-12 Thread Daniel Kalchev
Vatamanescu Victor said:
  Well, I haven't seen much of that unstable API. If you saw something
  unstable, please provide me source code that proves the Windows API is
  unstable. Don't tell me about some expert's opinion: if you have a problem
  with Windows, show it to me. We are not users here.

Really? We ARE users of Windows here. Nobody has ever bothered to tell US how
this thing works, why it works, and what we, the users, can do when it doesn't
work.

I am talking about the Windows API here, with its numerous variants and
'Service Packs'. I have learned my lesson on Windows: use only default
configurations; anything else will bite you badly some day.

(This is not to say that you can't get the same effect on other platforms,
just that on most 'commercial' UNIX platforms some things are documented in
great detail - sometimes in too much detail.)

  I agree that Windows 95/98 was a sh..., but I have news for you: Windows is
  becoming stronger every day. In 1998-1999 we could talk about the dying
  Microsoft - they were null on the server platform and the growth of Linux
  seemed unlimited. Can you tell me that the situation is the same? Have you
  carefully tested Windows 2003?

You are wrong about Microsoft here. They were null on the server platform,
say, in 1985, when their wonderful product Microsoft BASIC was my favorite on
the Apple ][. Then IBM gave them the IBM PC to write an 'OS' for.

I don't claim to be an expert on Windows, although I have actually 'tested'
(both from the user's perspective and for usability as a 'platform') every
Windows release since 1.03.

  About reinstalling: at home I have a Pentium III 1 GHz workstation. OS:
  Windows XP. I have installed on it Visual Studio .NET, Visual Studio 6,
  Microsoft SQL Server, IBM DB2, Cygwin, IIS, MSMQ etc. etc. (you get the
  picture). I installed it about 9 months ago (when I bought the HDD) and I
  don't have any trouble with it; I didn't have to reinstall it after a
  month...

But rebooted it how many times?

I have never ever reinstalled a UNIX system. Some run for years (yes, that is
some N x 365 days) without being rebooted or powered down. They usually stop
when that same flaky PC hardware breaks, and then I have to build a new system
anyway.

  By the way, DB2 has no problem working on Windows. I haven't heard IBM
  calling Windows a B category platform; on the contrary...

I was trying to avoid this part of the discussion... mostly because I don't
believe PostgreSQL is yet able to compete with the 'big' database systems.
Here I mean BIG database systems. One of the reasons, in my opinion, is that
the guys who run their data on big iron just can't explain why they would
spend millions of dollars a year on OS licenses and still install an
open-source database there.

It seems you got my opinion wrong: I don't see anything wrong with running
PostgreSQL on Windows. In fact, I was considering such a port some time ago
(or alternatively a 'DOS' port :), because it would allow greater flexibility
in certain cases. However, moving the main development effort to Windows, just
to satisfy the Windows lovers' egos, would be catastrophic and simply will not
happen to PostgreSQL. One of the reasons it will not happen is that PostgreSQL
has already grown, has become quality software, and many people already know
how it runs on the SAME Intel hardware on which Windows runs.

The difference with DB2, which also runs on IBM's other platforms, is
significant for the same reason. When it fails on your data set on the Windows
platform, IBM can always tell you: but... this is because of the cheap and
flaky PC hardware - here is what this wonderful software is designed to run
on...







Re: [HACKERS] Changing the default configuration

2003-02-12 Thread Daniel Kalchev
Bruce Momjian said:
[...]
  For example, we can ask them how many rows and tables they will be
  changing, on average, between VACUUM runs.  That will allow us set the
  FSM params.  We can ask them about using 25% of their RAM for shared
  buffers.  If they have other major apps running on the server or have
  small tables, we can make no changes.  We can basically ask them
  questions and use that info to set values.

Bruce, this is a very good idea, and such a tool would simplify setup for the
me-too type of DBA - we should definitely try to attract them.

However, how could one possibly answer the above questions if they are setting
up their database for the first time?

What is more, these settings are per-installation, not per-database - which
means that if you have several small but active databases and one large
database, the requirements will be very different.

Nobody likes answering such questions when installing new software. You might
enjoy it the first few times, but then you learn the 'answers' and don't even
think about what the question is. (we all know the answer :)

Perhaps indeed a better idea is to have PostgreSQL itself collect usage
statistics, and from time to time print 'suggestions' to the log file (best in
my opinion), or have them available via some query. These suggestions should
reflect the actual usage and, of course, require minimal intervention in the
database system, such as a restart etc.


Daniel






Re: [HACKERS] Terrible performance on wide selects

2003-01-23 Thread Daniel Kalchev
Hannu Krosing said:
  Tom Lane kirjutas N, 23.01.2003 kell 02:04:
   We already do cache column offsets when they are fixed.  The code that's
   the problem executes when there's a variable-width column in the table
   --- which means that all columns to its right are not at fixed offsets,
   and have to be scanned for separately in each tuple, AFAICS.
  
  Not only varlen columns, but also NULL columns forbid knowing the
  offsets beforehand.

Does this mean that constructing tables where fixed-length fields come
'before' variable-length fields and 'possibly null' fields might increase
performance?
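
A sketch of that layout idea (purely illustrative, and any benefit would come
from the offset caching described above - I have not measured it):

CREATE TABLE event_log (
    id         integer   NOT NULL,  -- fixed width, never NULL: offsets stay known
    logged_at  timestamp NOT NULL,  -- still fixed width
    severity   integer   NOT NULL,
    message    text,                -- varlena and NULL-able columns go last
    extra      text
);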

Daniel





Re: [HACKERS] v7.3.1 psql against a v7.2.x database ...

2003-01-22 Thread Daniel Kalchev
[EMAIL PROTECTED] said:
  I'd support making psql 7.3 and forward be aware of the backend they
  are connecting to, and support them being able to work against all 7.3+
  servers, but I still fail to see the pressing need for a backward-compatible
  version when the correct one is always shipped with the server.

This hits the nail on the head. I was just weighing the pros and cons of
upgrading a large production system from 7.2.3 to 7.3.x.

The trouble is, there is a need to access both types of databases, new and
old - often from the same psql executable.

So psql should definitely be backward compatible!

Daniel





Re: [HACKERS] Options for growth

2003-01-20 Thread Daniel Kalchev
D'Arcy J.M. Cain said:
  On Thursday 16 January 2003 11:59, Adrian 'Dagurashibanipal' von Bidder wrote:
   On Thu, 2003-01-16 at 17:42, D'Arcy J.M. Cain wrote:
    We are also looking at hardware solutions, multi-CPU PCs with tons (24GB)
    of memory.  I know that memory will improve access if it prevents
    swapping but how well does PostgreSQL utilize multiple CPUs?
  
   At most one CPU is used for any single postgres backend (that means for
   any single database connection). So, if your load problem is single
   queries being too slow, thee's nothing you can do with adding more CPUs.
   If your problem is many connections maxing out the db, PostgreSQL can
   take full advantage of multiple CPUs.
  
  I most definitely have multiple queries running at once.  My main issue is 
  whether PostgreSQL scales up properly or does it get bogged down with too 
  many locked queries.

That would depend on the OS. Not many PC-based Unixes support over 4 GB of
memory; some don't even go that far.

If memory is an issue, have you considered going to 64bit CPU?

Memory is indeed an issue for a complex database setup, especially if you want 
to give the backends enough shared and sort memory.

As already said, PostgreSQL will utilize multiple CPUs - as effectively as 
your OS can do this of course. PostgreSQL is not an OS by itself and does not 
really control these resources.

I have also found it very helpful to split database from application servers
(I wish I did it as often as I recommend it :) - thus you can optimize the
part that needs the most resources. In many cases the requirements are quite
different. With today's gigabit LANs, bandwidth between machines shouldn't be
an issue.

By the way, I too wonder which supported OS platform would support over 4 GB
of memory on a PC...

Daniel





Re: [HACKERS] default to WITHOUT OIDS?

2003-01-13 Thread Daniel Kalchev
Neil Conway said:
  On Fri, 2003-01-10 at 21:27, Christopher Kings-Lynne wrote:
   So what actually is the point of OIDs then?
  
  My understanding is that they're used to uniquely identify entries in
  system catalogs. If there's a good reason to make use of OIDs on user
  tables, I can't see it...

What happens if you have an existing database and want to load new tables that
rely on their OIDs (the OIDs of the rows, actually) to refer to data in other
tables (the 'old' way)?

Normally, one would dump the old tables 'with oids' and copy them to the new
database 'with oids'. Chances are there will then be duplicate OIDs in the
database - across the existing and the new tables.
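
A sketch of that dump/reload path, using pg_dump's -o (--oids) switch
(database names are illustrative):

pg_dump -o olddb > olddb.dump    # include OIDs in the COPY data
psql newdb < olddb.dump          # rows come back with their original OIDs -
                                 # which is exactly where duplicates can appear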

Daniel





Re: [HACKERS] default to WITHOUT OIDS?

2003-01-13 Thread Daniel Kalchev
If this ever happens, the same should be considered for tables created via the
SELECT INTO statement. These are in many cases 'temporary' in nature and do
not need OIDs (while consuming a lot of the OID counter).

Daniel





Re: [HACKERS] Cast your vote ...

2003-01-02 Thread Daniel Kalchev
Registration is easy and pretty much anonymous... worth it to promote our
beloved database. :)

Happy New Year,

Daniel

Marc G. Fournier said:
  
  Just got this in my mailbox:
  
  2002 LinuxQuestions.org Members Choice Awards:
  
  http://www.linuxquestions.org/questions/showthread.php?s=78a8c06fbc1dcecd52597decd6c56ad8&threadid=39870
  
  And we are way behind MySQL (like, d'ah!):
  
  http://www.linuxquestions.org/questions/poll.php?s=78a8c06fbc1dcecd52597decd6c56ad8&action=showresults&pollid=168
  
  The problem, of course, is that you have to be a registered member to
  vote, so it's not an 'open poll' ...
  
  






Re: [HACKERS] [ADMIN] dumpProcLangs(): handler procedure for language

2002-12-09 Thread Daniel Kalchev
I have had similar troubles related to OID overflow. I had to modify pg_dump
to properly cast queries that contain OIDs. This is against the 7.1.3 source.
The patch was hacked together quickly, in order to get a corrupted database
reloaded, while I was traveling in another country... so it is far from
perfect, but it saved my database(s). It also fixes other OID-related troubles
in pg_dump.

See attached file.

Daniel

Brian Fujito said:
  Thanks for your input--
  
  I've tried both ways:
  
  createlang/droplang from the command line as user postgres
  
  and:
  
  CREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE AS
  '/usr/lib/pgsql/plpgsql.so' LANGUAGE 'C';
  
  CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler
    LANCOMPILER 'PL/pgSQL';
  I'm using pg_dump (not pg_dumpall) on the specific database on which
  I created the language.
  
  I realize 7.0.3 is ancient (same with 7.1)... I just don't have the
  time to deal with an upgrade quite yet.  Soon enough :)  In the mean
  time, a stop-gap solution would definitely be appreciated.
  
  Thank you,
  Brian
  
  
  On Mon, 2002-12-09 at 14:28, Tom Lane wrote:
   Brian Fujito [EMAIL PROTECTED] writes:
I'm running postgres-7.0.3 on RH7.1 using RedHat RPMs.
I recently added plpgsql as a language to one of my databases, 
and now when I try to do a pg_dump, I get:
   
dumpProcLangs(): handler procedure for language plpgsql not found
   
If I drop the language, pg_dump works fine, but if I add it back (and
even if I restart postgres), I get the same error.
   
   What exactly are you doing to drop and re-add the language?  I should
   think CREATE LANGUAGE would fail if the handler proc isn't there.
   
   (Also, are you doing pg_dump or pg_dumpall?  If the latter, maybe the
   failure is occurring in a different database than the one you are
   changing.)
   
   But having said that, 7.0.3 is ancient history ... you really are
   overdue for an upgrade.  With my Red Hat fedora on, I'd say the same
   about your choice of OS version too.
   
  regards, tom lane
  
  
  


*** pg_dump.c.orig	Mon Apr 15 09:45:58 2002
--- pg_dump.c	Tue Jun 25 00:23:53 2002
***************
*** 2006,2012 ****
  	finfo[i].prorettype = strdup(PQgetvalue(res, i, i_prorettype));
  	finfo[i].retset = (strcmp(PQgetvalue(res, i, i_proretset), "t") == 0);
  	finfo[i].nargs = atoi(PQgetvalue(res, i, i_pronargs));
! 	finfo[i].lang = atoi(PQgetvalue(res, i, i_prolang));
  	finfo[i].usename = strdup(PQgetvalue(res, i, i_usename));
  	finfo[i].iscachable = (strcmp(PQgetvalue(res, i, i_iscachable), "t") == 0);
  	finfo[i].isstrict = (strcmp(PQgetvalue(res, i, i_isstrict), "t") == 0);
--- 2006,2012 ----
  	finfo[i].prorettype = strdup(PQgetvalue(res, i, i_prorettype));
  	finfo[i].retset = (strcmp(PQgetvalue(res, i, i_proretset), "t") == 0);
  	finfo[i].nargs = atoi(PQgetvalue(res, i, i_pronargs));
! 	finfo[i].lang = atooid(PQgetvalue(res, i, i_prolang));
  	finfo[i].usename = strdup(PQgetvalue(res, i, i_usename));
  	finfo[i].iscachable = (strcmp(PQgetvalue(res, i, i_iscachable), "t") == 0);
  	finfo[i].isstrict = (strcmp(PQgetvalue(res, i, i_isstrict), "t") == 0);
***************
*** 2289,2295 ****
  
  	resetPQExpBuffer(query);
  	appendPQExpBuffer(query,
  					  "SELECT Oid FROM pg_index i "
! 					  "WHERE i.indisprimary AND i.indrelid = %s ",
  					  tblinfo[i].oid);
  	res2 = PQexec(g_conn, query->data);
  	if (!res2 || PQresultStatus(res2) != PGRES_TUPLES_OK)
--- 2289,2295 ----
  
  	resetPQExpBuffer(query);
  	appendPQExpBuffer(query,
  					  "SELECT Oid FROM pg_index i "
! 					  "WHERE i.indisprimary AND i.indrelid = oid(%s) ",
  					  tblinfo[i].oid);
  	res2 = PQexec(g_conn, query->data);
  	if (!res2 || PQresultStatus(res2) != PGRES_TUPLES_OK)
***************
*** 2328,2340 ****
  	appendPQExpBuffer(query,
  					  "SELECT c.relname "
  					  "  FROM pg_index i, pg_class c "
! 					  "WHERE i.indrelid = %s"

Re: [HACKERS] DROP USER weirdness in 7.2.1

2002-10-16 Thread Daniel Kalchev

Peter Eisentraut said:
  Daniel Kalchev writes:
  
   One would normally expect, that when DROP USER someuser is issued, all
   associated data structures will be readjusted, especially ownership and
   access rights.
  
  Perhaps, but the documentation states otherwise.
  
[...]
   Any resolution for this?
  
  Recreate the user with the given ID and drop the objects manually.

I was able to modify ownership of all tables using ALTER TABLE. However, the 
associated pg_toast tables still remain with the wrong ownership.

In my case, I had to recreate the user, because it had to have rights in a
different database within the same postgres installation... Nevertheless, it
would be much more convenient if ALL rights associated with the particular
user were dropped when the user is dropped, and all orphaned objects had their
owner set to the DBA (postgres).
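
For the record, a sketch of the recreate-and-clean-up route Peter suggests,
using the orphaned SYSID 98 from my earlier report (names illustrative):

CREATE USER cleanup_user WITH SYSID 98;      -- reattach the orphaned ID
ALTER TABLE table_one OWNER TO postgres;     -- fix the ownership
REVOKE ALL ON table_one FROM cleanup_user;   -- remove the leftover ACL entry
DROP USER cleanup_user;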

It is not too difficult to imagine that, in a real-world database
installation, users need to be created and dropped.

Daniel





[HACKERS] DROP USER weirdness in 7.2.1

2002-10-15 Thread Daniel Kalchev

I have encountered unexpected behavior of DROP USER in 7.2.1.

One would normally expect, that when DROP USER someuser is issued, all 
associated data structures will be readjusted, especially ownership and access 
rights.

This however does not happen.

After dropping a user that had ownership of tables, pg_dump complains:

[...]
pg_dump: WARNING: owner of data type table_one appears to be invalid
pg_dump: WARNING: owner of table some_seq appears to be invalid
[...]

The access rights to those tables remain

database=# \z table_one
 Access privileges for database customer
    Table    |       Access privileges
-------------+------------------------------
 table_one   | {=,98=arwdRxt,maria=arwdRxt}
(1 row)

There is no way to remove rights of this 'user' 98 using REVOKE etc.

Perhaps a full dump/reload would remove the rights, because that user will not
be found, but the restore may fail due to the error conditions.

Any resolution for this?

Regards,
Daniel





[HACKERS] oids rollover?

2002-06-24 Thread Daniel Kalchev

I have a problem with a 7.1.3 database that has probably overflowed
the OID counter. The startup halts with these messages:

DEBUG:  database system was interrupted at 2002-06-24 21:19:43 EEST
DEBUG:  CheckPoint record at (156, 1692817164)
DEBUG:  Redo record at (156, 1692775580); Undo record at (0, 0); Shutdown FALSE
DEBUG:  NextTransactionId: 859255800; NextOid: 7098
FATAL 2:  Invalid NextTransactionId/NextOid
postmaster: Startup proc 4752 exited with status 512 - abort


Can something be done to recover the database?

Regards,
Daniel








[HACKERS] tuples gone?

2002-05-23 Thread Daniel Kalchev

Hi,

I got a corrupted table... unfortunately with pretty important data :(

VACUUM tells me:

NOTICE:  Rel relx: TID 2344/5704: OID IS INVALID. TUPGONE 1.
NOTICE:  Rel relx: TID 2344/5736: OID IS INVALID. TUPGONE 1.
NOTICE:  Rel relx: TID 2344/5768: OID IS INVALID. TUPGONE 1.

(this, many times, then)
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.

I can read part (the beginning?) of the relation with SELECT or COPY, but
anything that touches the damaged area dies badly :(

Is there any way to recover this relation? Or at least as much data as 
possible?
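
A hedged sketch of salvaging the readable prefix (illustrative only: LIMIT
does not guarantee physical scan order, though a plain sequential scan on this
vintage tends to follow it):

-- probe for the largest readable row count by trial...
SELECT count(*) FROM (SELECT * FROM relx LIMIT 500000) AS t;
-- ...then save that prefix into a new table
CREATE TABLE relx_salvage AS SELECT * FROM relx LIMIT 450000;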

Oh, and this is 7.1.3, and I am probably running with too-large OIDs :)

DEBUG:  NextTransactionId: 708172974; NextOid: 3480073772

Daniel





[HACKERS] more on large oids

2002-04-15 Thread Daniel Kalchev

I have hit another very annoying problem with the OIDs being larger than the
maximum int4. When tables are created under such circumstances, pg_dump cannot
dump the database any more. The error is:

getTables(): SELECT (for PRIMARY KEY) failed on table config_2002_03_02.  
Explanation from backend: ERROR:  dtoi4: integer out of range

Any idea how to fix this? This is on 7.1.3. Will the 7.2 pg_dump handle this 
database?

Daniel







[HACKERS] maxint reached?

2002-04-02 Thread Daniel Kalchev

Has anyone seen this:

ERROR:  dtoi4: integer out of range


on 7.1.3

What worries me is that at startup time the log shows:

DEBUG:  database system was shut down at 2002-04-02 23:16:52 EEST
DEBUG:  CheckPoint record at (82, 1928435208)
DEBUG:  Redo record at (82, 1928435208); Undo record at (0, 0); Shutdown TRUE
DEBUG:  NextTransactionId: 517528628; NextOid: 2148849196
DEBUG:  database system is in production state

Note the NextOid, while /usr/include/machine/limits.h defines INT_MAX as
2147483647. Are OIDs really signed ints?

Daniel

PS: This database indeed has an increasing OID counter in that range. A grep
from the log shows:

DEBUG:  NextTransactionId: 386003914; NextOid: 1551075952
DEBUG:  NextTransactionId: 397667914; NextOid: 1643984428
DEBUG:  NextTransactionId: 53748; NextOid: 1864857132
DEBUG:  NextTransactionId: 450233305; NextOid: 1888540204
DEBUG:  NextTransactionId: 454987662; NextOid: 1917687340
DEBUG:  NextTransactionId: 501775621; NextOid: 2078209580
DEBUG:  NextTransactionId: 517524499; NextOid: 2148849196
DEBUG:  NextTransactionId: 517528628; NextOid: 2148849196

this is from one month ago.





Re: [HACKERS] maxint reached?

2002-04-02 Thread Daniel Kalchev

A follow-up to my previous post.

It turned out to be a query containing oid = somenumber, called from a Perl
script. Is it possible that the default type conversion functions do not work
as expected?

Changing this to oid = oid(somenumber) made it work as expected.

Daniel





Re: [HACKERS] maxint reached?

2002-04-02 Thread Daniel Kalchev

Tom Lane said:
  This is one of a whole raft of cases involving undesirable assignment
  of types to numeric constants; see past complaints about int4 being used
  where int2 or int8 was wanted, numeric vs float8 constants, etc etc.
  We're still looking for a promotion rule that does what you want every
  time...

So in essence this means that my best bet is to again dump/reload the
database... Even pgaccess has hit this problem, as it uses oid=something in
its queries.

Daniel





[HACKERS] pg_temp.XX.0

2002-04-02 Thread Daniel Kalchev

I found out that there are some probably-temporary relations in one of my
databases, with names (shown in VACUUM VERBOSE output) like pg_temp.12720.0.

Are these the result of CREATE TEMP TABLE or similar, and if so, can such
relations be safely dropped? Perhaps it would be a good idea to add some
VACUUM functionality to do this.

Daniel





Re: [HACKERS] maxint reached?

2002-04-02 Thread Daniel Kalchev

Tom Lane said:
  Daniel Kalchev [EMAIL PROTECTED] writes:
   So in essence this means that my best bet is to again dump/reload the 
   database...
  
  Either that or fix your queries to cast the literals explicitly.

There is more to it:

customer=# select max(oid) from croute;
 max 
-
 -2144025472
(1 row)

How to handle this?

Daniel





Re: OID wraparound (was Re: [HACKERS] pg_depend)

2001-07-19 Thread Daniel Kalchev

Bruce Momjian said:
[...]
   No, we won't, because OID wrap is an issue already for any long-uptime
   installation.  (64-bit XIDs are not a real practical answer either,
   btw.)
  
  Have we had a wraparound yet?

Just for the record, I had an OID overflow on a production database (most
middleware crashed mysteriously, but no severe data loss) about a month ago.
This was on 7.0.2, which probably had some bug ... preventing a real wrap from
happening. No new allocations (INSERTs that used autoincrementing sequences)
were possible in most tables.

Anyway, I had to dump/restore the database - several hours of downtime. The
database is not very big in size (around 10 GB in the data directory), but it
contains many objects (logs), and many objects are inserted into/deleted from
the database - in my opinion at a not very high rate. Many tables are also
created/dropped during processing.

What is worrying is that this database lived about half a year only...

In my opinion, making OIDs optional would help things very much. In my case, I
don't need OIDs for log databases. Perhaps it would additionally help if OIDs
increased separately for each database - not as a single counter for the
entire PostgreSQL installation.

Regards,
Daniel

