Sort of related is how the log file should be written and synced.  Some
options are:

1) use synchronous log writes, but don't write the log at commit - wait
   for some other event, probably buffer full or timed.  No Derby code
   exists for a timed approach currently (a rough sketch follows this list).
2) use asynchronous log writes for #1, but #2 still needs a sync in
   order to know what part of the log has made it to disk.
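
Just to make the idea concrete, here is a minimal sketch of a deferred
log flush, assuming a single in-memory buffer that is forced on
buffer-full (or from a timer thread); the class and method names are
illustrative only, not Derby's actual log code:

    import java.io.ByteArrayOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Illustrative only: buffer log records in memory and force them to
    // disk when the buffer fills (or a timer fires) instead of at every
    // commit.
    class DeferredLogWriter {
        private final FileOutputStream logFile;
        private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        private final int flushThreshold;

        DeferredLogWriter(FileOutputStream logFile, int flushThreshold) {
            this.logFile = logFile;
            this.flushThreshold = flushThreshold;
        }

        synchronized void append(byte[] record) throws IOException {
            buffer.write(record);
            if (buffer.size() >= flushThreshold) {
                flush();                    // buffer-full flush
            }
        }

        // Also called from a timer thread for the "timed" variant.
        synchronized void flush() throws IOException {
            buffer.writeTo(logFile);
            buffer.reset();
            logFile.getFD().sync();         // the only place the log is forced
        }
    }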

As you point out, if we force syncs for #2 then hot caches which cause
page writes will obviously be slower, since a user thread can end up
waiting synchronously for a log write in order to get a page.  The good
news is that once one log sync has happened, all pages changed prior
to that log sync no longer need to wait before being written out.
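
A rough sketch of that rule, assuming we track the highest log address
known to be on disk (again illustrative, not Derby's cache code):

    // A dirty page may be written only after the log is synced past the
    // page's last log record, and one sync covers every page changed
    // before that point.
    class WriteAheadGate {
        private long syncedUpTo = 0;   // highest log address known to be on disk

        synchronized void beforePageWrite(long pageLastLogAddress, LogSyncer log)
                throws java.io.IOException {
            if (pageLastLogAddress > syncedUpTo) {
                syncedUpTo = log.syncTo(pageLastLogAddress);  // one wait here...
            }
            // ...and every page whose last change is <= syncedUpTo can now
            // be written without waiting again.
        }

        interface LogSyncer {
            // Forces the log at least through 'address'; returns how far it got.
            long syncTo(long address) throws java.io.IOException;
        }
    }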

I now think that the first implementation of relaxed durability should
guarantee a consistent database, but need not guarantee committed
transactions.  I think Suresh has convinced me that this is not too
hard to implement, and may provide a significant performance boost to
applications willing to live with committed transactions disappearing.


Every transaction will either be fully committed or
fully aborted.  Before committing a relaxed durability mode which could
allow for an inconsistent database, I would like to see some
application requests for it.
For anyone who really wants such an implementation I can provide
pretty simple instructions to build their own version (I think it
can be had by commenting out 3-6 lines of code).

Suresh Thalamati wrote:

The database should boot fine with option #1 (no sync on commit) and #3 (no sync on page allocation when the file is grown).
As you mentioned below, the page allocation and out-of-disk-space scenarios have to be handled correctly.


Another concern I had was, with just options #1 and #3, how performance is going to be in relaxed durability mode when the
page cache becomes really hot?  I guess it would be ok; the page cleaner will sync the log using the background thread
when the cache becomes full.


Thanks
-suresh


Mike Matrigali wrote:

I agree that checkpoints should happen, I was suggesting just not syncing
them, but I am not even that strong about that.  Checkpoints happen
in the background so they do not affect performance as much as inline
syncs like page allocation and log commit.  One can already control how
often checkpoints happen with existing properties.  Note that currently
this checkpoint I/O first asynchronously writes all the data for each of
the files in the cache and then at the very end calls sync before
completing the operation (roughly the pattern sketched below).
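
For illustration, the checkpoint pattern described above might look
something like this (not Derby's checkpoint code; the types are just
stand-ins):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.List;
    import java.util.Map;

    // Write every dirty page without forcing it, then sync each file once
    // at the very end of the checkpoint.
    class CheckpointWriter {
        static class DirtyPage {
            final long offset;
            final byte[] data;
            DirtyPage(long offset, byte[] data) { this.offset = offset; this.data = data; }
        }

        void checkpoint(Map<RandomAccessFile, List<DirtyPage>> dirtyByFile)
                throws IOException {
            for (Map.Entry<RandomAccessFile, List<DirtyPage>> e : dirtyByFile.entrySet()) {
                RandomAccessFile file = e.getKey();
                for (DirtyPage page : e.getValue()) {
                    file.seek(page.offset);
                    file.write(page.data);   // no sync yet; the OS may cache these writes
                }
            }
            for (RandomAccessFile file : dirtyByFile.keySet()) {
                file.getFD().sync();         // one force per file, at the end
            }
        }
    }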

Do you think that if an option is provided to stop #1 and #3, the
system will still guarantee a consistent, bootable database - just that
some transactions thought previously committed may not be there after
failure recovery?  If so, that seems like a good first level of relaxed
durability.
I have a feeling there is a very small allocation hole that
could be fixed - one would need to simulate a crash with an allocation
and an insert into a table where redo finds the pages unallocated and
runs out of disk space attempting to do the allocation (easy to simulate
in a debugger - hard to write a reproducible test case).  The system
already handles encountering file sizes that don't match the bit map
entries fairly flexibly and dynamically, so it may already handle this,
or it may not be hard to increase the dynamic handling.

Also some code will have to be added as I don't think the system tracks
the difference between #1 and #2.


Suresh Thalamati wrote:


I believe providing a mode which can cause a non-bootable database in
order to gain performance is a bad idea.
Most users, like me, do not read manuals in detail and will find out
the hard way that the database is hosed :-)
Say, for example, I want to use Derby to store system monitoring
information with relaxed durability mode.
It would be ok to lose some data, but it would not be acceptable if
the database cannot be booted.

I think durability option #2 should not be available; the transaction
log should be synced when a data page is forced to disk to
avoid non-recoverable databases for sure.  These kinds of log syncs can be reduced by just making the page cache bigger,
and also there should be a checkpoint at least once in a while, maybe for every
100MB of log (just a random number),
or on a boot; otherwise the disk space used by the transaction log will
never be released.
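
For what it's worth, both knobs already exist as tuning properties; a
hedged sketch of setting them (property names are taken from the Derby
tuning documentation - verify the exact names and allowed ranges for
your release):

    // A bigger page cache means fewer forced page writes (and so fewer log
    // syncs), and the checkpoint interval bounds how much log accumulates
    // between checkpoints.
    public class DurabilityTuning {
        public static void main(String[] args) {
            System.setProperty("derby.storage.pageCacheSize", "4000");
            // Roughly "checkpoint every 100MB of log" from the example above.
            System.setProperty("derby.storage.checkpointInterval",
                    String.valueOf(100 * 1024 * 1024));
            // ...boot the Derby engine after these properties are set.
        }
    }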


Thanks
-suresh

Mike Matrigali wrote:



I would also like to see the in memory implementation contributed, but
I think that is a different discussion.

From responses to Dan's original post on building a system with the
sync options disabled it seemed like there was enough interest that
those options should be made available.  I admit I am worried because
such a system can no longer guarantee recoverability.  It would be
interesting to know how people would use such a configuration.
Obviously for a database that need not last longer than a connection
this option would work, and in-memory would probably work better.
For databases that last longer than a connection, what risk of
database corruption is an acceptable trade-off for better performance?
This is a new idea for me, as none of the databases I have worked on
up to now have provided a less-than-durable option.

I believe the current durability options being discussed are:
1) sync of the log file at each commit
2) sync of the log file before data page is forced to disk
3) sync of page allocation when file is grown
4) sync of data writes during checkpoint

The simple change is just to allow each sync to be disabled, maybe
allowing control over all 4.
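
As a rough sketch, the "control over all 4" idea could be as simple as
gating each sync point behind its own flag; the property names below
are illustrative only, not actual Derby configuration:

    import java.io.FileOutputStream;
    import java.io.IOException;

    // Hypothetical sketch: each of the four sync points checks its own flag
    // before forcing the file to disk.
    class SyncPolicy {
        final boolean onCommit        = flag("durability.syncOnCommit");
        final boolean beforePageWrite = flag("durability.syncLogBeforePageWrite");
        final boolean onAllocation    = flag("durability.syncOnPageAllocation");
        final boolean onCheckpoint    = flag("durability.syncCheckpointWrites");

        private static boolean flag(String name) {
            return Boolean.parseBoolean(System.getProperty(name, "true"));
        }

        void maybeSync(boolean enabled, FileOutputStream file) throws IOException {
            if (enabled) {
                file.getFD().sync();   // force to disk only when this point is enabled
            }
        }
    }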

An interesting discussion is whether there are changes that could be
made to make it less likely to corrupt the whole database on a JVM or
machine crash when the above syncs have been disabled.


I would have to think carefully, and maybe some tests need to be
written, but I believe the only reason for #3 is to ensure that
during redo
crash recovery we don't run out of space.  We actually already can run
out of space growing the log during undo crash recovery, but in that
case we just halt the boot and print that we need more space - maybe
something similar could be done for page allocation.  Note that the
change still allocates the pages; it just does not sync before
inserting onto them.
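
Something along these lines, purely as an illustration (the container
interface and exception are hypothetical, not Derby internals): if redo
finds pages that were allocated but never synced and the re-allocation
runs out of disk space, surface it as a clean "halt the boot and ask
for more space" condition rather than a corrupt database.

    import java.io.IOException;

    class RedoAllocator {
        interface Container {
            void growTo(long pageNumber) throws IOException;   // hypothetical
        }

        void ensurePageExists(Container container, long pageNumber) {
            try {
                container.growTo(pageNumber);
            } catch (IOException outOfSpace) {
                throw new IllegalStateException(
                    "Recovery needs more disk space to re-allocate page "
                    + pageNumber + "; free space and reboot the database.",
                    outOfSpace);
            }
        }
    }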


#1 is probably the biggest performance win, in the case of very short
update transactions.  Unfortunately the JDBC standard which Derby
implements defaults to autocommit=true - so Derby's initial
performance results often look bad to new users when compared to other
databases which default to not syncing at commit time.
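
One standard mitigation that keeps full durability is simply turning
autocommit off and batching work into one transaction, so the commit
sync is paid once per batch rather than once per statement.  (Plain
JDBC; the connection URL and table are just examples.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchedInserts {
        public static void main(String[] args) throws SQLException {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:derby:sampledb;create=true")) {
                conn.setAutoCommit(false);    // one sync per commit, not per statement
                try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO t(id, val) VALUES (?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row-" + i);
                        ps.executeUpdate();
                    }
                }
                conn.commit();                // single log sync here
            }
        }
    }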

Part of this change should be to document the possible recovery
failures which can result from not syncing.

/mikem


Suresh Thalamati wrote:



Sunitha Kambhampati (JIRA) wrote:



Add Relaxed Durability option
-----------------------------

         Key: DERBY-218
         URL: http://issues.apache.org/jira/browse/DERBY-218
     Project: Derby
        Type: Improvement
  Components: Store
    Versions: 10.1.0.0
 Environment: all
    Reporter: Sunitha Kambhampati
     Fix For: 10.1.0.0


Dan Debrunner posted a fix to allow for relaxed durability changes
in
http://article.gmane.org/gmane.comp.apache.db.derby.user/681/match=relaxed+durability




1) Need to add this option in Derby, maybe as some property.

2) Also, from discussions on the list, Mike suggested that the
logging system be changed to somehow
record that the database has operated in this manner, so that if
the database goes corrupt we don't waste effort trying to figure out
what went wrong.  We probably need some way to mark the log records and
the log control file, and to write a message to the user error log file
(a rough sketch follows).
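
A rough sketch of that marking, with a hypothetical flag and message
(this is not Derby's actual log control file format):

    import java.io.PrintStream;

    class RelaxedDurabilityMarker {
        static final int FLAG_RAN_WITH_RELAXED_DURABILITY = 0x1;

        // Returns the updated control-file flags; the caller would persist
        // them back to the log control file.
        int markBoot(int logControlFlags, boolean relaxedDurability,
                     PrintStream errorLog) {
            if (relaxedDurability) {
                logControlFlags |= FLAG_RAN_WITH_RELAXED_DURABILITY;
                errorLog.println("WARNING: database is running with relaxed "
                        + "durability; recovery to the last committed state "
                        + "is not guaranteed.");
            }
            return logControlFlags;
        }
    }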






How about adding in-memory support for Derby instead of trying to confuse users with a relaxed durability option?  In this mode, will the database always be recoverable to some consistent state, with possible loss of some transaction data in case of a crash?

-suresh