This build exists primarily so that people running the purge-db4o snapshots can continue to connect to opennet after 1466 goes mandatory on 2014-11-16.
It is more or less purge-db4o-snapshot-28 with a few changes since then, and is not yet feature-complete. It's available on update.sh|cmd testing.

Keys:
CHK@WoCa9aTwHGgENAY3cwbZC68ati5OmVMjfMgJ4A0lycs,hVqnPVEZRBgxbnQvRKpa~pVclGTN34f1fXmaFGCAqoo,AAMC--8/freenet-testing-build-1467-pre1-snapshot.jar
CHK@~mGD61ZObEKNrYvwez~jkvNPFcC13zjtQ86SF9t1yFc,Hin-fiH31PXP65NaCGhIMcBuGqdAjqRFsDz5DQiwyFE,AAMC--8/freenet-testing-build-1467-pre1-snapshot.jar.sig

---

Bert Massop (1):
  SentTimeCache: fix new Coverity defects

Charles Teese (1):
  Fixed CryptByteBufferTest for iv length.

Matthew Toseland (1637):
  New FEC framework, with unit tests.
  Add LockableRandomAccessThing
  Javadocs.
  RAFWrapper implements LockableRAFThing
  Use a proper object to represent the locked status.
  Implement LockableRandomAccessThing here too.
  lock() can throw
  Length of a RandomAccessThing is constant.
  Add FIXME
  Add PooledRandomAccessFileWrapper (uses a limited number of open fd's to implement LockableRandomAccessThing).
  Inner class.
  Add free() to RandomAccessThing. May or may not be supported.
  Support setting file size on creation
  Add copyTo(Bucket, RAF)
  Fix for changes to RandomAccessThingy
  Handle IOException's thrown by RAFWrapper constructor after we made it throw.
  Fix ByteArrayRandomAccessThing: RandomAccessThing's must be threadsafe
  Clearer name: LockableRAF.lockOpen() not lock()
  Check for already closed in ByteArrayRandomAccessThing.
  This is not really an IOException...
  Javadocs.
  Add generic tests for RandomAccessThing, use them to test ByteArrayRandomAccessThing. More soon.
  Add constructor to create RAF with fixed length.
  Test: Factory method can throw.
  IllegalArgumentException if negative offset.
  Tests for RandomAccessFileWrapper.
  IllegalArgumentException here too.
  Fix test-breaking bugs in pooled...
  Allow changing the size of the pool and querying how many fd's are open.
  Javadocs.
  Fix memory leak: Remove from closables list when closing! Oops.
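The fd-pooling commits above describe the core idea behind PooledRandomAccessFileWrapper: only a bounded number of files may be open at once, lockOpen() pins a file open, and unlocked files sit in a LinkedHashSet so the least-recently-used one can be evicted. A minimal, hypothetical sketch of that idea (class and method names are mine, not Freenet's):

```java
import java.io.*;
import java.util.*;

/** Hypothetical sketch of fd pooling: at most maxOpen files are open at
 *  once; lockOpen() pins a file open, evicting an unlocked ("closable")
 *  entry in LRU order when the pool is full. */
class FdPool {
    private final int maxOpen;
    // Open-but-unlocked entries in LRU order (cheap to evict).
    private final LinkedHashSet<Entry> closables = new LinkedHashSet<>();
    private int open;

    FdPool(int maxOpen) { this.maxOpen = maxOpen; }

    synchronized int openCount() { return open; }

    class Entry {
        private final File file;
        private RandomAccessFile raf; // null while closed
        private boolean locked;

        Entry(File file) { this.file = file; }

        /** Pin this file open until unlock(). */
        void lockOpen() throws IOException {
            synchronized (FdPool.this) {
                // Wait while we can neither reuse an open fd nor evict one.
                while (raf == null && open >= maxOpen && closables.isEmpty()) {
                    try { FdPool.this.wait(); }
                    catch (InterruptedException e) { throw new IOException(e); }
                }
                if (raf == null) {
                    if (open >= maxOpen) {            // evict LRU closable
                        Entry victim = closables.iterator().next();
                        closables.remove(victim);
                        victim.raf.close();
                        victim.raf = null;
                        open--;
                    }
                    raf = new RandomAccessFile(file, "rw");
                    open++;
                } else {
                    closables.remove(this);           // already open: just pin
                }
                locked = true;
            }
        }

        void unlock() {
            synchronized (FdPool.this) {
                locked = false;
                closables.add(this);                  // eligible for eviction again
                FdPool.this.notifyAll();
            }
        }
    }
}
```

The LinkedHashSet makes eviction O(1) at the cost of a per-access update, the trade-off the changelog notes as debatable but acceptable next to disk I/O.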
  Basic tests for PooledRAF (no pooling yet, just check basic functionality).
  Test that pooling works - can write to two files with a fd limit of 1.
  Package-local status counts for tests/stats.
  Test locking.
  Lock on closables queue, not class. Thanks bertm for suggestion.
  MyKey -> CodecKey. Thanks Bombe.
  Style.
  Don't need to close others if already open - even if not actually locked.
  Add bert's test. Works fine. :)
  Explain.
  Clarify condition, thanks bertm
  Rewrite wait loop, clarify, may work better if several threads trying to lock the same RAF.
  Another neat catch from bert.
  Use a LinkedHashSet, as bert suggested, using his code. (Debatable point as it slows down the common case linearly but we're doing disk I/O here anyway...)
  Deprecate old FEC
  Add MemoryLimitedJobRunner. Will be used for in-RAM FEC decoding, which is significantly different (much faster/less disk I/O, more reliable) than what we are doing at the moment.
  API fixes
  Fix test for new ClientContext parameter for MemoryLimitedJobRunner
  Remove job from queue in the right place.
  Move MemoryLimitedJob to a separate file for easier importing.
  Add some tests.
  Add some async tests and FIXMEs.
  Make initial allocation visible.
  Check return value from release.
  Check size
  Test that runner goes back to 0.
  Check that it sticks to the limits.
  Clarify, indent.
  Trivial: Newline
  Refactor slightly and clarify javadocs.
  Delete (possibly obsolete) comment.
  ChecksumOutputStream. Simple enough that I'm not providing any tests.
  Add storedKeysLength()
  Add getCryptoAlgorithm()
  ClientMetadata serialization to/from Data*Stream. Not tested yet.
  Add copyTo(), used to persist bloom filters
  Move DataInputStream to a factory method.
  Storage layer for new post-db4o splitfile code, incomplete but should build, no support for e.g. cross-segment.
  Abstracted out a callback SplitFileFetcherCallback so can unit test separately.
  Separate out interface for saltKey().
  Get rid of lots of client layer dependencies in the storage code.
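The MemoryLimitedJobRunner commits above describe a runner where each job declares how much RAM it needs up front, and a shared budget blocks new jobs (such as in-RAM FEC decodes) until enough has been released. A minimal sketch of the idea, with hypothetical names and a Semaphore standing in for the real bookkeeping:

```java
import java.util.concurrent.Semaphore;

/** Hypothetical sketch of a memory-limited job runner: jobs declare their
 *  peak memory need; a shared budget blocks new jobs until enough of it
 *  has been released by finished jobs. */
class MemoryLimitedJobRunner {
    private final Semaphore budget; // one permit per byte of the memory limit

    MemoryLimitedJobRunner(int limitBytes) {
        budget = new Semaphore(limitBytes, true);
    }

    int available() { return budget.availablePermits(); }

    /** Run job on a fresh thread once `bytes` of budget are available. */
    void queue(final int bytes, final Runnable job) {
        Thread t = new Thread(() -> {
            budget.acquireUninterruptibly(bytes);
            try {
                job.run();   // NOTE: per the deadlock fix in the changelog,
            } finally {      // heavy callbacks must NOT run on this thread
                budget.release(bytes);
            }
        });
        t.start();
    }
}
```

The changelog's deadlock fix ("never run the heavy callbacks in the memory limited job thread") corresponds to keeping completion callbacks off the budget-holding thread, so a callback can never start new memory-limited work before the old job's budget is released.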
  Static method to make a FetchContext
  Don't clear the keys, need them to write the Metadata.
  Explain.
  Bugfix: Pass the keys in here.
  Fix more small bugs.
  Write the correct number of bytes for the last block.
  Initial tests for splitfile fetcher segment storage.
  More unit tests (here if we only give the check blocks)
  Fix bug found with random order of finding blocks.
  Test finding blocks in a random order.
  Fix sizes
  Fix test.
  Rename BaseRandomAccessThingTest -> RandomAccessThingTestBase (fixes unit tests in Eclipse)
  Add a factory interface for RandomAccessThing's
  Add a simple factory for in-memory
  Use the new RAF factory in SplitFileFetcherStorage, so don't need the dirty hack with FilenameGenerator.
  Fix incorrect block padding in test.
  Test multi-segment splitfiles.
  Add a RAF factory to ClientContext (*dangerous* at present as in-RAM-only)
  Start work on SplitFileFetcherNew, partly from old branch, builds but not complete yet.
  Lots of work on SplitFileFetcherNew etc, i.e. integrating the new splitfile code so it can actually work. Not finished yet.
  Factor out translateException into SendableGet
  Key selection token is always a SendableRequestItem
  Initial implementation of SplitFileFetcherGet (no cooldown support at the moment).
  Try it out. Wait for the bang.
  Fix infinite recursion
  Better error message.
  Minimum memory allocation needs to be slightly bigger.
  Terminate the getter.
  Logging
  Fix error on cancelling a fetch.
  Fix deadlock/major concurrency issue: Never run the heavy callbacks in the memory limited job thread! Can start new stuff before the old job is finished -> boom.
  Fix not reporting progress.
  Fix streamGenerator (we *need* to close the stream, or it will stick forever).
  Support maxRetries != -1, i.e. splitfiles with finite number of retries, e.g. important for fproxy, and give feedback on failed blocks.
  Support creating binary blobs in new splitfiles. Now the updater successfully creates the blob.
  Don't call unless need to.
  Make verifying that the decoded blocks match the expected keys optional (compile time, on in the source), as in the old code, and if it is enabled, do the FEC *encode* and check the blocks before completing the request (fixes updater blob being too short because closed too early).
  Copy before encryption clobbers the buffer!
  Factor out padding a byte[]
  Enable older splitfiles.
  Track retries if maxRetries != -1
  Logging
  Avoid reporting error if already closed.
  Don't try to add last block to blob in old splitfile
  Fix NPE, use the right length
  Fix retry count handling: MaxRetries=0 (default for FCP) => try each block once (and then terminate! wasn't working!)
  If a block is not full, it's an error, unless it's the last block of an old-format splitfile, in which case we ignore it (=> we don't use it at all), because it would break FEC decoding.
  Logging
  Doh! Really fix retry count handling!
  Dirty hack fix for infinite loop when ignore last block. Explain.
  Don't choose to fetch the last data block for really old splitfiles. We may still pick it up by accident however.
  Use KeysFetchingLocally to not schedule a fetch for a key that is currently being fetched.
  Ignore failure on blocks already found (race condition?)
  Fix stalling counting bug in 647ad6c60c76209fd77f5cad120aec9415607573 - requests for old keys would stall forever.
  Logging
  Fix non-static logMINOR. Oops.
  Do loop checking inside onGotKey; this is clearer and allows us to return a slightly more accurate boolean.
  Rename, explain a little.
  Remove changedSomething flag, unnecessary, all these odd exits we've clearly changed something.
  Add a callback for when a segment is restarted due to data corruption.
  Re-register on corruption, re-check the datastore.
  Add makeRAF() with initial contents to copy to factory.
  Factory for PooledRandomAccessFileWrapper's using a FilenameGenerator.
  Abstract proxy RAF with ability to switch between implementations.
  Make TempBucketFactory able to create LockableRandomAccessThing's (no encryption yet but otherwise works, if there is plenty of RAM we start off as byte array and then moved to disk later).
  Use TempBucketFactory for RAFs as well.
  Doh! When migrated, not using the memory any more.
  Doh! Check for negative index.
  Add tests for TempBucketFactory creating RAF's.
  Just use the usual delete(), like TempBucket's do.
  Don't use new splitfile code if temp buckets are encrypted.
  Tests for chooseKey().
  Test onNonFatalFailure calling failure here.
  Always track retries, but if parent.maxRetries=-1 then don't write them to disk.
  Beginnings of cooldown support.
  Implement and test cooldown.
  Remove unused method getCooldownWakeupByKey
  Fix length after removing "tried"
  Concurrency issues with callbacks.
  Check for already finished.
  Comments.
  Add a flag to FetchContext to use the new splitfile code (if possible). (Currently hardcoded default on)
  Turn off for updater for now.
  Fix progress bar updating on found blocks during store fetch stage but not after that, when it should be vice versa.
  Config for whether to use new splitfiles will be separate for transient and persistent (eventually). Explain slightly.
  Make using the new splitfile code configurable for transient requests.
  Hopefully fix "Unsetting bit but already unset"
  Always use the old splitfile code on <256MB of memory (because the new FEC and old FEC both use memory, so using both means losing 8MB).
  Delete cruft
  Checksums.
  Missing methods in KeyListenerNew
  Fix memory leak: Remove KeyListener on completion or failure.
  Use whichever checksum implementation storage is using. Explain.
  Checksums.
  Document synchronization
  Fix writeTo() (backing array may not be the same size as capacity!)
  Make ChecksumOutputStream (optionally) actually write the checksum.
  Check here too.
  Add checksum tool, will be used for using HMAC's rather than CRC's when encrypted, also tidies a bit. PLEASE REVIEW: Are there standard tools for this?
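Several commits above concern ChecksumOutputStream, which accumulates a checksum over everything written and (per the "actually write the checksum" fix) optionally appends it on close so readers can verify the payload. A minimal sketch of that pattern with CRC32, using hypothetical names (the real class may differ, and HMACs replace CRCs when encrypted):

```java
import java.io.*;
import java.util.zip.CRC32;

/** Hypothetical sketch of a checksumming output stream: accumulates a
 *  CRC32 over all bytes written and (optionally) appends the 4-byte
 *  checksum on close. */
class ChecksumOutputStream extends FilterOutputStream {
    private final CRC32 crc = new CRC32();
    private final boolean writeChecksumOnClose;

    ChecksumOutputStream(OutputStream out, boolean writeChecksumOnClose) {
        super(out);
        this.writeChecksumOnClose = writeChecksumOnClose;
    }

    @Override public void write(int b) throws IOException {
        crc.update(b);
        out.write(b);
    }

    @Override public void write(byte[] buf, int off, int len) throws IOException {
        crc.update(buf, off, len);
        out.write(buf, off, len);
    }

    long getValue() { return crc.getValue(); }

    @Override public void close() throws IOException {
        if (writeChecksumOnClose)
            new DataOutputStream(out).writeInt((int) crc.getValue()); // trailer
        super.close();
    }
}
```

A matching reader would recompute the CRC over the payload and compare it to the 4-byte trailer, zeroing out the data on mismatch as the later "zero out when the checksum fails" commit describes.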
  What about unixninja's work?
  Use ChecksumChecker.
  Check checksum when reading keys.
  Pass in keys here, make sure we don't read keys before writing them!
  Lock in readAllBlocks()
  Add writeChecksummedTo() and make raf private.
  Move all the checksum/storage stuff to the end of the file.
  Permissions.
  Add equals()
  Test readSegmentKeys().
  Add generateChecksum()
  Initial support for persistence in Storage.
  More tests.
  Restoring between fetches is now working, at least for a single segment.
  Can write immediately.
  Fix not persisting metadata.
  Count failedBlocks on startup too, and fail on startup if it was failed but hadn't been processed yet.
  >= not ==.
  failed is false here so we haven't handled it already.
  Can happen on startup maybe; safer this way anyway.
  More tests
  Don't call the callback here, would be confusing and probably lead to NPEs.
  writeMetadata does not return anything.
  Persistence flag on storage. Don't write anything except for blocks and keys if transient.
  Persist more details, not just thisURI
  Fix wrong length of key list
  Write sufficient detail to restart from scratch if the persistent request is from an FCPClient, i.e. from FCP or on the global queue. Add flags and checksum type.
  Add FIXME
  Imports.
  Missing file.
  Add new FetchException: Not enough disk space.
  Add configuration for minimum available disk space, both long term and short term.
  Fill the new RAFs explicitly. Write random bytes if crypto is enabled.
  Move disk limits before tempBucketFactory.
  Add a RAF factory proxy that checks disk space.
  Check disk space when creating temp RAFs
  Deduct the max RAM used from the on disk limit. We don't want it to fail during migration, and we don't want to have to deduct the in memory total as it changes from moment to moment, so this is the best solution.
  Better out of disk space handling in temp factory.
  trivial indent
  Serialize creation of RAFs so that the disk space checks work, and to avoid fragmentation.
  Add FIXME, this should be per-filesystem.
  Persistent RAF factory.
  Fix NPE.
  Remove getFreeSpace JNI hack, we require 1.6 now.
  getUsableSpace not getFreeSpace
  Add an alert for out of disk space.
  Move InsufficientSpaceException to a top level class and rename to InsufficientDiskSpaceException
  Convert to a FetchException on out of disk space.
  Fix typo in f527f5d05d1f1cae939cdbadbef6a33242ee321a
  Another common "don't know".
  Check size if known.
  Check for disk space when writing too.
  More accurate estimate of length of final file.
  Translate InsufficientDiskSpaceException -> FetchException.
  Less aggressive default space limits for now. These will need to be discussed.
  Move freenet/node/fcp to freenet/clients/fcp. Intentionally breaks db4o-based persistence! I will implement non-db4o persistence and then implement migration by putting back a few files in freenet/node/fcp/ just for migration.
  Add PersistentJobRunner and PersistentJob. We still need to have some control over when jobs affecting the persistent state can run, in particular we need to be able to stop them while serializing.
  Implement PersistentJobRunner (but not serialization yet).
  Change ClientContext.jobRunner to a PersistentJobRunner.
  We need to keep DatabaseDisabledException (now PersistenceDisabledException).
  Can't be persistent here.
  Don't need to go through persistence for this now.
  Use the PersistentJobRunner, not the DBJobRunner.
  More PersistentJob fixes.
  More PersistenceDisabledException fixes
  Missing file (trivial)
  Remove blob-file temp buckets.
  PersistenceDisabledException
  Fix build.
  Unpacking containers does not need to happen off-thread (as we can use multiple threads in PersistentJob's at once) and so does not need to have special treatment on restarting.
  Delete old splitfile fetcher code.
  Remove useNewSplitfile flag, reverts f404ee82403d3ab1772fc6abe6b52c2ad2c628b5
  Remove restart jobs for site inserts, run it directly.
  Remove *BucketChainBucket*, doesn't make sense without blob-file temp buckets.
  Remove restart jobs.
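The disk-space commits above describe a factory proxy that refuses to create temp files unless a configured minimum of free space would remain, using getUsableSpace (which respects quotas) rather than getFreeSpace, and serializing creations so concurrent checks can't both pass. A small sketch of that idea, with hypothetical names:

```java
import java.io.File;
import java.io.IOException;

/** Hypothetical sketch of a disk-space-checking factory proxy: refuse to
 *  create a file unless the partition would still have minDiskSpace bytes
 *  free afterwards. Uses getUsableSpace, not getFreeSpace, so user quotas
 *  are respected (the "getUsableSpace not getFreeSpace" commit). */
class DiskSpaceCheckingFactory {
    private final File dir;
    private final long minDiskSpace;

    DiskSpaceCheckingFactory(File dir, long minDiskSpace) {
        this.dir = dir;
        this.minDiskSpace = minDiskSpace;
    }

    /** Synchronized so concurrent creations can't both pass the check
     *  (the "serialize creation of RAFs" commit). */
    synchronized File makeRAF(long size) throws IOException {
        // NOTE: a real implementation should guard size + minDiskSpace
        // against overflow, and ideally check per-filesystem (see FIXME
        // in the changelog).
        if (dir.getUsableSpace() < size + minDiskSpace)
            throw new IOException("Insufficient disk space"); // cf. InsufficientDiskSpaceException
        File f = File.createTempFile("raf-", ".tmp", dir);
        // A real implementation would pre-allocate `size` bytes here,
        // writing random data if crypto is enabled.
        return f;
    }
}
```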
  Another BucketChainBucket use.
  Add a new object to track InsertCompressor's. Will be serialized with the database.
  Most of persistence implementation, but not fully integrated yet.
  Build fix
  Move PersistentTempBucketFactory init to ClientLayerPersister
  Fix not writing when we have a burst that lasts just less than the timeout and then a long pause. Increase timeout to 10 minutes.
  Implement shutdown for ClientLayerPersister (some minor internal changes/refactoring).
  Bandwidth stats putter
  Fix shutdown, doh
  FCP
  Fix killedDatabase()
  Minor fixes.
  Callback called on restarting.
  onRestart/onRestartedFreenet: Trivial cases.
  Restart USKs on startup via onRestarted, get rid of older initUSK etc.
  Register KeyListener's on restarting a single block fetcher.
  Implement resuming of SplitFileFetcherNew's!
  Get rid of initKeys
  Fix USKFetcherTag.
  Make parent RGA transient ... we will have RGAs transient ... On startup, reregister. The RGAs will be non-persistent.
  Clarify: onRestart -> onRestartedFreenet
  Clarify: onRestart -> onRestartedFreenet. Document some assumptions.
  onRestartedFreenet -> onResume. Thanks operhiem1.
  Use the job runner, not the executor.
  Remove RegisterMe. Not needed any more.
  Fix more DBJob -> PersistentJob.
  Single RGA tree for all requests, persistent or transient. KeyListener's and callbacks are still per-mode (transient|persistent). It is safe to chooseKey() on any thread, as explained previously.
  Remove makeBlocks() now that we don't need it
  Fix counting queued requests for test.
  Don't need to commit here; fix remaining compile error in CRS. The block is probably for a splitfile, so it will write it anyway to its own storage; we don't need to checkpoint the main storage in that case. If it's a single block fetch it should set the write flag anyway on completion - TODO will check that later.
  Persist flag for whether the splitfile fetch has checked the datastore, don't check it again on reregistering if we have.
  Single DatastoreChecker registration queue (but still call callbacks on the jobRunner if persistent).
  Fix compile issues in lateInitDatabase ... crypto, and passworded init, does NOT work at present.
  Move preCommit/postCommit hooks to ClientLayerPersister.
  Remove the database job executor.
  Doh
  Remove DBJob, DBJobRunner, Db4oBugs, DatabaseDisabledException.
  Remove old splitfile inserter code for now. Will be rewritten soon.
  Remove old FEC.
  Remove unused method.
  Remove persistent support in block encoder queue. Will need changes later when I reinstate splitfile insert support.
  Cleanup.
  Old cooldown queue hasn't been used for some time, obsoleted by CooldownTracker. Remove it.
  Remove requeueAfterCooldown()
  Enable persistent splitfile fetches.
  watchGlobal* is transient.
  NullObject not needed.
  Don't store the infrastructure classes (persistent temp bucket factory, FCPPersistentRoot etc). They will be populated in onResume() (we will read the ClientRequester's individually).
  Always construct the persistent RAF factory
  Pass in clientContext to clientLayerPersister.
  Serialize requests separately.
  Pass onResume back to client.
  Fill in the FCPPersistentRoot in ClientRequest.onResume().
  Missing argument
  Move RequestClient generation to the Client*Callback, do not pass it in directly. This allows us to reconstruct the RequestClient on deserialization, e.g. from the FCPClient, even though the FCPClient isn't persistent, and it also simplifies the API.
  Add realTime flag in ClientRequest
  Resume the RequestClient
  Remove unused constructors. (We won't need db4o for much longer)
  Set the ClientContext
  Fix serialization
  Remove now bogus assertion
  Remove persistent stuff from CooldownTracker.
  FCPClient is NOT serialized. Explain.
  Remove ObjectContainer from FCPClient
  Fix stats serialization.
  Fix regression in 1a5dfc333d049129f655de9afa827569df1525c7: Don't write past end of file.
  Remove logging
  Don't close() twice.
  Fix fork bomb in PersistentJobRunnerImpl. Ooops!
  Fix forgetting to close() explicitly and thus not writing anything :(
  Fix infinite loop.
  Logging
  Remove RequestClient.removeFrom()
  Remove more ObjectContainer stuff from FCP layer
  More ObjectContainer purging in fcp/
  Get rid of ObjectContainer from onMajorProgress
  Remove ObjectContainer from ClientGetCallback.
  Remove ObjectContainer from ClientPutCallback (and small related stuff)
  Remove ObjectContainer from clientdetail etc
  Remove ObjectContainer from events framework.
  Remove more ObjectContainer's.
  Remove ObjectContainer from ClientGetCallback and StreamGenerator
  Remove SimpleManifestPutter. We will use saces' new multi-container site insert code for everything now.
  Fix HLSC site insert.
  Fix FCP site inserts.
  Remove lots of ObjectContainer stuff and also checkForBrokenClient etc, which doesn't make sense without the database.
  Fix another overlapped API (getPriorityClass() defined in two APIs, need to change both at once).
  Remove ObjectContainer from USK callbacks.
  More USK ObjectContainer removal.
  ClientGetState and more USK stuff
  The easy (non-overlapping) cases from HasKeyListener.
  isCancelled(ObjectContainer)
  Remove removeFrom() and isStorageBroken()
  RandomGrabArrayItem interface
  SendableRequest
  SendableGet and minor trivial related stuff: ObjectContainer removal.
  SendableInsert and private helper methods.
  DatastoreChecker
  KeyListener
  WantsCooldownCallback
  BinaryBlobWriter and ClientGetter.
  ArchiveExtractCallback: Trivial remove ObjectContainer.
  ObjectContainer in loop detection
  Archive (container) code: Remove ObjectContainer, and remove some unused cruft.
  BaseSingleFileFetcher, SimpleSingleFileFetcher.
  Remove ObjectContainer from schedulers. (Mostly trivial). All the RGAs are in-RAM and transient but we still do e.g. tripPendingKey on the appropriate executor/jobRunner.
  SingleFileFetcher: Remove ObjectContainer.
  SplitFileFetcherNew: Remove ObjectContainer
  Remove PersistentSendableRequestsSet, always use transient, remove ObjectContainer cruft from it.
  RequestScheduler (trivial)
  Create CRSCore and CRSNP at the same time, make keyListeners and sched final, get rid of init method.
  Remove ObjectContainer from ClientRequestScheduler.
  Catch Throwable's in save()
  Don't forget onStarted() in successful path.
  Logging
  close() explicitly.
  Persist the global salt.
  Remove ObjectContainer from RandomGrabArrayItemExclusionList
  Remove ObjectContainer from RGA, SRGA.
  Remove RGAItem.persistent(): One queue for everything, persistent or not.
  Remove ObjectContainer
  Remove unused method getBetterNonPersistentRequest.
  Remove obsolete and now incorrect assertion
  Fix NPEs in serializing client details
  Write URLs as text, not as binary keys (can't write USKs as binary keys).
  Remove ObjectContainer from USKManager
  Remove ObjectContainer from RequestCompletionCallback (to fix NPE in QueueToadlet)
  Add Serializable to some classes.
  Make some Bucket's Serializable.
  Remove BucketChainBucket.
  Make NullBucket streams static.
  Remove ObjectContainer from Bucket and SplitfileBlock.
  Lots more Serializable.
  Javadocs.
  Make a second constructor for the common case. Makes it easier to see the cases where we actually keep the EventProducer. Explain a bit more.
  Serialize the SimpleEventProducer.
  Imports.
  Add serial version.
  Remove ObjectContainer from FCP messages, in particular removeFrom(). Also add serial version for one message.
  Remove from events too (precedes last commit)
  Remove unnecessary ObjectContainer references in listing requests.
  Remove more ObjectContainer's from PersistentPutDir
  Remove ObjectContainer from DirPutFile
  Remove remaining ObjectContainer references in FCP layer
  More Serializable. Will need to consider whether to implement binary formats and versioning for some of these...
  Don't forget the salt!
  Not serializable. Shouldn't be used long term. Explain.
  Add default constructors for most of the classes that get serialized. (Remainder have more complicated problems)
  Fix DelayedFreeBucket preCommit/postCommit hooks.
  Add onResume() API for Bucket.
  Add onResume() for easy cases (no-op and proxies).
  Fix DelayedFreeBucket.
  Missed a Bucket for onResume()
  Fix TempFileBucket: Pick up the generator on resume.
  Fix PersistentTempFileBucket. Keep the PersistentFileTracker, that's what we'll need for garbage collection.
  Serializable.
  Fix infinite recursion: DO NOT call back into getter.onResume()! Also, javadocs for onResume().
  Resume the returnBucket
  Add onResume() to RAFs
  Easy cases for onResume()
  Pooled on-disk RAFs have a flag for persistent, if true, we register on the persistentFileTracker on resuming, if false, persistence is not supported.
  Call resume on the RAF on new splitfile fetcher code.
  Add missing serialVersionUID's.
  Fix NPE
  Fix NPE.
  Logging
  Fix another NPE on failing to resume.
  Allow onResume() to throw.
  Don't forget to register
  Add toString()
  load after initializing ClientRequestScheduler's.
  Use the right factory.
  Logging.
  Recreate the SendableRequestSet on startup.
  Move more of the queue code to the selector. Javadocs to explain what goes where.
  Must add to clientRequestsByIdentifier (and check for reused identifiers).
  Add to the right scheduler.
  Fix maxRetries = -1!
  Use the right CRSBase / salter.
  Fix not reading Bloom filters! Ooops!
  Fix not serializing parent fields (in spite of docs saying it should work): Make the parent classes Serializable. Add serialVersionUID's.
  Fix infinite loop.
  Call notifyClients() in onResume(). Make the fields it sets in ClientGet transient as they will now be refreshed on startup; so we don't need to save them, less to serialize.
  Run directly if in onResume()
  Logging
  Not persistent any more.
  Add missing serialVersionUID's.
  Remove some more ObjectContainer's.
  Remove some more ObjectContainer's.
  More ObjectContainer's.
  Remove ObjectContainer from Snoop callbacks.
  Remove ObjectContainer from Encodeable interface
  More ObjectContainer stuff
  Remove unused method.
  Remove obsolete comment.
  Delete obsolete javadocs
  Delete obsolete comment.
  Delete trivial ObjectContainer's - imports and objectOn* callbacks.
  Remove ObjectContainer from crypto.
  Remove ObjectContainer from keys
  Remove ObjectContainer from lots more of the code.
  Remove SupportsBulkCallFailure, no longer necessary.
  Remove more ObjectContainer's. What remains will be needed for auto-migration.
  Fix serialization regression in PersistentTempFileBucket.
  Resume buckets before deleting them.
  Missing proxy.onResume()'s (fix NPE)
  Don't start resuming until after we've read everything in.
  Checksum all objects in client.dat
  Clear the flag once done.
  Fix not updating checkpoint time. -> Hopefully will be fewer gratuitous writes.
  Create DummyJobRunner, pass it in to splitfile code when doing a transient request. -> Don't need if(persistent) jobRunner... else executor... -> Fix immediate problem of writing client.dat when a transient request finishes.
  Checksum the salt value.
  Regenerate Bloom filters when necessary.
  Rename *New -> *, New* -> *.
  Zero out when the checksum fails.
  Don't forget to start() the storage. (Important for e.g. checking what segments already need decoding etc)
  Factor out getSalter()
  Regenerate Bloom filters off-thread.
  Allow jobs to run while loading (in onResume()), but don't allow to write a snapshot.
  Fix tests.
  Doh, wrong variable - fix NPE
  Fix tests, missing file.
  Don't write a checkpoint if we haven't started successfully, even if we are shutting down.
  If mustCheckpoint is true we must write on onStarted(), otherwise jobs will stall.
  Slightly more robust.
  Deal gracefully with onResume() throwing anything.
  Lots of logging
  Always checkpoint after restarting due to data corruption / corrupt bloom filters.
  Consistent naming.
  Fix NPE creating an insert.
  Make Serializable.
  Vector -> ArrayList. Indent.
  Use ArrayList rather than Vector and fix synchronization issue.
  Save all the requests we can save, even if one fails.
  Serializable.
  Make progressMessage transient here too, it will be restored later.
  See 1e4b5b4829c79d2ee117354c39cf6e029f6d800e
  Imports
  Fix infinite recursion, see 15378c51b70f491fda7ede080437a8dac5b4dd68
  Don't forget to call parent onResume()
  Fix regression in 351282d0248fd56303e14b07f458fd0adabad8b6
  Add onResume() for inserts.
  Ensure that second and later calls to onResume are ignored (fix infinite recursion).
  Missed one.
  Only call innerOnResume once for ClientRequester's too.
  Fix FCP bug (possibly old): Non-global persistence=forever requests didn't report progress, and didn't list on connect.
  Fix NPE due to bandwidth stats not being loaded
  Fix persistent stats.
  Remove db4o-era error handling cruft. This can't happen.
  Correct returnBucket.
  Send an error if asks for data only and we don't have any data to send.
  Clarify.
  Delete useless dontFree. We were freeing the Bucket but free() was a no-op as we set deleteOnFree=false.
  Do not store the AllData message.
  Make PutFailed Serializable.
  Store the ClientRequest's, not the ClientRequester's. I.e. start at the FCP persistent requests layer. There are places where this is important, e.g. for big completed site inserts we may want to store the ClientPutDir but not the (huge!) DefaultManifestPutter.
  Write the request identifier separately and use it to skip anything we've already loaded. Not useful yet but will be when we process backups automatically. Plus will help with emergency-restart mechanism for when serialization fails.
  Remove unused variable charset
  Use getRequestIdentifier for the basic client details.
  Explain exactly what getClientDetail() is for and what it should contain.
  Eliminate postFetchProtocolErrorMessage
  FCP CHANGE: Remove TempFile. We will use temp files, in the same directory, but ClientGetter and FileBucket handle this automatically.
  Add tempFileAlreadyExists(). True only for tempfiles where we have created the file already.
  If it's a tempfile, it must exist before we open it. Check permissions as well.
  Fix symlink race: Create the file.
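Several commits here converge on an onResume() contract: after deserialization each object resumes exactly once, second and later calls are ignored (that is what fixes the infinite recursion between parent and child), and the real work lives in innerOnResume(). A minimal sketch of that pattern, with hypothetical names and a plain Exception standing in for ResumeFailedException:

```java
/** Hypothetical sketch of the idempotent-resume pattern: a transient
 *  flag makes second and later onResume() calls no-ops, so parent and
 *  child can safely resume each other without infinite recursion. */
abstract class ResumableRequest {
    private transient boolean resumed;

    final void onResume(Object context) throws Exception {
        synchronized (this) {
            if (resumed) return;   // second and later calls are ignored
            resumed = true;
        }
        // Runs at most once; may itself call onResume() on parents or
        // children, which is now safe.
        innerOnResume(context);
    }

    /** Per-class resume work; stands in for ResumeFailedException. */
    protected abstract void innerOnResume(Object context) throws Exception;
}
```

Setting the flag before calling innerOnResume() is what breaks the cycle: a re-entrant call sees the flag already set and returns immediately.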
  Enforce and document mutual incompatibility
  Simplify logic. Use createNewFile() in this case too.
  Document and add FIXME.
  Remove unused methods.
  Short-cut.
  Automatically move tempfiles in onResume(), and keep a File pointing to the current location of the tempfile in TempFileBucket.
  Maybe the file is somewhere else ...
  Remove init()
  Remove some leftover TempFilename stuff.
  Remove more TempFilename stuff.
  Doh, we already created it in getOutputStream()!
  Fix progress bar, adding to wrong blocks.
  Rename to clarify.
  Document.
  Do not allocate a temp bucket up front for return type = direct. Let ClientGetter allocate it later on, when we know how big it needs to be etc. Only keep the returnBucket if we are doing returnType=direct
  Make size and MIME type and sentToNetwork transient, and send events for them on resume too.
  Get rid of getFinalBucket(), getBucket() is now what is needed; fixes getting downloaded data. Explain.
  Don't ask the getter.
  compatMessage needs to be persisted.
  Factor out CompatibilityAnalyser to a separate class. Some changes to how ClientGet handles events. Explain.
  min/max compat mode are *int* not long.
  Much more client detail. We will use this for restarting requests if all else fails. Write enough to restart the request, and if possible (in trivial cases, to be implemented soon) to continue it.
  Write recovery data for each request.
  Add Bucket.storeTo(DataOutputStream)
  Store the Bucket if download-to-temp-space.
  Make some fields transient, and recover randomSeed on onResume(). Note that this class won't be used much in future, we'll use the new crypto stuff.
  Make fields transient
  Fix synchronization.
  Make length transient. We should really get rid of it though.
  Move StorageFormatException to top level type.
  Restore FetchContext from DataInputStream.
  Add isErrorCode()
  Make sure that isErrorCode is kept up to date.
  Restore CompatibilityAnalyser from DIS.
  Restore GetFailedMessage from DIS.
  Restore requests from getClientDetails if serialization fails.
  Store the URI! Ooops!
  Skip the recovery data too.
  Fix bad default max splitfile blocks per segment.
  Create the FCPClient early if recovering, because we need it in the constructor.
  If logging is MINOR, test emergency recovery.
  Logging
  Bugfixes for restoring fetch
  Need to start the requests.
  Don't fail everything if one resume fails.
  Make *Bucket.MAGIC public and an integer. We will use these to restore Bucket's that support storeTo()
  Implement restoring a Bucket from a DataInputStream.
  Restore downloads to temp space (after finished).
  Simplify (or at least de-duplicate) reading/writing of checksummed stuff with lengths.
  Pass in ChecksumChecker to writing recovery data. We can use it to isolate and check individual components where we can recover from failure.
  Logging (may lose logs if shutting down)
  Better API
  Try to isolate the returnType bucket: Separate length and checksum, and if it fails, restart.
  Always need a CompatibilityAnalyser.
  Accept events even if finished - needed after restart.
  Pull the progress data from the ClientGetter directly to avoid race condition issues. Partially reverts 3f402837348cbf3f66914c399dfc8773efd516c6.
  Synchronization.
  More synchronization fixes. Hopefully this works in all cases now (including aborting).
  Slightly better for big files.
  Doh, remove debugging code. Argh. :|
  Close stream.
  Separately write (within checksum/length) fetch failure details, and restart if we can't reconstruct.
  MIME type *can* be null at least in some failure cases.
  If it's failed, it's started; fix display when recovered.
  Fix logging
  Move default fetch/insert contexts to ClientContext.
  Avoid leaks
  Write the FetchContext separately too, and use the defaults if we can't read it.
  Move StorageFormatException to freenet/support/io
  Use extensionCheck if necessary. Persist extensionCheck. Don't need to pass in.
  Keep initialMetadata in ClientGet
  Fix disk space leak: Free initialMetadata when done.
  onResume for initialMetadata
  Store initialMetadata.
  RandomAccessFileWrapper: Support readOnly flag, don't ask for a mode string.
  Enforce length limit preemptively in pread() too.
  Flag to do a plain delete(). By default do a plain delete, option is secureDelete.
  Option for secure deletion for pooled RAFs
  Read only flag for PooledRandomAccessFileWrapper too.
  Doh, readOnly in pwrite not pread
  Add RAF.storeTo()
  Restore RAFs from a serialized-to-DIS form.
  Resume of an RAF can fail. Check in onResume() if possible, use ResumeFailedException.
  Store the tempfile ID, so we can deal with prefix changes.
  Move RAF files too.
  Missing files for checksumming changes earlier.
  Restore from mid-way through a splitfile. Set a flag to avoid recreating the SingleFileFetcher.
  Fix condition.
  Fix inverted condition in secureDelete
  Missing write.
  Failed block but don't notify clients is now possible.
  Don't clobber data we will need on restarting from resuming.
  Reconstruct progress, MIME type etc from splitfile.
  Fix writing incorrectly -> failure (writing the fields separately not the same as writeTransientProgressFields any more).
  Checksum the splitfile restart, fail back to restarting the whole file.
  Synchronization
  Don't create shadow bucket unless finished. Fixes size = Unknown.
  Check that the size is correct. finalLength is not the same as decompressedLength! (Changes format again)
  Logging
  Delete obsolete test, we don't zero out in all the factories.
  Create client.dat.bak.
  Add fullyResumed() to tell ClientLayerPersister whether we resumed fully or just restarted.
  Refactor load() prior to implementing backup-reading support
  Handle the salt through loaded too. Don't write salt twice, once checksummed and once not. Ooops!
  Move filename check outside.
  Load from the backup if necessary and if it exists.
  More robust, and add logging.
  Doh!
  Clarify, encapsulate.
  Logging
  Slightly more robust, and register nulls when we can't load.
  Logging
  Rename.
  Simplify.
Fix NPE
Update documentation, explain design decisions a bit.
Add the same checks to FetchContext constructor that we already have in serialization.
Check for some bogus fields in FCP.
Add equals() and hashCode() to InsertContext and FetchContext.
Add persistence test for FetchContext. This one is particularly likely to change and have persistence problems.
Refactor slightly
Add function to DatabaseKey to create an AEADCryptBucket for the ClientLayerPersister (client.dat). CRYPTO PLEASE REVIEW! (This adds another KDF)
Beginnings of crypto support for client.dat
Use the same lock to serialize loads with checkpoints. Not really necessary yet but it will be...
Beginnings of support for setting the files and Bucket's later on, will be needed for password encryption.
Don't need to pass ObjectContainer into NodeClientCore any more.
Get the flag for whether to load buckets etc from whether we made any progress so far, don't pass it in.
Encryption of client.dat should work now ...
Make sure we cleanup the old files ... after writing successfully.
Logging
Logging
Logging
Move inside the condition and fix the condition; fixes error on new start.
Only call completedInit() ONCE
Logging
BaseFileBucket.onResume() must be called, or we get "Size of downloaded data has changed"; it's okay not to return early in this error case.
Logging
Check whether serialized-in request is obviously broken
Add Serializable to SSKs and keys used in them - should fix SSK fetches.
onResume() can throw ResumeFailedException.
onResume() can throw ResumeFailedException.
Catch RuntimeException's in case of severe serialization issues.
Don't register with the FCPPersistentRoot until the rest of onResume() has completed successfully; break innerResume() out so we run even the per-class stuff first.
If serialization fails, go back over the file(s) with it turned off, just restoring from scratch. Hence most delayed errors from serialization will now be automatically recovered from.
Explain.
Serialization.
Make fields protected.
Fix serialization. We need the parent fields, and can't restore them in no-args constructors, even though some of these classes aren't persistent.
Indent.
Turn off and on encryption on client.dat on the fly.
Logging
Remove unused code/imports.
MAXIMUM seclevel support (delete client.dat*)
Null out after using.
Don't store these, they are redundant (breaks serialization of failed requests but will still resume).
Use the long message.
Bucket.onResume can throw ResumeFailedException, and use this in TempFileBucket.onResume.
MultiReaderBucket doesn't persist.
Fix NPE
Fix serialization, fixes single file fetches involving archive unpacking.
Pass in boolean for whether the checkpoint is the final one.
Check for killed.
Add shutdown callback to make sure metadata has been written.
Throw an error here
Remove canResume(), bad idea (reverts 6ac750a2c7fc4bbac6b6aca34a94d03c80f89544 more or less)
Check for null here. Important for e.g. site inserts.
Add FIXME
Update the fields on the ClientGetter.
Remove SplitfileBlock
Remove unused persistent recentSuccesses code
Move recentSuccesses to ClientRequestSelector.
Remove more ObjectContainer's.
Remove unused plugin store migration and old format dump code
Remove NullObject
Document.
Add queueInternal(), allows us to chain a series of jobs while ensuring checkpoint doesn't happen in the middle.
queueNormalOrDrop not queueLowOrDrop. Most of the client layer runs at NORMAL, except for e.g. FEC jobs.
Use NORM_PRIORITY not LOW_PRIORITY
Remove duplicate class SimplePersistentJobRunner
Imports.
Add getJobRunner(persistent)
Run off-thread but atomically. Most of the complicated cases are in SingleFileFetcher.
Keep the Key not the URI
Report the encoded key if possible in onResume
Explain.
FEC jobs must not occur at the same time as checkpointing. They may be queued by either the MemoryLimitedJobRunner or the PersistentJobRunnerImpl/ClientLayerPersister, so hopefully this doesn't mean that we'll have to wait for the entire splitfile decode to finish (if it happens all at once, as can happen with cross-segment) before checkpointing.
Explain a bit.
Make field private
Make sure we checkpoint
Explain, add FIXMEs
Remove persistent from *RandomGrabArray*
Logging
Don't call onEncode twice
If the file is already the correct length, use it. Make sure that it really *is* final. Need to be sure for completion. Explain.
Add completion via truncation support to SplitFileFetcher, with a callback for ClientGetter. Not used yet.
Implement completion via truncation (ClientGetter side).
Make USKProxyCompletionCallback serializable
Fix length detection -> fix Unknown size
Logging (keep this for now)
Delete file on error.
Check disk space when creating file for completion-via-truncation.
Clear the ArchiveContext when restarting.
Cross-segment support. Clarify locking
Check for need to decode/encode cross-segments on restart
Fix some cross-segment check blocks bugs and simplify dataBlocks + crossSegmentCheckBlocks -> blocksForDecode()
Include the total number of data blocks and check blocks in the basic settings, and check it on startup.
Write completeViaTruncation flag. Write total cross-check blocks.
Keep cross-check blocks separately to the main blocks: Either after the data and check blocks *for a segment* if not completing via truncation, or after ALL the data blocks if we are. This is essential for truncation to work.
Heavy logging to see if this does anything.
Actually enable cross-segment, ooops
Fix NPE
This was correct in the code it was copied from :|
Fix bugs
Bugfix - actually use tryDecode, don't start decoding if already decoding.
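The "cross-check blocks after ALL the data blocks" commit above is the key to completion via truncation: if every data block of every segment is laid out first and in order, the temp file's prefix already *is* the final file, and finishing the download is just truncating off the trailing check blocks. A sketch of the offset arithmetic (the 32 KiB block size matches fred's CHK payload size; the method names are illustrative):

```java
// Sketch of the completion-via-truncation layout described above: data
// blocks first (contiguous, in final-file order), then all check and
// cross-check blocks after them. Completing = truncate to content length.
public class TruncationLayout {
    static final int BLOCK = 32768; // 32 KiB CHK data block

    // Offset of data block n, counted across the whole download:
    // data blocks come first, so this is simply n * BLOCK.
    public static long dataBlockOffset(int n) {
        return (long) n * BLOCK;
    }

    // Check/cross-check block n lives after all the data blocks.
    public static long checkBlockOffset(int totalDataBlocks, int n) {
        return ((long) totalDataBlocks + n) * BLOCK;
    }

    public static void main(String[] args) {
        System.out.println(dataBlockOffset(3));       // 98304
        System.out.println(checkBlockOffset(128, 0)); // 4194304
    }
}
```

In the non-truncation layout, each segment instead keeps its own cross-check blocks next to its data and check blocks, which is why the flag has to change where every offset is computed.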
Fix array bug
Fix calculation of locations
Pass in DiskCheckingRAFFactory from ClientContext, use it when creating files for complete-via-truncate, don't just cast the provided factory as it won't be the right type if non-persistent.
File was way too long! We only need to store the downloaded blocks for each segment i.e. at most the number of data blocks for that segment (they won't necessarily all be data blocks). Now the download tempfile is only slightly bigger than the content. This also fixes a hash error with completion via truncation.
Fix NPE in decoding
Fix bogus key comparison
Logging
Basic settings can be pretty big now.
Better error
Don't try to move it if it's not a persistent tempfile
Can definitely be a file other than a persistent-temp-file now.
Clarify and fix bugs reading in cross-segment
Logging
Fix block offsets
Log and check block offsets
Check a bit earlier for finished
Fix segment status offsets
Deconfuse!
Fix segment status offset
Simplify block count code
Fix resuming cross-segment downloads
Fix resuming cross-segment: Check for blocks after we know which blocks we have! Don't forget about the callbacks. We may lose the blocks etc.
May need to decode/encode cross-segments regardless of how we restarted.
Explain.
Don't write metadata twice.
Don't write metadata immediately if checking the store.
Delete obsolete comment
Consistently use int for block numbers. Metadata uses an int, and in future we may have different FEC algorithms which may need it. We were inconsistent, sometimes using int, sometimes short. Changes the on-disk format.
Don't re-encode the block (refactoring onGotBlock a bit)
Free data earlier and simplify code.
Write metadata lazily after getting a block, even from network. It will get written in 10 minutes or on shutdown, and if we're getting a lot of them in the <10min before a crash then we're gonna re-get them easily enough after it.
Fix de-duplication of lazyWriteMetadata jobs
Keepalive plugin needs this to be public.
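The lazy metadata write (and the de-duplication fix for it) above follows a common pattern: many blocks may arrive in quick succession, but only one flush job should ever be queued at a time, with the flag cleared when the flush actually runs. A minimal sketch of that pattern, with illustrative names (not fred's PersistentJobRunner API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a lazy, de-duplicated write: requestWrite() may be called on
// every incoming block, but only the first call since the last flush
// queues a job; all later calls piggyback on it.
public class LazyWriter {
    private final AtomicBoolean flushQueued = new AtomicBoolean(false);
    private int writes = 0;   // real disk writes performed
    private int requests = 0; // times a write was requested

    // Called whenever a block arrives and the on-disk metadata goes stale.
    public synchronized void requestWrite() {
        requests++;
        if (flushQueued.compareAndSet(false, true)) {
            // In the real node this would queue a delayed job (run within
            // ~10 minutes, at the next checkpoint, or on shutdown).
        }
    }

    // The queued flush job, run later by the job runner.
    public synchronized void flush() {
        if (flushQueued.compareAndSet(true, false)) {
            writes++; // one disk write covers all pending requests
        }
    }

    public int writeCount() { return writes; }
    public int requestCount() { return requests; }

    public static void main(String[] args) {
        LazyWriter w = new LazyWriter();
        w.requestWrite();
        w.requestWrite();
        w.requestWrite();
        w.flush();
        System.out.println(w.requestCount() + " requests, " + w.writeCount() + " write");
    }
}
```

The trade-off named in the commit message applies: losing up to ten minutes of metadata on a crash only costs re-fetching blocks that were cheap to get in the first place.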
Need to update finishedFetcher to be accurate for hasFinished(), though we're not going to close()
Missing close()
Fix cancellation of cross-segment splitfiles: Need to cancel the cross-segments as well, similar to segments. Fixes not deleting massive files!
Fix another disk space leak, trivial bug in changes to DelayedFreeBucket.
Don't try to succeed if already failed.
Free the RAF when cancelling a completeViaTruncation request.
Better fix, always frees the data on cancel now.
Better check for finished, hopefully avoid errors.
Write a SHA256 hash of the metadata. It's way too big for a CRC to make sense.
Cross-segments don't finish (except if they succeed or are cancelled). We can succeed/fail/free as long as they are not actually decoding right now.
Logging
Fix failure
Explain what these are for (now).
Don't setHasCheckedStore and thus write to disk if we're failing anyway.
Write to disk in-thread, simplify especially in case of failure (DNF with DSOnly=true e.g.)
Make sure we fail with the right error code.
Check for cross-segments
Better handling for DSOnly failure
Fix failing in finishedEncoding (this is mainly for DSOnly failures).
Distinguish between failed and run out of retries (for cross-segment failure).
Implement a fixed maximum error code for fetch/insert exceptions.
Persistence for FailureCodeTracker
Logging
Remove unused methods.
Persist the FailureCodeTracker
Write errors when changed.
Fix unlikely NPE
Logging
Logging
Make Serializable.
Fix resuming a FileBucket
Need to resume the Bucket
Serializable
Fix synchronization.
Fix not reporting URIs for two block inserts, remove unused code from old persistence.
Fix race condition causing requests to show up as Compressing or Waiting indefinitely even though they are making progress
Add getCheckBlocks() (Copy from old FECCodec.standardOnionCheckBlocks)
Add a specific error for the 64TB limit.
Move earlyEncode and getCHKOnly into InsertContext
Add another fill method for unit tests
Add factory method for InsertContext for unit tests.
Document.
Enforce length.
Add adapter classes for read-only Bucket / InputStream from RandomAccessThing
Add read-only proxy RAF (for unit tests and maybe wider use)
Rename SplitFileFetcherCallback -> SplitFileFetcherStorageCallback.
Implement storage for splitfile inserts, and some tests. Not integrated yet, and cross-segment isn't tested.
Logging
Check blocks when reading them.
Fix allocation of cross-check blocks.
Fix reading the wrong block
Fix NPE, oops!
Test cross-segment splitfiles.
Check for changing keys.
Turn off cross-segment tests unless extensive is enabled.
Debugging code: Encode even earlier.
Update comment
Track which blocks have been inserted, and complete when all segments have inserted all blocks.
Fix NPE
Factor out block choice and retry tracking code. Will reuse some of this for inserts.
Use SimpleBlockChooser for inserts.
Make these fatal, can't retry, be consistent with FetchException
Handle failure.
Do all this risky stuff off-thread.
More statuses, always create Metadata before feeding it to callback.
Check for failure or completion before storeStatus()
Check status in tests.
We must always have a key when a block insert completes.
Test completion (success).
Make InsertException.mode public
Fix NPE
Include error codes in final failure InsertException
Add tests for simple failure modes.
TransientChosenBlock -> ChosenRequestImpl (now used for persistent too)
Make sure callbacks are called on the jobRunner if necessary, and always call them off-thread.
Always pass in the key in onSuccess, and simplify slightly.
Use KeysFetchingLocally.hasTransientInsert for persistent requests too. (We can safely read persistent data structures from any thread, we just can't modify them... I considered keeping this in the individual senders but the memory we save is more important than a few clock cycles)
Rename hasTransientInsert etc as it's not transient any more
Pass in KeysFetchingLocally to SplitFileInserterSegmentBlockChooser
Add chooseBlock(), including KeysFetchingLocally support (i.e. tracking of which blocks are currently inserting).
Implement n-consecutive-RNFs-count-as-success hack.
Count the RNFs if they are not consecutive.
Write the consecutiveRNFs hack data to disk
Fix concurrency glitch with failing at the same time as encoding.
Test failing while encoding.
Need to unlock AFTER calling callback. The callback could modify stuff, it's still part of the persistent job.
If shutting down, throw in jobRunner.lock(). Handle this correctly.
Always call finishedEncoding()
Document.
Write overall status i.e. the FailureCodeTracker.
ALWAYS generate keys up front. This saves a lot of small random disk writes, which are bad even for SSDs, replacing them with a more or less linear write at the end of encoding check blocks. It wastes a bit of CPU though as we'll have to re-encode later. This is an acceptable tradeoff. Even better, it will allow us to write the segment status lazily.
Fix test
Fix bug in 3187afbc72fd8f5432a4ca9fab7ebed2a45f8f1d (inverted condition)
Write the metadata (errors and the segments' blockChooser's) to disk, lazily, after blocks succeed or fail. Add a dirty flag for overall status (meaning errors) too. Write on shutdown. (Will need to be wired in to SplitFileInserter later)
Write offsets just after the main header.
Fix length check
Keep rafLength, remove unnecessary argument.
Pass less in for restoreRAFFrom().
onResume() is called later, don't need ClientContext here.
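One of the commits above adds adapter classes exposing a RandomAccessThing as a read-only InputStream. The shape of that adapter is worth sketching: the positional-read interface is thread-safe with a fixed size, and the stream merely tracks its own cursor. `Pread` here is a hypothetical stand-in for the real interface, not fred's actual API:

```java
import java.io.*;

// Sketch of a read-only InputStream over a pread-style random access
// interface. The stream never writes and never seeks the underlying
// object; it just advances its own position.
public class RafInputStream extends InputStream {
    public interface Pread { // stand-in for RandomAccessThing
        long size();
        void pread(long offset, byte[] buf, int bufOffset, int length) throws IOException;
    }

    private final Pread raf;
    private long pos = 0;

    public RafInputStream(Pread raf) { this.raf = raf; }

    @Override
    public int read() throws IOException {
        byte[] one = new byte[1];
        int got = read(one, 0, 1);
        return got == -1 ? -1 : (one[0] & 0xFF);
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        long remaining = raf.size() - pos;
        if (remaining <= 0) return -1; // end of the fixed-length RAF
        int toRead = (int) Math.min(len, remaining);
        raf.pread(pos, buf, off, toRead);
        pos += toRead;
        return toRead;
    }

    public static void main(String[] args) throws IOException {
        final byte[] data = {104, 101, 108, 108, 111}; // "hello"
        Pread mem = new Pread() { // in-memory stand-in for a file-backed RAF
            public long size() { return data.length; }
            public void pread(long o, byte[] b, int bo, int l) {
                System.arraycopy(data, (int) o, b, bo, l);
            }
        };
        BufferedReader r = new BufferedReader(new InputStreamReader(new RafInputStream(mem), "UTF-8"));
        System.out.println(r.readLine()); // hello
    }
}
```

Because the adapter holds no write path, it is safe to hand out for unit tests against code that expects streams.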
Move ReadOnlyRandomAccessThing into freenet/support/io/
Make ReadOnlyRandomAccessThing wrapper persistent
Write more data in header needed for resuming
Keep rafLength
Checksum the offsets
Add preadChecksummed*, same as in SplitFileFetcherStorage, will be used...
Set SUCCEEDED before calling callback, helpful for tests
Add getFixedLength()
Don't write segment number in fixed settings (there is an overall checksum)
Add create ChecksumChecker by ID
Persist BlockChooser's (except cooldown)
Persistence for SplitFileInserterStorage, with a couple of tests.
More tests.
New interface RandomAccessBucket will be needed for compression etc.
Make ArrayBucket implement RandomAccessBucket
Missing file for 84e78990755ccd8f46b579371704e45635d414fe
Add flag to not delete on free() to PooledRandomAccessFileWrapper
Make all the FileBucket's into RandomAccessBucket's.
Fix delayedFreeBucket possible infinite loop in test class
Make NullBucket implement RandomAccessBucket
Make TempBucketFactory create RandomAccessBucket's, and disable crypto.
Implement DelayedFreeRandomAccessThing, created from DelayedFreeBucket, which is now a RandomAccessBucket.
Make factories produce RandomAccessBucket's
Simplify and reduce unnecessary permissions
Fix permissions and remove obsolete code
Fix migrating RAFs to disk not keeping contents, oops
Set hasMigrated to true if we convert to RAF after writing to disk, so we don't break the bytes-in-use counter and leak memory. (Bug introduced in a1438c383015549478513af5aa82e9ddb3fe04b2).
More tests.
Check for already freed
Prevent use of an ArrayBucket after using toRandomAccessThing()
Prevent reuse of BaseFileBucket after toRandomAccessThing and check for already freed
Check for already freed, check for open streams
Document
Fix NPE, doh (bug in c39a59450417ced0e261a2c797d0f2f84e325538)
More tests.
Make SimpleReadOnlyArrayBucket a RandomAccessBucket
Use RandomAccessBucket for inserts, not Bucket.
Use RandomAccessBucket in InsertCompressor too.
Change SendableInsert API: Pass the block number identifier as SendableRequestItem not Object
Add a method to convert LowLevelPutExceptions to InsertExceptions. Will be used by splitfile code, should be used by SBI
Let BlockItem implement SendableRequestItem, and cache hashCode
Add an overall chooseBlock() for inserter, and add FIXME's
Add onResume()
Implement SplitFileInserter and SplitFileInserterSender. Not wired in to SingleFileInserter yet...
Report blocks and update the total number of blocks for the final top block
Turn off temp bucket tests with crypto for now.
Fix passing in null
Fix block counts
Actually start it! Enable splitfile inserts.
Fix GetCHKOnly etc
Count blocks when inserted (missing callback etc)
Delete more cruft
Logging, better error
Rename innerDecode to innerEncode (more accurate)
Obsolete FIXME
Logging
Use the right factory!
Return the real block number, fix thinko adding the data block count at the wrong point
Refactor for equal paranoia for both paths
Use the right BucketFactory.
Fix more block number issues
Fail here
Free the data if necessary, fix disk leak. (Also close it)
Add equals() and hashCode() to RandomAccessThing's where it makes sense, and explain where it doesn't. Persistence will need this.
Fix storing and loading freed DelayedFreeRandomAccessThing's
Fix NPEs when persistent
Missing Serializable
Doh, revert 93c960675f68558e6be1f8ce279dcb40491a5368
Resume the RAFs first
Logging
Fix bogus open stream check
We need DelayedFreeRAT to have equals() in terms of underlying so we can check it on resuming.
Move checking for whether it's the same originalData into the constructor, and always use the passed in RAF so the two are the same
Fix typo in restoring cross-segments
Fix confusion with cross-segment block numbers vs segment block numbers, also broke resuming cross-segments
Fix comment
Fix bugs in resuming cross-segments at SFIS constructor level, here issues with the block of bytes containing the cross-segment fixed settings
Add a test for cross-segment resumption. Like the other cross-segment test this only runs with -Dtest.extensive=true, as it is fairly heavy/time consuming.
Fix FailureCodeTracker not being fully Serializable by converting it to use autoboxing rather than an inner class (slightly more memory churn worst case but in practice probably less as small ints are interned)
Comments
Imports
Comment: no, we don't
Update comments re DB4O / persistence, add a few javadocs.
Remove now unused class DatastoreCheckerItem
Remove transientToString db4o madness
Remove more db4o stupidity
Remove spectacular db4o stupidity in Compressor
Remove ClientRequestSchedulerNonPersistent/Core
Rename ClientRequestSchedulerBase -> KeyListenerTracker
Remove more cruft
Add Serializable, serial version IDs
Rename TransientSendableRequestSet to SendableRequestSet and remove the interface
Serializable
Don't need Encodeable.
Remove BackgroundBlockEncoder. No longer necessary.
Make FetchException modes an enum
Make InsertException use an enum too.
getClientDetail should return empty unless persistent
Check here too.
Explain.
Logging
Logging
Logging and better toString()
Move CooldownTracker to ClientRequestSelector, so there is one per CRS; take the ClientRequestSelector whenever either accessing the RGA tree *or* the CooldownTracker. Fixes a nasty stalling bug which caused requests which can fail to never complete. Still rather ugly, I will try to improve this further...
Need to setCachedWakeup() here because CooldownTracker doesn't automatically check parents. It should, will look into this soon.
Remove db4o cruft
Fix comment typo
Explain why we need a single lock for the entire tree.
Pass in the root rather than the CooldownTracker.
Synchronize on root not on this
Synchronize in setParent
Remove moveElementsTo, and imports.
Always track retry count for simple one block fetches
Remove use of make(), keep cooldown times on the BaseSingleFileFetcher.
Remove HasCooldownTrackerItem, make etc. Not needed post db4o.
Locking in setParent()
Add getParentGrabArray() to HasCooldownCacheItem
Rename HasCooldownCacheItem -> RequestSelectionTreeNode
FIXME
Get the parent directly, don't store it.
Rename newPriorities -> priorities
Remove unused class
Checks and logging
Remove the actual cooldown times cache and keep everything on the RGAs/SRGAs, but keep CooldownTracker for now.
Implement (S)RGA.clearCachedWakeup recursively
Implement setCachedWakeup recursively too
Logging
Call clearCooldownTime directly
Wake up starter if reach root, implement reduceCooldownTime for SendableRequest
Call reduceCooldownTime directly, and only call it if we are actually reducing it (we don't need to e.g. register parents any more).
clearCooldownTime() must call parent, fix bug in 7c91ce42d09cf7ff4ea230ef6e1c9793d740872b, also delete duplicate code
Rename cooldowntime -> wakeuptime on RequestSelectionTreeNode
Update comments
Fix TheSeeker's NPE on ReturnType=none
Implement shutdown callbacks for inserts. Was already written for splitfile insert but wasn't used.
Logging
Update comment, add FIXME
Logging, comment
Create the MemoryLimitedJobRunner in the NodeClientCore
Implement a thread limit for MemoryLimitedJobRunner, and a reasonable heuristic for the limit in the node.
Make memory usage for FEC configurable too.
Increase the default memory limit for FEC if we have a lot of memory
Add FIXMEs
Reduce the number of threads to half the available processors. There will be some disk I/O as well, but even so we don't want to have a measurable impact on system performance...
Get rid of InsertCompressorTracker and restart insert compression on resuming a SingleFileInserter, if it is necessary.
Fix not registering PooledRAF's that happen to be persistent temp files
Fix garbage collection of persistent-temp files
No it isn't. Doesn't need to be serialized
Remove excludeSummarily()
Pass CompatibilityMode as a CompatibilityMode
Clarify and simplify: We convert COMPAT_CURRENT as early as possible
Check for COMPAT_CURRENT
Clarify and check. It's up to the caller to get the compatibility mode right.
Fix the compatibility mode
Make sure we always report a URI even if LocalRequestOnly
Support calling onEncode twice, check for different keys
Call parent constructor, set realTime flag
Delete zero-arg constructors. These are NOT needed for serialization and can cause subtle bugs.
Add assertions
Improve error
Logging
Keep the persistent temp ID when converting from a PersistentTempFileBucket to a RAF -> hopefully fix losing files
Explain and don't call onResume twice for RAFs
Use ChosenBlockImpl
More robust.
Call removeFetchingKey etc in ChosenBlockImpl callbacks so that we always remove after we have at least tried to call the callback, avoid race conditions. Don't call removeFetchingKey etc elsewhere, use ChosenBlockImpl more places.
Store as a CompatibilityMode
Fix max() (typo)
Simplify
Store a short code for each CompatibilityMode, and be able to look up by these codes. We will consistently use short for CompatibilityMode's in numerical form now...
Use short for modes in compatibility mode analyser, events (changes on-disk format)
Use byCode etc
Check for valid modes
Missed a use of ordinal()
Don't send onEncode twice.
Check for onEncode twice
Ignore onEncode twice
Call onBlockSetFinished
Make sure we only call overall success once.
Fix big containers losing files: When we move a file to an archive ("external" container), we need to reset its name as we won't use the full path when looking it up.
Rewrite completion logic again and complete late if collecting binary blob.
Don't restart ContainerInserter's on resume unless they've already been started.
Hopefully implement resuming for site inserts
Fix NPE on resume
Check for hashes earlier
Constructor isn't used
Fix minor regression in 4eaa9d0385d9b7818f4ae9a4fa4b1e288485319e breaking request restarting
Fix bogus onEncode error, recording encoded uri wrong
If no hashes, return null
Implement the wakeup optimisation for inserts. That is, avoid polling by returning a wakeup value of Long.MAX_VALUE when all our block inserts are running and we can't choose another one. We don't do the 3-and-then-wait-half-an-hour thing for inserts which is "cooldown" as it happens on requests, as it doesn't make sense for inserts.
Clearer
Don't send two lots of messages for each block for site uploads
Remove removePutHandlers(), not necessary any more.
Don't start ContainerInserter's. start() immediately transitions to an SFI, so onResume() shouldn't be called after starting.
Comments
Add shutdown callbacks
Logging
Resume the origMetadata, which actually contains the Bucket's until the ContainerInserter is started.
Turn notifyClients back on, it may be called explicitly
Remove BaseFileBucket.length evil hack to work around long-gone file.length() bug.
Fix not calculating size correctly for site inserts
Fix error message just after cancelling: It is possible to call callbacks while stopping (we wait for the actual segment encodes to stop, but can't stop the callbacks)
Fix l10n strings
Fix another minor insert bug
Test splitfile with segments with 128 and 1 blocks
L10n error messages
Rename FCPClient to PersistentRequestClient
Rename FCPPersistentRoot -> PersistentRequestRoot
Consistently lock on ClientRequestSelector, and clarify. Partly bug #3865 but also relevant to recent work in purge-db4o branch.
Use the right Bucket
Comments
Add shutdown()
Shutdown FEC threads and wait for them to finish during shutdown
Reduce the number of EnterFiniteCooldown events/messages
Fix not restarting encode
Clarify (is already synchronized)
MemoryLimitedJobRunner: Fix tracking threads in use! (Ever since the threads-in-use limit came in, possibility of it breaking...)
Fix tracking which keys we have, should fix some insert stalling after resuming bugs
Add tests for resuming inserts before encode finished.
Simplify
Write status after encoding a cross-segment
Read cross-segment status on startup. Start encoding segments if cross-segments already done
Thorough tests for resuming encodes
Only run cross-segment tests if extensive
Fix logging (not transient)
Logging
Document and test fixed-length representation for FailureCodeTracker
Move calling WantsCooldownCallback to SendableGet, and add getClientGetState()
Call the callback regardless
reduceCooldown when we recalculate cooldown, not when we fetch it
Not necessary here.
Logging
Fix synchronization when creating events
Run innerNotifyClients (formerly notifyClients) off-thread in client/async. Better to do it this way so that events don't get reordered; it's consistent when it runs.
Produce all the events off-thread. Explain why we need to run callbacks on-thread and therefore produce them off-thread
Always run trySendProgress on-thread.
Fix synchronization on lastActivity
Combine trySendProgress into receive()
Remove obsolete comments
Logging
This can happen if there's a race condition.
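The insert "wakeup optimisation" described above replaces polling with a time the scheduler should next look at the sender: Long.MAX_VALUE means "every remaining block is already running, don't poll me". The stall bug fixed in a later commit came from forgetting one of the two wakeup triggers: a sender must become choosable again both when new blocks are encoded *and* when a failed block becomes retryable. A sketch with illustrative names (not fred's SendableInsert API):

```java
import java.util.BitSet;

// Sketch of the wakeup contract: chooseBlock() marks a block running,
// getWakeupTime() returns Long.MAX_VALUE only while nothing is choosable,
// and a retryable failure must make the sender choosable again.
public class InsertSender {
    private final BitSet inserted = new BitSet();
    private final BitSet running = new BitSet();
    private final int encodedBlocks; // blocks that exist and can be sent

    public InsertSender(int encodedBlocks) { this.encodedBlocks = encodedBlocks; }

    // Returns a block number to insert, or -1 if nothing is choosable.
    public synchronized int chooseBlock() {
        for (int i = 0; i < encodedBlocks; i++) {
            if (!inserted.get(i) && !running.get(i)) {
                running.set(i);
                return i;
            }
        }
        return -1;
    }

    public synchronized void onSuccess(int block) {
        running.clear(block);
        inserted.set(block);
    }

    // A retryable failure makes the block choosable again; missing this
    // wakeup is exactly the stall described in the changelog.
    public synchronized void onRetryableFailure(int block) {
        running.clear(block);
    }

    // Long.MAX_VALUE = "no need to poll me"; 0 = "check me now".
    public synchronized long getWakeupTime() {
        for (int i = 0; i < encodedBlocks; i++)
            if (!inserted.get(i) && !running.get(i)) return 0;
        return Long.MAX_VALUE;
    }

    public static void main(String[] args) {
        InsertSender s = new InsertSender(2);
        s.chooseBlock();
        s.chooseBlock();                       // both blocks now running
        System.out.println(s.getWakeupTime()); // Long.MAX_VALUE: nothing to poll
        s.onRetryableFailure(0);               // block 0 choosable again
        System.out.println(s.getWakeupTime()); // 0: wake up and retry
    }
}
```

As the commit notes, inserts skip the requests-style cooldown entirely; the only states that matter are "something choosable" and "everything in flight or done".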
Obscure disk corruption error path
Test for corner case: even a cross-segment splitfile should work if we feed it just the data blocks (note that this isn't always true for *old* splitfiles)
Add ClientContext to onTransition()
Checkpoint on transition on ClientGetter
Add ClientContext to onTransition for putters too
Logging, comments
Checkpoint soon on transition in putter too
FIXME ClientContext.jobRunner should be a PersistentJobRunner, not ClientLayerPersister
Remove onMajorProgress(), not called anywhere. There are other mechanisms for urgently saving to disk.
Update comment
Add the memory limit for the temp bucket factory to the persistent temp bucket factory's disk space limit. Explain in config
Fix minor locking issue/typo/NPE
Remove KeyListener.isRealTime()
Handle bad splitfiles better
Better error handling
Make fileRestartCounter persistent, if we test-create the file and then restart...??
Delete the old blob file if present
Add minimal FCP classes needed for migration from db4o
Missed a file
Pass in the NodeClientCore not the FCPServer
Partial support for migrating persistent downloads
Missing file
Don't migrate global queue twice.
Migrate later on, avoid NPEs etc
Start the requests. Register not resume. Now they show up.
Missing serialization
Maybe fix double remove from bloom filter
Logging
Start a bit later, avoid NPEs etc
Logging
Don't remove from filter twice.
Track whether we've successfully migrated everything...
Check for whether the target file exists in ClientGet constructor.
Allow clobbering a zero length file
Logging/errors
Return true ONLY if we actually wrote the block
Support resuming if successful and if normal failure (but not failure to rename)
Remove unused rename failed code. The rename happens in the FileBucket and it will call us with a bucket error if we fail.
Migration for failure case where we failed to rename (this doesn't exist in the new code)
Fix bug in migration
Logging
Delete old database when migrated.
Don't create an empty database
Slightly quieter if no database
Remove defrag support
Don't migrate from encrypted to plaintext or vice versa, and don't create a new database.
Missing files
Fix NPE when migrating downloads to temp space
Fix filename
Get rid of database encrypted flag
Remove alerts for not enough space for auto change crypt
Avoid NPE
Fix hasDatabase()
Remove autoChangeDatabaseEncryption, config option and warnings
Set up keys earlier on, regardless of whether we actually need them (mostly we will)
Checkpoint ASAP after migrating.
Fix bizarre class loading issue
setCheckpointASAP() must actually start a checkpoint if possible. Otherwise we can get long delays.
finishInitStorage should be treated as a PersistentJob
Persist the fields, relying on ClientGetter was a mistake. Also fixes restarting after migrating a download-to-temp-space.
Imports
Maybe support binary blobs in resuming requests
Pass NodeClientCore rather than FCPServer here too
Implement migration for inserts.
Binary blobs
Minor persistence issues in migrating
Turn off exceptions on not storable for migration
Back compatibility hack for InsertContext CompatibilityMode
Fix blocks in segment limit being bogus too.
Create base interface DelayedFree for both DelayedFreeBucket and DelayedFreeRandomAccessThing, write a single group of DelayedFree's.
Split DelayedFreeBucket into DelayedFreeBucket (not random access) and DelayedFreeRandomAccessBucket
Doh
Use the right interface in the constructor
Add a method to convert Bucket's to RandomAccessBucket's, efficiently if possible. Use this here. Should allow recovering from uploads from encrypted temp buckets.
Don't allow it to be freed via the DelayedFreeBucket. This is especially a concern with java auto-closing.
Imports
Locking for DelayedFree*: Avoid calling delayedFree inside lock, but lock when accessing the flags
Pass in readOnly to factory method
Move RAFBucket into src/
Fix wrong bucket type for DelayedFreeRandomAccessBucket (bug in recent code)
Pass in fg and persistentFileTracker to restoreFrom(), just like for RAFs
Persist RAFBucket's
size() should not throw.
Use Metadata.toBucket() instead of doing to bytes and immutable bucket.
Write to Bucket rather than writing to bytes first. May be slightly more disk I/O (because the bucket streams aren't buffered, but they should be), but may save a lot of memory in some cases.
Add method for length of metadata if we don't actually need to keep the bytes
Missed a RandomAccessThing in ef45e43ab57aa875d19dfa13c6fdb797a1125ba4
Use a proper Bucket for the metadata for redirects
Hack to make RAFBucket's RandomAccessBucket's
Use the right factory! Bug might have broken some persistent site inserts.
Doh
Can throw. But at least it can be persistent.
Fix build for ef45e43ab57aa875d19dfa13c6fdb797a1125ba4
Open read-write if we have initial contents to write.
If read-only, don't allow changing length
Make migrateToDisk() return a boolean, and check for repeated migration before migrating, not after.
Still need to call _hasFreed once only
Change the contract of toRandomAccessThing to make it possible to reuse RandomAccessBucket's: The Bucket is still usable, but read-only, and the RAT is also read-only. Freeing one frees the other. Add unit tests.
Fix resolve()
Don't leak buckets
Doh, fix bug in 3b356d7eac0429134be3f6d4d76ac9610e964d2f
Migrate failed and succeeded inserts
Fix progress
Logging / check for not existing data
Make data final
Remove freeData()
Move ManifestElement to freenet/support/api, but keep the old one for migration for now
Implement migration of site inserts
Resume the files to avoid being deleted
Fix PersistentPutDir and migrating from an insert with temp buckets
Logging
Logging / race condition
Add shuttingDown() and logging
We create the file as a marker, so it does exist, we need to not fail on it existing. Might fix weird failures on Windows.
Delete obsolete logging, always try to load at this point
Move RandomAccessBucket to freenet/support/api/ (it's used for inserts, really it's part of the API not the implementation)
Check for available disk space periodically in persistent temp buckets
Call finishInitStorage even if container == null, need to GC persistent-temp/
Fix restarting finished inserts
Buffer streams from Bucket's but add getInputStreamUnbuffered and getOutputStreamUnbuffered to provide unbuffered streams to avoid double buffering wasting memory.
Don't need to buffer here
Imports.
Fix Buffered*Stream for compressors
More stream buffering around decompressor threads etc
More Buffered*Stream stuff
Buffer before reaching the disk space checker
Use the same buffer size for the buffer as for checking for free space
Delete DelayedFree*'s immediately if they haven't been written to disk. If they have been, we must wait until after the next commit.
Doh, critical bug in recent bucket changes
Show an alert while migrating
Logging
Fix bogus "MIME type is null" warnings
Keep the MIME type from the error message if possible
Keep the MIME type from the error
Keep the MIME type even if we fail because we don't like it
Always delete node.db4o* after migrating, even if it fails.
Don't imply client.dat is small.
Wait for the actual write to finish before we delete node.db4o*
Make UPLOAD_FROM into an enum
Move ReturnType to an enum too.
Move ReturnType to ClientGet
Move UploadFrom to ClientPutBase.
Remove some unnecessary code and imports from the migration code
Move persistence to an enum too
Persistence is always forever for migration code
Remove db4o cruft: Don't clone FreenetURI's
Fix using the wrong variable
Make Metadata document types an enum
Move splitfile algorithm to an enum
Fix resuming InsertContext splitfile algo's
Panic support for client.dat*
Javadocs for register()
Wipe the master.keys file too
Fix breaking FCP, regression in 6e9129296e99ac33fa9994fa88112c71fa9c8d68, also get rid of PersistentGet.PersistenceType
Don't write to disk if panicked!
Lie about the number of blocks needed for cross-segment splitfiles, to avoid going over 100%. Also fix a dodgy block count in start()
Fix return value and thus fix test
Fix test
Fix unit tests (regression in commit adding getOutputStreamUnbuffered)
Logging
Logging
Fix splitfile insert cooldown breaking leading to inserts stalling, with test. We were setting a flag when no blocks were available, and clearing it when new blocks were encoded, but we were NOT clearing it when blocks fail and become retryable!
Unnecessary here because failing
Test both code paths.
Fix splitfile insert callbacks being run on the wrong job runner. The fact that the SplitFileInserterSender is not persistent doesn't matter, that's a leftover from before I got rid of db4o (for a while I thought we'd need to have both db4o and the new stuff).
Imports.
Sanity check
Only show the migration alert if we're actually going to migrate
Delete testing cruft
Rename RunningTransient to RunningInserts, no longer transient only
Rename to get rid of incorrect Transient and to get rid of Fetching (an insert isn't fetching!)
Don't need schedCore/schedTransient
Rename removeRequest -> chooseRequest
Javadocs
Remove unnecessary arguments
Javadocs
Merge QueueItem and TransientItem, no longer transient
Explain
Rename as not just transient any more
Fix the merge. I need the merge because I need bcprov 1.51 for the new crypto code.
Add junit 4 to eclipse classpath
Backport to Java 1.6 (and turn off BitSet ops). Two reasons: 1) Debian stable Eclipse doesn't support Java 1.7 syntax. So it's easier for me to backport it. 2) We should release purge-db4o before requiring Java 1.7, as for a large proportion of users the download queue is unstable with Java 1.7, so we want to maximise the probability of it actually migrating successfully to the new code, as well as give them a chance to try it with Java 1.7.
Remove extractSmallerArray (seriously broken, wrap(buf, *, *).array() == buf), and replace with Arrays.copyOfRange.
DO NOT use ByteBuffer.array(). It's only safe *SOMETIMES*; if an API declares that it takes a ByteBuffer it needs to accept any ByteBuffer, including one which has arrayOffset() != 0 and capacity() != array().length. Add Fields.copyToArray() to do what is intended.
Support even ByteBuffer's that don't have a backing array, but untested. :|
Clarify the contract: We never modify the original data. Fix this for RijndaelPCFB case.
Clarify the API for the ancient class PCFBMode: We always encrypt the data in-place, we do NOT copy it and do not allocate a new buffer, so it's best not to return a byte[].
Optimise a bit, we CAN use array() here but only if we're smart about it
Clarify (it can have an IV, but not supported here)
Spelling
512 bits not 512 bytes
Closer to constant time verify for ByteBuffer's. ByteBuffer.equals() is unlikely to be.
Use JUnit 4
Fix bad ordering in unit test
Fix another test. For PCFB or CTR, the 3 encryptions won't be the same.
Fix another unit test - this one WAS caused by my changes.
Fix test
This is still wrong...
Don't let migration block startup. Hence we will allow local requests during migration.
Only remove the alert if we successfully migrate
Paranoid catch / log for some odd uncaught failure in migration, seen in the wild.
Convert NPE to ERROR on calling completedInit() twice
Fix back compatibility of keys: Only fill in the top block data if we have a top size; at the moment we never have a top size, because of a bug. I will add a bug to actually implement top block data.
Don't need to migrate before starting plugins and IP detection
Fix bogus cast
Work around FileInputStream.skip() throwing a java.io.IOException: Invalid argument
Logging
Remove from closables list when locking, ooops!
Expose closeRAF() for tests
Tests for removing from closables when locking
Tidy up slightly. Always remove from closables, avoid closing self and re-opening
Rename and document OPEN_FDs -> totalOpenFDs
Improve sanity check
Load the clients before migrating them, and don't use constraint. Possibly slightly more robust for migrating corrupted queues.
Rename
Write the client.dat after we migrate the global queue, and also after we migrate the per-application queues.
Add logging to test a theory re stalling migration
Maybe fix stalling during migration: Notify when runningJobs = 0, even if we're not going to write straight away. For previous code, waitAndCheckpoint would likely never return if a job was already running.
Don't checkpoint twice
Explain.
Add a unit test for waitAndCheckpoint()
Even more logging (maybe unlock could break somehow?)
Increase runningJobs after successfully started job
Make sure the unlock gets called even if the callback throws
Add COMPAT_1466, 1416 allows bug for bug compatibility with 1465 (specifically don't set topDontCompress and topCompatibilityMode for splitfiles).
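The warning above about ByteBuffer.array() is easy to demonstrate: array() returns the entire backing array and ignores position and arrayOffset, so `wrap(buf, off, len).array() == buf`. A short standalone example (not Freenet's Fields.copyToArray(), just the same idea expressed directly):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ByteBufferPitfall {
    // WRONG in general: returns the whole backing array, ignoring
    // position()/arrayOffset(), and throws for buffers with no array at all.
    static byte[] naiveBytes(ByteBuffer b) {
        return b.array();
    }

    // Correct: copy exactly the remaining bytes, wherever they live in the
    // backing store. duplicate() so the caller's position is not disturbed.
    static byte[] copyRemaining(ByteBuffer b) {
        byte[] out = new byte[b.remaining()];
        b.duplicate().get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] backing = {1, 2, 3, 4, 5};
        // A view of bytes 1..3 of the backing array, logically {2, 3, 4}.
        ByteBuffer view = ByteBuffer.wrap(backing, 1, 3);
        System.out.println(Arrays.toString(naiveBytes(view)));    // the whole backing array
        System.out.println(Arrays.toString(copyRemaining(view))); // just the view's bytes
    }
}
```

Any API that accepts a ByteBuffer has to behave correctly for sliced, offset, and direct (array-less) buffers, which is why the changelog insists on explicit copies.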
Remove nodeDBHandle
Remove obsolete class PluginStoreContainer (we don't keep plugin data in database any more since a while back)
Remove ArchiveExtractTag, not used post db4o
Add some more tests for cancellation
Don't encode if cancelled
Fix synchronization issue - using wrong lock!
Don't log failing with disk error if already failing / race condition
Check for cancelled when starting encode
Check that we don't go and encode everything before returning failure
Check for already closed here too
Could be already succeeded. Sometimes this is harmless, sometimes it indicates we are sending inserts when we're supposed to have finished.
Check for double insert in encodeBlock()
Set closed here too
Check for impossible error (something very odd happening)
Fix metadata back compatibility (include cross check blocks in topRequiredBlocks)
Revert "Check for double insert in encodeBlock()" -> Slightly risky locking wise and not needed ...
Fix finalize() on a TempBucket killing the LockableRAF. Thanks TheSeeker for persistence on this one!
Fix unit tests
Logging
Doh, really fix the CHK not compatible bug: Total blocks was wrong this time.
Handle newer compatibility modes without losing the ability to read the metadata
Indent.
Make inserts use COMPAT_1416 for now (** MUST REVERT IN 1467 **)
Change Metadata to keep topCompatibilityMode as a CompatibilityMode (unknown == 0 so same behaviour). May not be back compatible!
Check for COMPAT_CURRENT
Check for COMPAT_CURRENT here too
Use CompatibilityMode's in CompatibilityAnalyser and SplitfileCompatibilityModeEvent too. Might break serialization but shouldn't break downloads; non-splitfile downloads may restart. Inserts may break.
Accurately detect largish 1416 inserts (because of the bug)
Don't resume returnBucket, ClientGet will do it for us.
Explain CompatibilityMode, and delete obsolete comment.
Explain.
Fix intermittent test failure
Increase default disk space limits
Revert "Make inserts use COMPAT_1416 for now (** MUST REVERT IN 1467 **)"
Centralise choice of default compatibility mode
Set default compatibility mode to COMPAT_1416 until after 1466 has shipped (*** FIXME MUST REVERT ***)
Explain COMPAT_DEFAULT
Include compatibility mode in PersistentPutDir
Logging
More logging
Remove unnecessary indirection
Logging
This is actually an error...
Revert "Revert "Check for double insert in encodeBlock()" -> Slightly risky locking wise and not needed ..."
Locking
Logging/toString
Store the SendableRequestItemKey's directly, we don't need the SendableInsert. Remove getSendableInsert() dependency from SplitFileInserterStorage to SendableInsert
It is essential to only resume once for inserts, or really really bizarre things can happen.
Logging
Fix ConcurrentModificationException
Fix breaking EarlyEncode inserts (single block) in c0f368b8cbf2d8b27a1bb0091a0f2a56d5915ccf
Fix SimpleSendableInsert regression in 3928bb243e01b1076771f9a3a98064258b03ec08
Add wakeup for binary blob inserts, don't try (and fail) to schedule SimpleSendableInsert's if already scheduled.
Explain.
Cleaner API: Return byte[] from encrypt(buf,...)/decrypt(buf,...), which will always copy the input buffer.
Some tests for KeysFetchingLocally implementation
Rename encrypt/decrypt to encryptCopy / decryptCopy. It's essential that this is absolutely clear, especially in the ByteBuffer case.
Add test for encrypting/decrypting direct bytebuffers
Fix abuse of ByteBuffer's
Delete bogus obsolete comments
Throw rather than logging. This is however impossible.
Implement new API, with the basic operation being decrypting or encrypting between two buffers, and add unit tests.
Simplify, and use NIO more correctly
Add encrypt(ByteBuffer, ByteBuffer) and decrypt(ByteBuffer, ByteBuffer).
Add some unit tests for ByteBuffer to ByteBuffer crypto
Document API better and check postconditions
Mark stream ciphers as such
Use update() not doFinal(): A CryptByteBuffer is a single stream, be consistent. (IMPORTANT)
Check number of bytes encrypted
Check here too.
Test encrypting one byte at a time and a random number of bytes at a time
Improve tests
Make encrypt/decrypt "copy safe", i.e. support overlapped input and output
Fix typos
Add FIXME
Fix javadoc
Throw, don't log
Use fixed charset, and throw if not available
Throw rather than logging and causing an NPE
Throw uncaught exceptions in bug cases rather than logging and silently breaking / NPEing / not encrypting
Fix more bogus Logger.error's
Constructors that take a key but not an IV should not auto-generate an IV
Rename variable
Rename genIv and setIv to genIV and setIV
Imports
Remove unnecessary cast and this.
Copy some of the RandomAccessThing infrastructure, including tests, from purge-db4o
Throw an IllegalArgumentException not an IOException for reading from negative offset.
Enable thorough tests for ERAT based on purge-db4o's RandomAccessThingTestBase
Merge LockableRAF and make ERAT implement it (but no store/resume yet).
Fix build for LRAF changes, copy in RAFW changes for LRAF
Add master secret (for tempfile crypto) to master.keys. Separate byte[], will be generated on migration.
Implement store/resume to/from Data*Stream
Add test for RAFW from purge-db4o
Test storeTo() for RAFW
Test persistence via storeTo/restoreFrom
Make RandomAccessFileWrapper serializable
Implement serialization for ERAT (make everything transient except type/underlying/etc, and recreate in onResume())
Add RAFW.hashCode() and RAFW.equals()
Test equals() in restore
Add equals() and hashCode() to ERAT and check for equality
Missing file from purge-db4o
Add MasterKeysTest, and fix MasterSecret persistence.
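The "Use update() not doFinal()" commit above reflects a general JCE fact that CryptByteBuffer relies on (the sketch below uses the JCE directly; CryptByteBuffer's own internals are assumed, not shown): with a stream mode like CTR, update() preserves the cipher's keystream position, so encrypting a message in arbitrary chunks produces exactly the same bytes as encrypting it in one call.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class CtrStreamSketch {
    // Encrypts data with AES/CTR in chunks of the given size, using update()
    // so the cipher treats the whole sequence as one continuous stream.
    static byte[] encryptChunked(byte[] key, byte[] iv, byte[] data, int chunk)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(iv));
        byte[] out = new byte[data.length];
        int written = 0;
        for (int off = 0; off < data.length; off += chunk) {
            int len = Math.min(chunk, data.length - off);
            written += c.update(data, off, len, out, written);
        }
        return out; // for CTR, update() emits output byte-for-byte, so out is full
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16], data = new byte[100];
        Arrays.fill(data, (byte) 0x42);
        byte[] oneShot = encryptChunked(key, iv, data, data.length);
        byte[] byThrees = encryptChunked(key, iv, data, 3);
        System.out.println(Arrays.equals(oneShot, byThrees)); // true
    }
}
```

Calling doFinal() per chunk instead would reset the stream state between chunks, which is exactly the inconsistency the commit message is warning against.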
Fix changing password and add unit tests for it
Remove duplicate writing code
Simplify writing code
New format for master.keys, auto-migrate from old format
Implement iteration for nontrivial passwords.
Make sure we don't go over iteration limit
Longer hash
We want a header not a footer after all. (Breaks one test)
Pass in a flag for whether the file is new or not.
Fix error message
Update test harness
Update from purge-db4o
Update from db4o (need RandomAccessBucket aspect)
Need RAF*
Add new class EncryptedRandomAccessBucket. This uses the same crypto as ERAT, and writes a header, so it can be converted to an ERAT.
Pass in master key to restore methods
Add FIXME, javadoc
Fix ERABucket for get*StreamUnbuffered
ERABucket.onResume()
Use the bitmask, not the name
Persist ERATBucket's
Fix build for 5651e8932b0e1894a170df8b76cecd48e7e28891
Test persistence of ERABucket's via storeTo/restoreFrom
Serialize ERABucket's
Add equals/hashCode to FileBucket, TempFileBucket, ERABucket
Encrypt temp buckets if needed
Synchronization re encryption
Add FileRandomAccessThingFactory API, use it where we need to create a RAF from a File, don't abuse DiskSpaceCheckingRAFF for this purpose.
Move it up to NodeClientCore
Missing lock
Use a MaybeEncryptingRandomAccessThingFactory for persistent temp RAFs (TempBucketFactory already does this for non-persistent)
Call setEncryption()
Pass in transient MasterSecret at construction to TempBucketFactory
Pass in persistent master secret to factories when loading the encrypted persistent layer
Set up the master secret in the common case
Test encrypted buckets for TempBucketFactory.
Add missing file MaybeEncrypting* and rename to MaybeEncrypted*
Run separate tests for tempbucket->RAF with and without crypto
Add FIXME
Add TrivialPaddedRandomAccessThing
Pad persistent encrypted tempfiles
Encrypt and pad oversized RAFs created by TempBucketFactory
Cleanup slightly: TempBucketFactory doesn't need strong RNG
Doh
Don't pad if don't need to
Doh! Trivial but nasty bug in padded RAT
TrivialPaddedRandomAccessBucket
Pad tempfiles created as Bucket's by TBF
For persistent buckets too
Rename TrivialPaddedBucket -> PaddedBucket (it is definitely not trivial), and the others too
Other way around (pad after encrypting)
RandomAccessBucket.createShadow() should return a RandomAccessBucket
Enable splitfile fetches when encrypted tempfiles enabled
Serializable
Tolerate double close
Fix skipping at end, add FIXME probably not needed
Pass in keys earlier
Set up encryption for new RAFs even if config already set
Fix resuming ERAB
Add equals() and hashCode() to PaddedRandomAccessThing
Logging
Better error
Synchronization may be needed in resume check
Legal to resume twice, should be handled specifically when it matters.
Try to catch shutdown hang
Fix tests post merge
Fix names to exclude abstract base test from junit build via ant
Move RandomAccessThing to freenet/support/api (RandomAccessBucket is there already)
Move LockableRandomAccessThing too
Move the factory too
Rename RandomAccessThing -> RandomAccessBuffer, and rename lots of classes accordingly (also fix a couple that had odd names e.g. pooled file)
Update javadocs.
Missed a couple of classes.
Need to explicitly clear keys
Logging
Fix not sending AllData on ReturnType=direct persistent fetches
Imports.
Refactor, create persistent secret earlier
Create the secret for MAXIMUM seclevel
Add disableWrite() which sets started = false, and don't write if !started, even if previously started was true.
When setting MAXIMUM physical seclevel, stop writing, wait for write to finish, and delete client.dat*
Check here too
Rename started -> enableCheckpointing
Separate boolean "loaded"
Don't even try to checkpoint if we're not going to write
Explain.
FIXMEs, comments
Keep MasterKeys object once we've loaded it, don't try to clear keys. Simplifies code, doesn't reduce security (we keep all the secrets elsewhere anyway), and will help with fixing MAXIMUM -> LOW
Create the keys file when going from MAXIMUM to NORMAL or LOW
Don't need container
Fix loading passworded client.dat when node.db4o does not exist.
Check earlier
Better error reporting
If not checkpointing, run jobs immediately
Always write after starting
Fix NodeClientCore.killedDatabase()
If start up at MAXIMUM, delete node.db4o*
Create a salt if MAXIMUM (fixes MAXIMUM seclevel)
Start even if MAXIMUM seclevel
Must clear writing flag
Trivial wrong condition
Clearer
Fix GCing files before starting up with HIGH (passworded) seclevel: Only call finishInitStorage() after decrypting.
Fix not loading client cache (recent regression)
Want to call lateSetupDatabase if databaseAwaitingPassword, regardless of old database, and clear the flag when we have.
Create the database key when creating keys
Clarify and fix changing password when changing mode
Delete bogus condition, avoid creating client cache repeatedly
Logging
Don't finishInitStorage() twice
Fix resuming ContainerInserter's (for more complex site inserts), and document exactly what can be in the maps
Fix NPE
Run off-thread even if compression disabled
Synchronization needed on cancelled, started
Delete now unnecessary indexing params. Some of this was wrong (e.g. new FCPClient not old FCPClient). Keep it as it was.
Logging
Major crypto optimisation when writing small nearby parts of a large ERAB file, as in setting up a big splitfile.
Add FIXME
Make waitAndCheckpoint return if a checkpoint runs on another thread. Which it will if the job runner is busy.
Logging
Do not delete on finalize for TempFileBucket's used internally by TempBucketFactory. Leave that to the TempBucketFactory.
Set mustCheckpoint in waitAndCheckpoint() to speed things up a bit.
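The master.keys commits above ("Implement iteration for nontrivial passwords", "Make sure we don't go over iteration limit", "Longer hash") describe iterated password hashing. This is NOT Freenet's actual scheme, and `MAX_ITERATIONS` here is an assumed cap, just a generic sketch of the idea using the JCE's built-in PBKDF2: derive the file key from password plus salt with an iteration count, bounded above by a limit.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class IteratedKeySketch {
    // Assumed upper bound, standing in for "don't go over iteration limit".
    static final int MAX_ITERATIONS = 1 << 20;

    // Derives a 256-bit ("longer hash") key from password + salt, capping
    // the iteration count so a corrupt or hostile count can't hang startup.
    static byte[] deriveKey(char[] password, byte[] salt, int iterations)
            throws Exception {
        int iters = Math.min(iterations, MAX_ITERATIONS);
        PBEKeySpec spec = new PBEKeySpec(password, salt, iters, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = deriveKey("hunter2".toCharArray(),
                               new byte[]{1, 2, 3, 4}, 10000);
        System.out.println(key.length); // 32 bytes = 256 bits
    }
}
```

The derivation is deterministic for a given password, salt, and count, which is what lets the node re-derive the key at startup and reject a wrong password.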
Document an important API glitch
Persistent BaseFileBucket's must NOT deleteOnFinalize()
Fix free-on-finalize for TempBucketFactory: Keep a reference in TempRAB to the TempBucket, preventing finalize() on the TempBucket unless the TempRAB is also unreachable, enable free on finalize for TempBucket's
Free on finalize in TempRAB's iff they were NOT migrated from TempBucket's
TRACE_BUCKET_LEAKS support for TempRAB's
Remove obsolete FIXME
Remove deleteOnFinalize. This has bad/complex interactions with toRandomAccessBuffer() and isn't used except in the updater where it isn't really needed. TempBucketFactory implements it separately. And we can't use it for persistent buckets.
Pass in Random rather than RandomSource to MasterKeys and DatabaseKey, and pass in the Node's global SecureRandom
Throttle fetches from the datastore in SplitFileFetcher, let notification through every 200ms or 100 blocks.
Rename toRandomAccessThing -> toRandomAccessBuffer, thanks TheSeeker
Rename underlyingThing -> underlyingBuffer
Fix comments/logging
Imports
Logging: Free by finalize is a leak
Another Thing
More Thing's
Rename another Thing
Fix FCP PersistentPutDir not working if encryption enabled for a site insert from temp space
Fix paused priority class
Rename MINIMUM_PRIORITY_CLASS -> PAUSED_PRIORITY_CLASS
Fix paused priority class: Only select requests of *higher* priority than the minimum.
Another bug preventing pause priority from working
Remove unnecessary maxPrio argument
Fix priority handling for site inserts (PutHandlers have the same priority as the parent site insert).
Implement bug #4397 : Fast priority changing. Useful in itself but also needed for next fix.
Reinstate priority on simple mode
Generics
More generics.
Document the tree better
Simpler clearer shorter generics, and get rid of some casts.
Missing file
Rename
Group requests by ClientRequestSchedulerGroup rather than ClientRequester, hence a site insert is a single group (even though PutHandler is ako ClientRequester). Fixes changing priority of site inserts and also makes them compete equally on the global queue with e.g. ordinary inserts.
Fix problems introduced by generics changes (arrays)
Fix NPE
Pass in persistent root and global forever client earlier, fix NPEs so we have reasonable FCP functionality prior to entering password
Missing string
Detect persistence disabled and send appropriate error. Race conditions might be possible (-> NPE) but I don't think it matters.
Copy the right number of bytes even if something breaks. Should fix bug #2444 too.
Better error
Check size so throw early
Fix creating empty tempfiles without a length and then passing into PartiallyReceivedBulk (API change, we enforce length now)
Start the plugins off-thread. Don't let them block the rest of startup. If they are loaded from Freenet, they could take a long time to load, especially if they also block IP detection!
Fix another password-not-loaded FCP NPE
Make status cache final, create it with the PersistentRequestClient if it is the global queue
Simplify
Fix clobbering old logs on startup. if(db != null) is bogus, use if(!clientCore.killedDatabase()), which is the modern equivalent: Have we loaded the client layer from disk?
Can be called twice, this is OK.
Clarify: We are interested in whether the database has been loaded.

TheSeeker (1):
Fix favicon.ico on the /feed/ screen

unixninja92 (30):
Added KeyGenUtils, MessageAuthCode, CryptBitSet, and ERAT plus supporting type enums. Added Tests for ERAT and MasterSecret.
Renamed CryptBitSet to CryptByteBuffer.
Wrote unit tests for MasterSecret
Documented MasterSecret
Wrote unit tests for ERAT and fixed some bugs that were found because of it.
Worked more on ERAT unit tests
Check for fileOffset < 0 in ERAT
Fixed several bugs in ERAT
ERAT now performs successful round trip!
More ERAT tests
More ERAT testing and prepping to document ERAT
Documented ERAT
Added Licence Notices
Finished adding Licence Notices
Documented EncryptedRandomAccessThingType
Documented key derivation in KeyGenUtils
Added more unit tests for ERAT
Added ivSize documentation to KeyType
Added comment explaining that the SuppressWarnings("deprecation") at the top of CryptByteBuffer is for uses of CryptByteBuffer.RijndaelPCFB.
Added hasIv to CryptByteBuffer for better readability.
Fixed -1 that should have been null in CryptByteBufferType
Check for unsupportedType in getIV() in CryptByteBuffer
Fixed comments in CryptByteBuffer
Changed extra catch statements to multi-catch
Moved suggested algorithm/type choice from var to docs
Converted String comparisons from == to .equals.
Removed unused classwide var in ERAT
Cleaned up ERAT
Made isClosed volatile in ERAT and made more vars final.
Made cipherPrams final in ERAT
_______________________________________________
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl