On 2014-01-03 11:14, O. Hartmann wrote:
> On Fri, 3 Jan 2014 14:38:03 -0000
> "Steven Hartland" <kill...@multiplay.co.uk> wrote:
>> ----- Original Message ----- 
>> From: "O. Hartmann" <ohart...@zedat.fu-berlin.de>
>>> For some security reasons, I dumped via "dd" a large file onto a 3TB
>>> disk. The system is 11.0-CURRENT #1 r259667: Fri Dec 20 22:43:56
>>> CET 2013 amd64. Filesystem in question is a single ZFS pool.
>>> Issuing the command
>>> "rm dumpfile.txt"
>>> and then hitting Ctrl-Z to bring the rm command into the background
>>> via "fg" (I use FreeBSD's csh in that console) locks up the entire
>>> command and, even worse, it seems to block the pool in question
>>> from being exported!
>> I can't think of any reason why backgrounding a shell would export a
>> pool.
> I sent the job "rm" into the background, and I didn't say that implies
> an export of the pool!
> I said that the pool cannot be exported once the bg command has been
> issued.
>>> I expect the command to go into the background, as every other UNIX
>>> command does when Ctrl-Z is sent in the console. Obviously, the
>>> ZFS-related stuff in FreeBSD doesn't comply.
>>> The file has been removed from the pool but the console is still
>>> stuck with "^Z fg" (as I typed this in). Process list tells me:
>>> top
>>> 17790 root             1  20    0  8228K  1788K STOP   10   0:05
>>> 0.00% rm
>>> for the particular "rm" command issued.
>> That's not backgrounded yet, otherwise it wouldn't be in the STOP state.
> As I said - the job never backgrounded, locked up the terminal, and
> made the whole pool unresponsive.
>>> Now, having the file deleted, I'd like to export the pool for
>>> further maintenance
>> Are you sure the delete is complete? Also don't forget ZFS has TRIM on
>> by default, so depending on support in the underlying devices you could
>> be seeing deletes occurring.
> Quite sure it didn't! It has taken hours (~8 now) and the drive is
> still working, although I tried to stop it.
>> You can check that with gstat -d
> The command reports 100% activity on the drive. I exported the pool in
> question in single-user mode and am now trying to import it back in
> multiuser mode.
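For anyone following along: on FreeBSD, gstat's -d flag adds the BIO_DELETE (TRIM) columns, so you can see whether TRIM requests are what is keeping the drive at 100%. A sketch (illustrative only; it needs a FreeBSD box with the busy device attached):

```shell
# Show per-device I/O including delete (TRIM) operations, refreshing each
# second. If the delete columns and %busy are high while read/write kBps
# are low, the drive is busy servicing TRIM from the earlier delete.
gstat -d -I 1s
```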
> Shortly after issuing the command
> zpool import POOL00
> the terminal is stuck again, the drive has been working at 100% for two
> hours now, and it seems the great ZFS is deleting every block one at a
> time. Is this supposed to last days or a week?
>>> but that doesn't work with
>>> zpool export -f poolname
>>> This command is now also stuck blocking the terminal and the pool
>>> from further actions.
>> If the delete hasn't completed and is stuck in the kernel, this is
>> to be expected.
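Before reaching for a forced export, it can help to confirm the pool is actually still busy freeing blocks. A sketch, using the POOL00 name from this thread (illustrative only; it needs the pool imported):

```shell
# Is the pool still working? Errors and in-progress operations show here.
zpool status -v POOL00
# Per-vdev I/O statistics, refreshed every second; sustained write/free
# traffic means the deletes are still being pushed down to the disk.
zpool iostat -v POOL00 1
```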
> At this moment I don't want to imagine what will happen if I have to
> delete several tens of terabytes. If the weird behaviour of the current
> system can be extrapolated, then this is a no-go.
>>> This is painful. Last time I faced this problem, I had to reboot
>>> before taking any action on any pool in the system, since one
>>> single ZFS command could obviously block the whole subsystem (I
>>> tried to export and import).
>>> What is up here?
>>     Regards
>>     Steve
> Regards,
> Oliver

Deleting large amounts of data with 'rm' is slow. When destroying a
dataset, ZFS grew a feature flag, async_destroy, that lets this happen in
the background and avoids a lot of these issues. An async_delete for
individual files might be something to consider some day.
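To make the distinction concrete: async_destroy covers 'zfs destroy' of a whole dataset, not 'rm' of a file inside one. A sketch of checking for and observing it, again using the POOL00 name from the thread (illustrative only; the dataset name in the comment is hypothetical):

```shell
# Is the async_destroy feature enabled on this pool?
zpool get feature@async_destroy POOL00
# With it enabled, 'zfs destroy POOL00/somedataset' returns quickly and
# the space is reclaimed in the background; the remaining backlog is
# visible as the pool's "freeing" property:
zpool get freeing POOL00
```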

Allan Jude
