Janek Warchoł <janek.lilyp...@gmail.com> writes:

> On Fri, Nov 9, 2012 at 10:19 PM, Thomas Morley
> <thomasmorle...@googlemail.com> wrote:
>> 2012/11/9 Janek Warchoł <janek.lilyp...@gmail.com>:
>>> On Thu, Nov 8, 2012 at 9:21 AM, David Kastrup <d...@gnu.org> wrote:
>> [...]
>>> Sorry for being grumpy - I'm frustrated with some other things, for
>>> example the fact that all my data got deleted because of a script bug,
>>> and the fact that the behaviour of 'rm' makes it very difficult to
>>> recover anything.
>>
>> OUCH!!
>>
>> No idea what to do, but I wish the best.
>
> I had a two-week-old backup, and I managed to recover some files
> created after the backup (unfortunately, it seems that SSD drives
> fragment the data heavily, which makes recovery difficult).
>
> What really annoys me is the fact that rm has no reasonable safeguard.
> Using rm -i (which prompts before every removal) is really not an
> option, especially for use in scripts - it's too annoying to be asked
> for confirmation every time.  What I'd like to see is a safeguard only
> against deleting too many files at once, for example:
> - ask for confirmation when attempting to delete more than 10000 files
> - ask for confirmation when attempting to delete more than 10 GB of data
> or, even better, make those numbers customizable.  How come no one has
> had this idea before?  Maybe it's worth suggesting to the people
> responsible for coreutils?

rm is not a file manager.  I do larger renaming/removal workloads using
Emacs (hardly surprising), but there are also other file managers.

Some file systems retain deleted data until the space is actually
required.  btrfs, perhaps?
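
That said, the safeguard you describe is easy enough to prototype as a
wrapper around rm.  A minimal, untested sketch - the thresholds are
made up, and it naively assumes plain file/directory arguments, so rm's
own flags like -r or -f would have to be filtered out first:

  #!/bin/sh
  # safe-rm: ask for confirmation before unusually large deletions.
  MAX_FILES=10000
  MAX_KB=$((10 * 1024 * 1024))    # 10 GB, in kilobytes

  # Count the files and the total size of what is about to go.
  nfiles=$(find "$@" -type f 2>/dev/null | wc -l)
  kbytes=$(du -skc "$@" 2>/dev/null | awk 'END { print $1 }')

  if [ "$nfiles" -gt "$MAX_FILES" ] || [ "$kbytes" -gt "$MAX_KB" ]; then
      printf 'About to remove %s files, %s KB.  Continue? [y/N] ' \
          "$nfiles" "$kbytes" >&2
      read answer || exit 1
      [ "$answer" = y ] || exit 1
  fi

  # Hand off to the real rm.
  exec rm "$@"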

> On Fri, Nov 9, 2012 at 10:31 PM, David Kastrup <d...@gnu.org> wrote:
>>
>>> But nevertheless, thanks for your explanations; I do appreciate them.
>>> Sorry for being grumpy - I'm frustrated with some other things, for
>>> example the fact that all my data got deleted because of a script bug,
>>
>> Checking into git occasionally makes things easier.
>
> I was thinking about keeping all my files in a git repository, but
> that's ~10 GB of data, and a lot of it is in binary (I mean,
> non-diffable) form.  Do you think it would make sense to use git for
> that?

It's pretty efficient for storing even binary blobs.
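
For a first cut, a plain repository would do; something along these
lines (the directory name is hypothetical):

  cd ~/data
  git init
  git add -A
  git commit -m "snapshot $(date +%F)"

Git stores blobs zlib-compressed and deduplicated by content, so
repeated snapshots of mostly unchanging data should not cost much more
than the data itself.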

>
>>> and the fact that the behaviour of 'rm' makes it very difficult to
>>> recover anything.
>>
>> touch ./-i
>>
>> is a trick from old times that helps against things like writing
>>
>> rm * .o
>
> Umm, what does it do?  I don't see -i among the available options of
> 'touch',

-i isn't an option of touch.  It is an option of rm.  The touch command
places a file named -i in the directory.  At least with POSIX sort
order, this is bound to come rather early in a directory listing, so if
you have files a, b, c in the directory,

rm * .o

expands into

rm -i a b c .o

It does not help much if your locale uses a sort order in which the
leading - gets ignored, obviously.
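
For illustration, a quick session (file names made up; the exact prompt
wording depends on the rm version, and this assumes a POSIX sort order):

  $ touch a b c ./-i
  $ rm * .o
  rm: remove regular empty file 'a'?

Answer n (or hit Ctrl-C) and nothing gets deleted.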

-- 
David Kastrup
