The cleanup scripts enforce a sort of "lifetime" for the datasets.

The first time they're run, they may mark a dataset as deleted and reset
its update time; you then have to wait N days for the next stage of the
lifetime.

The next time they're run (or if a dataset has already been marked as
deleted), the actual file removal happens and purged is set to true (if it
wasn't already).

You can manually pass '-d 0' to force removal of datasets that were only
recently marked as deleted.
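A minimal sketch of that two-stage lifetime, assuming a dict-based dataset record with hypothetical field names (`deleted`, `purged`, `update_time`) rather than Galaxy's actual model code:

```python
from datetime import datetime, timedelta

def cleanup_pass(dataset, days):
    """One pass of the hypothetical two-stage cleanup.

    Stage 1: an eligible dataset is marked deleted and its
    update_time is reset, starting the N-day countdown.
    Stage 2: once N days have elapsed since that mark (or
    immediately when days=0, like passing '-d 0'), the actual
    file removal would happen and purged is set to True.
    """
    now = datetime.now()
    cutoff = now - timedelta(days=days)
    if not dataset["deleted"]:
        # First pass: mark deleted and reset the clock.
        dataset["deleted"] = True
        dataset["update_time"] = now
    elif dataset["update_time"] <= cutoff:
        # Later pass, N days after the mark: remove files, set purged.
        dataset["purged"] = True
    return dataset
```

With `days=0` the second pass purges immediately, which matches the '-d 0' behaviour described above; with the default N it takes two runs separated by N days.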

The purge scripts do not check 'allow_user_dataset_purge', of course.

On Tue, Mar 18, 2014 at 11:50 AM, Carl Eberhard wrote:

> I believe it's a (BAD) silent failure mode in the server code.
> If I understand correctly, the purge request isn't coughing an error when
> it gets to the 'allow_user_dataset_purge' check and instead is silently
> marking (or re-marking) the datasets as deleted.
> I would rather it fail with a 403 error if purge is explicitly requested.
> That said, it would of course be better to remove the purge option
> based on the configuration than to show an error after we've found you
> can't do the operation. The same holds true for the 'permanently remove
> this dataset' link in deleted datasets.
> I'll see if I can find out the answer to your question on the cleanup
> scripts.
> On Tue, Mar 18, 2014 at 10:49 AM, Peter Cock wrote:
>> On Tue, Mar 18, 2014 at 2:14 PM, Carl Eberhard wrote:
>> > Thanks, Ravi & Peter
>> >
>> > I've added a card to get the allow_user_dataset_purge options into the
>> > client and to better show the viable options to the user:
>> >
>> Thanks Carl - so this was a user interface bug, showing the user
>> non-functional permanent delete (purge) options. That's clearer now.
>> In this situation can the user just 'delete', and wait N days for
>> the cleanup scripts to actually purge the files and free the space?
>> (It seems N=10 in scripts/cleanup/purge_*.sh at least, while elsewhere,
>> such as in the underlying Python script, the default looks like N=60.)
>> Regards,
>> Peter