I believe it's a (BAD) silent failure mode in the server code.

If I understand correctly, the purge request isn't raising an error when
it hits the 'allow_user_dataset_purge' check; instead it's silently
marking (or re-marking) the datasets as deleted.

I would rather it fail with a 403 error if purge is explicitly requested.
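Concretely, the behavior I'd expect is something like the following. This is
only a minimal Python sketch with made-up names (ForbiddenError,
delete_dataset, the config dict), not Galaxy's actual controller code; the
point is just that an explicit purge request against a server with
allow_user_dataset_purge disabled should error rather than quietly degrade
to a plain delete:

```python
class ForbiddenError(Exception):
    """Hypothetical error type that would map to an HTTP 403 response."""
    def __init__(self, message):
        super().__init__(message)
        self.status_code = 403

def delete_dataset(dataset, config, purge=False):
    """Mark a dataset deleted; purge only if the server config allows it.

    dataset: a dict standing in for the dataset record
    config:  a dict standing in for the server configuration
    """
    if purge and not config.get("allow_user_dataset_purge", False):
        # Explicit purge request + purge disabled -> fail loudly,
        # instead of silently treating it as an ordinary delete.
        raise ForbiddenError("Purging datasets is disabled on this server")
    dataset["deleted"] = True
    if purge:
        dataset["purged"] = True
    return dataset
```

With that shape, a plain delete still succeeds when purging is disabled, but
a purge attempt surfaces a 403 the client can show to the user.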

That said, it would of course be better to hide the purge operation based
on the configuration than to show an error after we've found you can't do
the operation. The same holds true for the 'permanently remove this
dataset' link on deleted datasets.

I'll see if I can find out the answer to your question on the cleanup
scripts.


On Tue, Mar 18, 2014 at 10:49 AM, Peter Cock <p.j.a.c...@googlemail.com> wrote:

> On Tue, Mar 18, 2014 at 2:14 PM, Carl Eberhard <carlfeberh...@gmail.com>
> wrote:
> > Thanks, Ravi & Peter
> >
> > I've added a card to get the allow_user_dataset_purge options into the
> > client and to better show the viable options to the user:
> > https://trello.com/c/RCPZ9zMF
>
> Thanks Carl - so this was a user interface bug, showing the user
> non-functional permanent delete (purge) options. That's clearer now.
>
> In this situation can the user just 'delete', and wait N days for
> the cleanup scripts to actually purge the files and free the space?
> (It seems N=10 in scripts/cleanup/purge_*.sh at least, elsewhere
> like the underlying Python script the default looks like N=60).
>
> Regards,
>
> Peter
>
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/
