Editing the DB and manually fixing all the errors in the UI doesn't seem
like a long-term solution.
Is there a way to do this through the API? If not, where should I start
working to add it in?
On Wed, Jan 2, 2013 at 7:41 AM, Nate Coraor <n...@bx.psu.edu> wrote:
> On Dec 31, 2012, at 8:18 PM, Kyle Ellrott wrote:
> > I'm currently adding a large number of files into my Galaxy instance's
> dataset library. During the import some of the files (a small percentage)
> failed with:
> > /inside/depot4/galaxy/set_metadata.sh: line 4: 14790 Segmentation fault
> (core dumped) python ./scripts/set_metadata.py $@
> > I think it's probably standard cluster shenanigans, and may work just
> fine if run again. But there doesn't seem to be a way to retry. Is there a way
> to deal with this that is easier than manually deleting and re-uploading
> the offending files?
> Hi Kyle,
> Unfortunately, there's not going to be a way to do this entirely in the
> UI. Your best shot is to change the state of the datasets in the database
> from 'error' to 'ok' and then try using the metadata auto-detect button in
> the UI.
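> For illustration, the state change described above amounts to a SQL UPDATE on
> the dataset table. The sketch below is a minimal stand-in using an in-memory
> SQLite database with an assumed `dataset(id, state)` schema; a real Galaxy
> instance typically runs PostgreSQL or MySQL, and the actual table layout may
> differ, so check your schema before running anything against a live database.

```python
import sqlite3

# Stand-in for the Galaxy database; the real schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dataset (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany(
    "INSERT INTO dataset (id, state) VALUES (?, ?)",
    [(1, "ok"), (2, "error"), (3, "error")],
)

# Flip failed datasets from 'error' to 'ok' so the metadata
# auto-detect button in the UI can be used to rerun them.
cur = conn.execute("UPDATE dataset SET state = 'ok' WHERE state = 'error'")
print(cur.rowcount)  # number of datasets whose state was changed
```

> On a live instance you would scope the UPDATE to the specific dataset ids
> that failed rather than every row in the 'error' state.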
> > Kyle
> > ___________________________________________________________
> > Please keep all replies on the list by using "reply all"
> > in your mail client. To manage your subscriptions to this
> > and other Galaxy lists, please use the interface at:
> > http://lists.bx.psu.edu/