Re: [galaxy-dev] trackster is not working on the vrelease_2014.02.10--2--29ce93a13ac7
Hi Jeremy,

After checking, the two js scripts are absent from the release:
- backbone-relational.js (static/scripts/packed/libs/backbone/)
- galaxy.utils.js (static/scripts/packed/utils/)

bw
C

On 25 Mar 2014, at 16:16, Shu-Yi Su wrote:

Hi Jeremy,

Thank you very much for the reply. Yes, we are running on galaxy-dist, and manually pulled to update our installation. The release version is vrelease_2014.02.10--2--29ce93a13ac7. I have tried Safari and Firefox; neither works. Here is the error message from the console:

[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (backbone-relational.js, line 0)
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (galaxy.utils.js, line 0)
[Error] Error: Script error for: libs/backbone/backbone-relational http://requirejs.org/docs/errors.html#scripterror
  defaultOnError (require.js, line 1)
  onError (require.js, line 1)
  onScriptError (require.js, line 1)
[Error] Error: Script error for: utils/galaxy.utils http://requirejs.org/docs/errors.html#scripterror
  defaultOnError (require.js, line 1)
  onError (require.js, line 1)
  onScriptError (require.js, line 1)

We are also wondering if there is anything we didn't set up properly in our universe_wsgi.ini file. Thank you.

Best,
Shu-Yi

On Mar 25, 2014, at 4:05 PM, Jeremy Goecks wrote:

Providing some additional information will help diagnose the problem:
* Are you running galaxy-dist? If so, have you manually pulled and applied commits from galaxy-central? If so, which ones?
* Which web browser are you using?
* Can you please open the JavaScript console in your browser and provide any errors that you see?

Thanks,
J.

--
Jeremy Goecks
Assistant Professor of Computational Biology
George Washington University

On Mar 24, 2014, at 11:38 AM, Shu-Yi Su shu-yi...@embl.de wrote:

Hi all,

We have recently updated our local Galaxy installation to vrelease_2014.02.10--2--29ce93a13ac7 (database version is 118).
I found that Trackster is not working. I have checked the latest commits related to Trackster bugs, so I have updated these files:
- ./static/scripts/viz/trackster.js (commit date: 2014-02-28)
- ./static/scripts/viz/trackster_ui.js (commit date: 2014-02-28)
- ./static/scripts/viz/trackster/tracks.js (commit date: 2014-03-16)
- ./static/scripts/utils/utils.js (commit date: 2014-03-19)
- ./static/scripts/utils/config.js (commit date: 2014-03-15)

But it is still not working. I have tried different formats...bam, bed, sam...all are not working. I looked into all possible files I can think of that might cause the problem but still don't have any clues. I also looked into the log; there is no error, but I noticed a difference from the log of the previous installation, where Trackster was working.

The log where Trackster is not working:
1. GET /galaxy-dev/visualization/trackster?dataset_id=f04ecb1d3d259245hda_ldda=hda

The log where Trackster is working (from the old installation):
1. GET /galaxy/visualization/trackster?dataset_id=5d38b380cba2f4a6hda_ldda=hda......
2. GET /galaxy/api/datasets/5d38b380cba2f4a6?hda_ldda=hdadata_type=converted_datasets_state....
3. GET /galaxy/api/datasets/5d38b380cba2f4a6?data_type=datachrom=chr2L.....

It looks like Trackster in the new version of Galaxy does not execute the second and third steps. Here is the screenshot from when I clicked "View in new visualization" (nothing shows up; the page is blank).

Any ideas or suggestions are appreciated. Thank you very much.

Best,
Shu-Yi

PastedGraphic-1.tiff

___
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/ To search Galaxy mailing lists use the unified search at: http://galaxyproject.org/search/mailinglists/
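Since the 404s in this thread point at files missing from static/scripts/packed, one quick way to check a galaxy-dist checkout for other unpacked scripts is to diff the two trees. A minimal sketch, not part of Galaxy itself; the helper name and the assumption that every .js under static/scripts should have a packed counterpart are mine:

```python
import os

def missing_packed_scripts(scripts_dir, packed_dir):
    """Return paths (relative to scripts_dir) of .js files that have no
    counterpart under packed_dir."""
    missing = []
    for root, _dirs, files in os.walk(scripts_dir):
        # skip the packed tree itself, since it may live inside scripts_dir
        if os.path.abspath(root).startswith(os.path.abspath(packed_dir)):
            continue
        for name in files:
            if not name.endswith(".js"):
                continue
            rel = os.path.relpath(os.path.join(root, name), scripts_dir)
            if not os.path.exists(os.path.join(packed_dir, rel)):
                missing.append(rel)
    return sorted(missing)
```

Run from the Galaxy root as, e.g., `missing_packed_scripts("static/scripts", "static/scripts/packed")`; in the situation above it should list at least utils/galaxy.utils.js and libs/backbone/backbone-relational.js.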
[galaxy-dev] Dataset's extra files
Hi all,

We have a local tool whose role is to transfer (i.e. copy) a dataset file to a directory on our NFS. This is extremely convenient as it can be included within workflows and therefore saves the time of clicking the download button (we also have configurable renaming/compression as part of it). It is heavily used by our users.

The problem is with datasets that have associated files, like FastQC output, as these extra files are simply ignored... We'd like to improve our 'NFS_transfer' tool so it can deal with this in a similar fashion to the download button.

Foreseen solution:
* Check if a directory named 'dataset_id_files' exists within the dataset store
* If so, 'cp -r' it into a tmp dir, and cp the dataset itself into the same tmp dir (with renaming on the fly)
* zip/tar.gz the tmp dir
* Copy it to the final NFS location

Question is: is this the right way to do it? As a non-Python specialist, it is a little tricky to find the right way to do it (I can't locate the piece of code that does this in Galaxy, i.e. behind the download button). In particular, can I get the list of extra files using the '$galaxyFile' object given to the tool by:

param type=data name=galaxyFile label=File to transfer/

i.e. in the same way we get the dataset name or file extension ($galaxyFile.dataset.name and $galaxyFile.ext)?

Any advice on how best to implement this, in a portable way, is very appreciated.

Thanks for your time,
Charles

=
Charles Girardot
Head of Genome Biology Computational Support (GBCS)
European Molecular Biology Laboratory
Tel: +49 6221 387-8585 Fax: +49-(0)6221-387-8166
Email: charles.girar...@embl.de
Room V205 Meyerhofstraße 1, 69117 Heidelberg, Germany
=
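The four foreseen steps above could be sketched in plain Python along these lines. This is only an illustration, not Galaxy's download code: the function and argument names are hypothetical, and it uses only the standard library. (Inside a Cheetah tool template, the extra-files directory of a data param is typically reachable as its extra_files_path attribute, but verify on your Galaxy version.)

```python
import os
import shutil
import tempfile

def package_dataset(dataset_path, extra_files_dir, dest_dir, rename_to=None):
    """Bundle a dataset file and its extra-files directory (if any) into a
    .tar.gz placed in dest_dir, mimicking the download-button behaviour.
    Returns the path of the created archive."""
    name = rename_to or os.path.basename(dataset_path)
    tmp = tempfile.mkdtemp()
    try:
        # step 2: copy dataset (renamed on the fly) and extra files into a tmp dir
        bundle = os.path.join(tmp, name)
        os.makedirs(bundle)
        shutil.copy(dataset_path, os.path.join(bundle, name))
        # step 1: only include the extra-files dir if it actually exists
        if extra_files_dir and os.path.isdir(extra_files_dir):
            shutil.copytree(extra_files_dir,
                            os.path.join(bundle, os.path.basename(extra_files_dir)))
        # step 3: tar.gz the tmp dir; step 4 (copy to NFS) is just the dest_dir
        return shutil.make_archive(os.path.join(dest_dir, name), "gztar", tmp)
    finally:
        shutil.rmtree(tmp)
```

shutil.make_archive is portable across platforms, which addresses the "portable way" concern; pointing dest_dir straight at the NFS mount avoids a second copy step.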
Re: [galaxy-dev] Cluster jobs running as real user
Hi Shane,

We had enabled this feature on our cluster working with PBS, but since we switched to LSF we have never managed to get it working again. Same error as yours, if I remember correctly. After hours spent looking into LSF, I think we understood that LSF somehow catches the user change and rejects the job. We finally gave up. I am not sure this helps...

bw
Charles

On 5 Mar 2014, at 02:55, Shane Sturrock wrote:

Has anyone had any success in getting cluster jobs running as the real user? I've got our server happily working with LSF via drmaa, but all jobs run as the galaxy user and I would like to be able to separate out the jobs on a per-user basis. I've followed the instructions at https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#Submitting_Jobs_as_the_Real_User but I'm having no luck with it, so I was wondering if someone would know how I might debug it, like where I could find the script it is trying to execute. All I'm getting is "code 18: invalid LSF job id", although it does seem to be creating files in my databases/tmp directory, owned by the user.

Shane

--
Dr Shane Sturrock shane.sturr...@biomatters.com
Senior Scientist
Tel: +64 (0) 9 379 5064
76 Anzac Ave Auckland New Zealand

=
Charles Girardot
Head of Genome Biology Computational Support (GBCS)
European Molecular Biology Laboratory
Tel: +49 6221 387-8585 Fax: +49-(0)6221-387-8166
Email: charles.girar...@embl.de
Room V205 Meyerhofstraße 1, 69117 Heidelberg, Germany
=
[galaxy-dev] Renaming in workflow - bis
Dear all,

I am looking into options for renaming output files automatically in workflows, and I found this thread: http://dev.list.galaxyproject.org/Renaming-in-Workflows-td4656426.html#a4656430

I managed to use the following in a test workflow:
- ${} to create an input variable to the workflow
- #{} to reuse the tool input(s)

In the post indicated earlier, I read that one can even manipulate the input string with basename, upper and lower, e.g. #{input | basename}.

All this is really awesome, and my questions are:
- Are there other ways to fetch existing variables, e.g. env variables, than ${} or #{}?
- What other variables can be fetched, e.g. maybe this 'on_string'?
- Are there string functions other than basename, upper and lower that can be used? Is there a list?

Thx a lot
Charles
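For readers wondering how the #{input | basename} pipe syntax behaves, here is a rough re-implementation in Python. This is a guess at the semantics rather than Galaxy's actual code; in particular, the assumption that basename strips the final file extension is mine:

```python
import re

# Hypothetical re-implementation of the #{ input | filter | ... } renaming
# filters mentioned in the thread (NOT Galaxy's own code).
FILTERS = {
    "basename": lambda s: re.sub(r"\.[^.]*$", "", s),  # assumed: drop last extension
    "upper": str.upper,
    "lower": str.lower,
}

def apply_filters(value, spec):
    """spec looks like 'input | basename | upper': the first token names the
    variable, the rest are filters applied left to right to its value."""
    parts = [p.strip() for p in spec.split("|")]
    for filter_name in parts[1:]:
        value = FILTERS[filter_name](value)
    return value
```

So with an input named sample.fastq, "input | basename" would yield sample and "input | basename | upper" would yield SAMPLE, under the assumptions above.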
Re: [galaxy-dev] Can't view file_name in histories via API unless admin?
Hi Neil,

Sorry, this is not an answer to your post; I hope you won't mind me stepping into your thread this way. Your message kept my attention because of your note: I am surprised by the error message you report when trying to use an admin API key. How does Galaxy know the user who is making the call? Sorry if I am missing the obvious.

bw
Charles

On 17 Jan 2014, at 07:35, neil.burd...@csiro.au wrote:

Hi, it seems that the entry "file_name:" does not appear when running the command

/home/galaxy/milxcloud/scripts/api/display.py api_key http://barium-rbh:9100/extras/api/histories/ebfb8f50c6abde6d/contents/4a56addbcc836c23

unless you are stated as an admin user in the universe_wsgi.ini, i.e. admin_users = t...@test.com,te...@test.com. Is this known? Is there any way to get around this? We don't want all users to be admin; however, they need access to this field. Note that you can't use an admin's api_key, as you'll get the error "Error in history API at listing dataset: History is not owned by the current user".

Thanks
Neil
[galaxy-dev] running parallel workflow fails
' wikidb_ngs_UserID='15' wikidb_ngs_UserName='Girardot' wikidb_ngs__session='d9n3qq68n0nl6p6j9t96k5q6m3', 'galaxysession=7df64c2bf0628c5b57afd358adee6e7b2cd6161345bb6b47388e497cbd26f407876c2bed128d9226;wikidb_ngs_Token=1f38fe6b5eb5863596cad189f608835c; wikidb_ngs_UserName=Girardot; wikidb_ngs_UserID=15; wikidb_ngs__session=d9n3qq68n0nl6p6j9t96k5q6m3')
paste.expected_exceptions [class 'paste.httpexceptions.HTTPException']
paste.httpexceptions paste.httpexceptions.HTTPExceptionHandler object at 0x22928b10
paste.httpserver.thread_pool paste.httpserver.ThreadPool object at 0x1ddf9950
paste.printdebug_listeners [cStringIO.StringO object at 0x28c57618, paste.script.serve.LazyWriter object at 0x1dd5a6d0]
paste.recursive.forward paste.recursive.Forwarder from /galaxy
paste.recursive.include paste.recursive.Includer from /galaxy
paste.recursive.include_app_iter paste.recursive.IncluderAppIter from /galaxy
paste.recursive.script_name '/galaxy'
paste.remove_printdebug function remove_printdebug at 0x2c032aa0
paste.throw_errors True
webob._parsed_query_vars (MultiDict([]), '')
wsgi process 'Multithreaded'

=
Charles Girardot
European Molecular Biology Laboratory
GBCS / Furlong Group
Tel: +49 6221 387-8585 (V205) or -8433 (V320)
Fax: +49-(0)6221-387-8166
Email: charles.girar...@embl.de
Room V205 (GBCS)/V320 (Furlong Group)
Meyerhofstraße 1, 69117 Heidelberg, Germany
=
Re: [galaxy-dev] running parallel workflow fails
Hi,

Posted this too quickly: I just realized that the jobs are actually created, although I am getting this error.

bw
C

On 16 Jan 2013, at 13:55, Charles Girardot wrote:

Hi,

I have a workflow that I can successfully launch on a single FASTQ (the only input of the workflow). I am now trying to start the workflow on 8 different FASTQ files (after demultiplexing) but systematically get this error (local Galaxy install). Thx for your help.

Charles

URL: http://manni/galaxy/workflow/run
Module paste.exceptions.errormiddleware:143 in __call__
  app_iter = self.application(environ, start_response)
Module paste.debug.prints:98 in __call__
  environ, self.app)
Module paste.wsgilib:539 in intercept_output
  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.middleware.remoteuser:91 in __call__
  return self.app( environ, start_response )
Module paste.httpexceptions:632 in __call__
  return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__
  body = method( trans, **kwargs )
TypeError: run() takes at least 3 arguments (2 given)

## below is a copy-paste of all variables (if useful):
CONTENT_LENGTH '0'
HTTP_ACCEPT 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
HTTP_ACCEPT_ENCODING 'gzip, deflate'
HTTP_ACCEPT_LANGUAGE 'en-us'
HTTP_AUTHORIZATION 'Basic Z2lyYXJkb3Q6Y3QwMmhlODg='
HTTP_CONNECTION 'Keep-Alive'
HTTP_COOKIE 'galaxysession=7df64c2bf0628c5b57afd358adee6e7b2cd6161345bb6b47388e497cbd26f407876c2bed128d9226; wikidb_ngs_Token=1f38fe6b5eb5863596cad189f608835c; wikidb_ngs_UserName=Girardot; wikidb_ngs_UserID=15; wikidb_ngs__session=d9n3qq68n0nl6p6j9t96k5q6m3'
HTTP_HOST 'manni'
HTTP_REFERER 'http://manni/galaxy/workflow/run?id=52d6640a458dfabe'
HTTP_REMOTE_USER 'girar...@embl.de'
HTTP_USER_AGENT 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.57.2 (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2'
PATH_INFO '/workflow/run'
REMOTE_ADDR '10.11.72.108'
REQUEST_METHOD 'GET'
SCRIPT_NAME '/galaxy'
SERVER_NAME 'manni'
SERVER_PORT '8080'
SERVER_PROTOCOL 'HTTP/1.1'

Configuration
__file__ '/g/funcgen/galaxy/universe_wsgi.combined.ini'
admin_users 's...@embl.de,sa...@embl.de,girar...@embl.de,gal...@embl.de'
allow_library_path_paste 'True'
allow_user_dataset_purge 'True'
allow_user_impersonation 'True'
api_allow_run_as 'gal...@embl.de'
cleanup_job 'always'
cluster_files_directory '/g/galaxy/galaxy_data/pbs'
collect_outputs_from 'new_file_path,job_working_directory'
cookie_path '/galaxy'
database_connection 'postgres://galaxy:galaxy@manni/galaxy_db'
database_engine_option_max_overflow '20'
database_engine_option_pool_size '10'
database_engine_option_server_side_cursors 'True'
database_engine_option_strategy 'threadlocal'
debug 'True'
default_cluster_job_runner 'drmaa://-q gbcs_q -N galaxy_stdjob -l ncpus=1,mem=4gb/'
drmaa_external_killjob_script 'scripts/drmaa_external_killer.py'
drmaa_external_runjob_script 'scripts/drmaa_external_runner.py'
enable_api 'True'
enable_job_recovery 'True'
enable_quotas 'True'
enable_tracks 'False'
environment_setup_file '/g/funcgen/galaxy/env_setup'
error_email_to 'g...@embl.de'
external_chown_script 'scripts/external_chown_script.py'
file_path '/g/galaxy/galaxy_data/files'
here '/g/funcgen/galaxy'
id_secret 'all_your_base_are_belong_to_us'
job_handlers 'runner0'
job_manager 'runner0'
job_working_directory '/g/galaxy/galaxy_data/job_working_directory'
library_import_dir '/home/galaxy/'
log_level 'DEBUG'
logo_url 'http://manni/ngswiki/index.php/Galaxy'
new_file_path '/g/galaxy/galaxy_data/tmp'
nglims_config_file 'tool-data/nglims.yaml'
outputs_to_working_directory 'True'
qa_url 'http://slyfox.bx.psu.edu:8080/'
remote_user_maildomain 'embl.de'
require_login 'True'
retry_job_output_collection '10'
sanitize_all_html 'False'
set_metadata_externally 'True'
smtp_server 'smtp.embl.de'
start_job_runners 'drmaa'
static_cache_time '360'
static_dir '/g/funcgen/galaxy/static/'
static_enabled 'True'
static_favicon_dir '/g/funcgen/galaxy/static/favicon.ico'
static_images_dir '/g/funcgen/galaxy/static/images'
static_robots_txt '/g/funcgen/galaxy/static/robots.txt'
static_scripts_dir '/g/funcgen/galaxy/static/scripts/'
static_style_dir '/g/funcgen/galaxy/static/june_2007_style/blue'
tool_config_file 'tool_conf.xml,shed_tool_conf.xml'
tool_dependency_dir '/g/funcgen/galaxy/dependencies'
tool_path 'tools'
track_jobs_in_database 'True'
use_interactive 'False'
use_nglims 'False'
use_remote_user 'True'

WSGI Variables
application
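The TypeError at the bottom of the traceback ("run() takes at least 3 arguments (2 given)") is Python's way of saying the controller method was invoked with fewer positional arguments than its signature requires. A minimal reconstruction follows; the controller signature below is hypothetical (the real method may differ), but the dispatch line mirrors the traceback's body = method( trans, **kwargs ):

```python
# Hypothetical reconstruction of the failure mode -- NOT the real Galaxy code.
class WorkflowController:
    def run(self, trans, id, **kwd):  # 'id' is a required positional parameter
        return "running workflow %s" % id

def dispatch(method, trans, **kwargs):
    # mirrors galaxy.web.framework.base: body = method( trans, **kwargs )
    return method(trans, **kwargs)

ctl = WorkflowController()
print(dispatch(ctl.run, "trans", id="52d6640a458dfabe"))  # ok: 'id' supplied
try:
    dispatch(ctl.run, "trans")  # request carried no 'id' keyword
except TypeError as e:
    # Python 2 phrased this as: run() takes at least 3 arguments (2 given)
    # -- the "2 given" being the bound self plus trans. Python 3 words it
    # as a missing required positional argument instead.
    print("TypeError:", e)
```

This matches the symptom: a GET to /workflow/run that does not pass the argument the new run() signature requires fails before any page is rendered, even though jobs created by an earlier POST may still exist.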
[galaxy-dev] Problem transitioning to LSF 7 Update 6
Hi all,

We are currently changing our cluster management from PBSPro to LSF (LSF 7 Update 6). We have a running Galaxy using drmaa with PBSPro (with the "submit jobs as real users" option). We expected an easy transition to LSF, i.e. simply changing the drmaa implementation, but of course life is not that simple. So basically it is not working. We have tried with drmaa 1.0.4 and 1.0.3 (downloaded from http://sourceforge.net/projects/lsf-drmaa/).

Before getting to the symptoms: does anybody successfully run Galaxy with drmaa and LSF 7 Update 6?

Now the symptoms:
- First we had an error saying something like "queued as Job 5160 is submitted to default queue medium_priority" is not an id.
- We traced this in the drmaa C code and added a regex to actually extract the job id (if you are successfully running Galaxy with drmaa and LSF 7 Update 6, did you also have to do this??); but then a new error came:
- Jobs are successfully sent to the LSF queue and submitted to a node.
- After a few ms we get an error:

galaxy.jobs.runners.drmaa DEBUG 2012-12-17 11:14:29,227 (1699) submitting with credentials: sauer [uid: 8483]
galaxy.jobs.runners.drmaa DEBUG 2012-12-17 11:14:29,229 (1699) Job script for external submission is: /g/galaxy/galaxy-dev_data/pbs/1699.jt_json
galaxy.jobs.runners.drmaa INFO 2012-12-17 11:14:29,464 (1699) queued as Job 5160 is submitted to default queue medium_priority.
E #2bae [ 0.00] * call to lsb_openjobinfo returned with error 1: No matching job found mapped to 1040: Job does not exist in DRMs queue.
galaxy.jobs.runners.drmaa DEBUG 2012-12-17 11:14:30,275 (1699/Job 5160 is submitted to default queue medium_priority. 5160) job left DRM queue with following message: code 18: lsb_openjobinfo: XDR operation error

We are lost, and the PBSPro license runs out on January 1, so we badly need to fix this...

PS: Note that if we simply switch back to PBSPro, it is all working fine, which tells us that the Galaxy setup is OK.

Thx for your help
bw
Charles

=
Charles Girardot
European Molecular Biology Laboratory
E. Furlong Group http://furlonglab.embl.de
Tel: +49 6221 387-8585 (V205) or 8433 (V320)
Fax: +49-(0)6221-387-8166
Email: charles.girar...@embl.de
Room V205/V320 Meyerhofstraße 1, 69117 Heidelberg, Germany
=
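Charles mentions patching the lsf-drmaa C code with a regex to pull the numeric job id out of LSF's human-readable submission message. As an illustration of the parsing involved (in Python rather than C; the pattern is my guess at the message format, with or without the angle brackets bsub normally prints, and is not the patch actually applied):

```python
import re

# Assumed shape of LSF's submission confirmation, e.g.
#   "Job 5160 is submitted to default queue medium_priority."  (as in the log)
#   "Job <5160> is submitted to queue <medium_priority>."      (typical bsub output)
_LSF_SUBMIT = re.compile(r"Job <?(\d+)>? is submitted to .*queue")

def extract_lsf_job_id(message):
    """Return the numeric LSF job id embedded in a submission message,
    or None if the message does not look like one."""
    m = _LSF_SUBMIT.search(message)
    return m.group(1) if m else None
```

This also explains the garbled log line above ("(1699/Job 5160 is submitted to default queue medium_priority. 5160)"): the whole sentence, not the bare id, was being stored as the external job id, which is presumably why lsb_openjobinfo could not find a matching job.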