Dear Marius,

Many thanks for your feedback.

Attached to this email are the playbook, the pulsarservers.yml file, and the log 
of the Pulsar playbook run.

The CLI plugin is the best solution for us, since it leaves us with nothing to 
maintain. DRMAA is not actively developed for Slurm, correct?


In the playbook we use systemd, which I think should restart Pulsar, but that 
might not be the case:

TASK [galaxyproject.pulsar : systemd daemon-reload and enable/start service] 
****************************************************************************
ok: [HPC]

RUNNING HANDLER [galaxyproject.pulsar : default restart pulsar handler] 
*********************************************************************************
skipping: [HPC]
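To rule this out, I suppose we could restart the service by hand and check the 
journal. A minimal sketch, assuming the systemd unit is simply named "pulsar" 
(the actual name may differ depending on the role variables):

```shell
# Restart Pulsar manually and confirm it actually restarted.
# Assumes the systemd unit is named "pulsar"; check with `systemctl list-units`.
sudo systemctl restart pulsar
systemctl show pulsar --property=ActiveEnterTimestamp   # timestamp of the last (re)start
sudo journalctl -u pulsar -n 50 --no-pager              # recent log lines from startup
```

If ActiveEnterTimestamp does not change after re-running the playbook, the 
handler really was skipped and Pulsar kept running with its old configuration.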

So far we have never actually used DRMAA. The jobs were executed immediately on 
the cluster, whether via CLI or DRMAA. We had this part in pulsarservers.yml to 
activate the CLI plugin:
  managers:
    _default_:
      type: queued_cli
      job_plugin: slurm
      native_specification: "-p batch --tasks=1 --cpus-per-task=2 --mem-per-cpu=1000 -t 10:00"
      min_polling_interval: 0.5
      amqp_publish_retry: True
      amqp_publish_retry_max_retries: 5
      amqp_publish_retry_interval_start: 10
      amqp_publish_retry_interval_step: 10
      amqp_publish_retry_interval_max: 60
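For comparison, if we ever do try DRMAA as suggested, my understanding is that 
mainly the manager type would change; a sketch only, assuming slurm-drmaa and 
its Python bindings are installed on the node (untested on our side):

```yaml
managers:
  _default_:
    type: queued_drmaa
    native_specification: "-p batch --tasks=1 --cpus-per-task=2 --mem-per-cpu=1000 -t 10:00"
```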


Thanks for your help,
Luc


------------
Luc Cornet, PhD
Bio-informatician 
Mycology and Aerobiology
Sciensano

----- Original Message -----
From: "Marius van den Beek" <m.vandenb...@gmail.com>
To: "Luc Cornet" <luc.cor...@uliege.be>
Cc: "HelpGalaxy" <galaxy-dev@lists.galaxyproject.org>, "Baurain Denis" 
<denis.baur...@uliege.be>, "Pierre Becker" <pierre.bec...@sciensano.be>, 
"Colignon David" <david.colig...@uliege.be>
Sent: Wednesday, June 30, 2021, 16:02:04
Subject: [galaxy-dev] Re: Galaxy install problems

Hi Luc, 

I'm sorry to hear that you're struggling to set up Galaxy to your liking. 
Let me start by pointing out that usegalaxy.org (http://usegalaxy.org/) uses 
Slurm with DRMAA; this is certainly going to be more performant and reliable 
than the CLI plugin. 
There is little maintenance necessary, so maybe that is why activity on 
slurm-drmaa is low (see https://github.com/natefoo/slurm-drmaa). 
I would be curious to know how you came to the conclusion that there is some 
incompatibility between DRMAA and Slurm. 
Note that one of the setups we teach during the training submits via DRMAA to 
Slurm. 

Then I'd like to point out that there is a huge variety of ways in which you 
can configure Galaxy and job submission. 
We teach the most common ones during the training week, with the aim that you 
understand how these things work together, 
as well as giving you a handle on how you can manage these different settings 
and services using a configuration management system. 
We cannot tailor a solution to your infrastructure during this week. 

About your problem specifically, I had asked this on gitter before: 

> Did you restart pulsar after rolling out the new config ? 

to which you've answered that you re-ran the playbook, but that's not a 
sufficient answer. 

Every playbook is different, and we cannot know whether yours includes a 
handler that restarts the Pulsar service. 
Also, please don't assume that everyone who could potentially help you knows 
Ansible and the playbooks being taught intimately, or knows in what ways you 
have customized your playbook. 
It is much more helpful to write up the relevant settings you've changed and 
the logs that go with it. 

You've also been asked to provide logs of the restart, which as far as I can 
tell you haven't provided. 
You had mentioned on gitter that Pulsar continues to use DRMAA to submit jobs, 
so you'll want to double-check whether you really restarted Pulsar after the 
config changes, look at Pulsar's startup logs, and find out how it is possible 
for Pulsar to submit jobs via DRMAA if it is not set up to do so. 

Best, 
Marius 




___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  %(web_page_url)s

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/

Attachment: pulsarservers.yml
Description: application/yaml

Attachment: pulsar.yml
Description: application/yaml

PLAY [pulsarservers] ************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************
ok: [HPC]

TASK [Install some packages] ****************************************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Set OS-specific variables] **************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Set facts for Galaxy CVMFS config repository, if enabled] *******************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Set facts for Galaxy CVMFS static repositories, if enabled] *****************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : Set facts for CVMFS config repository, if enabled] **************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : include_tasks] **************************************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.cvmfs/tasks/client.yml for HPC

TASK [galaxyproject.cvmfs : Include initial OS-specific tasks] ******************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.cvmfs/tasks/init_redhat.yml for HPC

TASK [galaxyproject.cvmfs : Configure CernVM yum repositories] ******************************************************************************************
ok: [HPC] => (item={'name': 'cernvm', 'description': 'CernVM packages', 'baseurl': 'http://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs/EL/$releasever/$basearch/'})
ok: [HPC] => (item={'name': 'cernvm-config', 'description': 'CernVM-FS extra config packages', 'baseurl': 'http://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs-config/EL/$releasever/$basearch/'})

TASK [galaxyproject.cvmfs : Install CernVM yum key] *****************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Install CernVM-FS packages and dependencies (yum)] **************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Include key setup tasks] ****************************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.cvmfs/tasks/keys.yml for HPC

TASK [galaxyproject.cvmfs : Make CernVM-FS key directories] *********************************************************************************************
ok: [HPC] => (item=/etc/cvmfs/keys/galaxyproject.org/cvmfs-config.galaxyproject.org.pub)

TASK [galaxyproject.cvmfs : Install CernVM-FS keys] *****************************************************************************************************
ok: [HPC] => (item=/etc/cvmfs/keys/galaxyproject.org/cvmfs-config.galaxyproject.org.pub)

TASK [galaxyproject.cvmfs : Check CernVM-FS for setup] **************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Ensure AutoFS is enabled + running] *****************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Perform AutoFS and FUSE configuration for CernVM-FS] ************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : Create config repo config] **************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Set config repo defaults] ***************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Remove domain configuration] ************************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Configure CernVM-FS domain] *************************************************************************************************

TASK [galaxyproject.cvmfs : Configure CernVM-FS global client settings] *********************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Include repository client options tasks] ************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.cvmfs/tasks/options.yml for HPC

TASK [galaxyproject.cvmfs : Set repository client options] **********************************************************************************************

TASK [galaxyproject.cvmfs : Install cvmfs_wipecache setuid binary] **************************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : Remove cvmfs_wipecache setuid binary] ***************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Install cvmfs_remount_sync setuid binary] ***********************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : Remove cvmfs_remount_sync setuid binary] ************************************************************************************
ok: [HPC]

TASK [galaxyproject.cvmfs : Download cvmfs_preload utility when desired] ********************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : include_tasks] **************************************************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : include_tasks] **************************************************************************************************************
skipping: [HPC]

TASK [galaxyproject.cvmfs : include_tasks] **************************************************************************************************************
skipping: [HPC]

TASK [galaxyproject.pulsar : Set privilege separation default variables] ********************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Include user creation tasks] ***********************************************************************************************
skipping: [HPC]

TASK [galaxyproject.pulsar : Include path management tasks] *********************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.pulsar/tasks/paths.yml for HPC

TASK [galaxyproject.pulsar : Get group IDs for pulsar users] ********************************************************************************************
ok: [HPC] => (item=pulsar)
ok: [HPC] => (item=root)

TASK [galaxyproject.pulsar : Get group names for pulsar users] ******************************************************************************************
ok: [HPC] => (item=pulsar)
ok: [HPC] => (item=root)

TASK [galaxyproject.pulsar : Set pulsar user facts] *****************************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Determine whether to restrict to group permissions] ************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Create pulsar_root] ********************************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Create Pulsar privilege separation user-owned directories] *****************************************************************
ok: [HPC] => (item=/opt/pulsar/config)

TASK [galaxyproject.pulsar : Create Pulsar user-owned directories] **************************************************************************************
ok: [HPC] => (item=/opt/pulsar/deps)
ok: [HPC] => (item=/opt/pulsar/files/persisted_data)
ok: [HPC] => (item=/opt/pulsar/files/staging)

TASK [galaxyproject.pulsar : Include virtualenv setup tasks] ********************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.pulsar/tasks/virtualenv.yml for HPC

TASK [galaxyproject.pulsar : Create Pulsar virtualenv] **************************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Ensure pip is the latest release] ******************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Install Pulsar] ************************************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Install Pulsar dependencies (optional)] ************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Include config files tasks] ************************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.pulsar/tasks/configure.yml for HPC

TASK [galaxyproject.pulsar : Create Pulsar config dir] **************************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : Create Pulsar app configuration file] **************************************************************************************
changed: [HPC]

TASK [galaxyproject.pulsar : Create Pulsar job manager configuration file] ******************************************************************************
skipping: [HPC]

TASK [galaxyproject.pulsar : Create additional Pulsar config files] *************************************************************************************
ok: [HPC] => (item=server.ini)
ok: [HPC] => (item=local_env.sh)
ok: [HPC] => (item=job_metrics_conf.xml)
ok: [HPC] => (item=dependency_resolvers_conf.xml)

TASK [galaxyproject.pulsar : include_tasks] *************************************************************************************************************
included: /home/galaxyluc/galaxy/roles/galaxyproject.pulsar/tasks/systemd.yml for HPC

TASK [galaxyproject.pulsar : Create Pulsar systemd unit file] *******************************************************************************************
ok: [HPC]

TASK [galaxyproject.pulsar : systemd daemon-reload and enable/start service] ****************************************************************************
ok: [HPC]

RUNNING HANDLER [galaxyproject.pulsar : default restart pulsar handler] *********************************************************************************
skipping: [HPC]

PLAY RECAP **********************************************************************************************************************************************
HPC             : ok=43   changed=1    unreachable=0    failed=0    skipped=13   rescued=0    ignored=0   
