Slurm version 17.02.5 contains 18 bug fixes developed over the past
month.

Slurm version 17.11.0-pre1 is the first pre-release of version 17.11, to be
released in November 2017. This version contains support for scheduling a
workload across a set (federation) of clusters, which is described in some
detail here:
https://slurm.schedmd.com/SLUG16/FederatedScheduling.pdf

Details about the changes in each version are listed below.

Slurm downloads are available from http://www.schedmd.com/#repos

* Changes in Slurm 17.02.5
==========================
 -- Prevent segfault if a job was blocked from running by a QOS that is then
    deleted.
 -- Improve selection of jobs to preempt when there are multiple partitions
    with jobs subject to preemption.
 -- Only set kmem limit when ConstrainKmemSpace=yes is set in cgroup.conf.
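
    As a sketch, the kmem limit would now be applied only with a cgroup.conf
    entry such as the following (an illustrative fragment, not a recommended
    configuration):

    ```
    ###
    # cgroup.conf - illustrative fragment
    ###
    # Constrain the kernel memory (kmem) cgroup limit for jobs.
    # Without this line, no kmem limit is set as of 17.02.5.
    ConstrainKmemSpace=yes
    ```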
 -- Fix bug in task/affinity that could result in slurmd fatal error.
 -- Increase number of jobs that are tracked in the slurmd as finishing at one
    time.
 -- Note when a job finishes in the slurmd to avoid a race when launching a
    batch job takes longer than it takes to finish.
 -- Improve slurmd startup on large systems (> 10000 nodes).
 -- Add LaunchParameters option of cray_net_exclusive to control whether all
    jobs on the cluster have exclusive access to their assigned nodes.
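
    The new option would be enabled in slurm.conf like any other
    LaunchParameters flag; a minimal illustrative fragment:

    ```
    # slurm.conf - illustrative fragment (Cray systems)
    # Give all jobs on the cluster exclusive access to the
    # network on their assigned nodes.
    LaunchParameters=cray_net_exclusive
    ```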
 -- Make sure srun inside an allocation gets --ntasks-per-[core|socket]
    set correctly.
 -- Only make the extern step at job creation.
 -- Fix for job step task layout with --cpus-per-task option.
 -- Fix --ntasks-per-core option/environment variable parsing to set
    the requested value, instead of always setting one (srun).
 -- Correct error message when ClusterName in configuration files does not
    match the name in the slurmctld daemon's state save file.
 -- Better checking when a job is finishing to avoid underflow on jobs
    submitted to a QOS/association.
 -- Handle partition QOS submit limits correctly when a job is submitted to
    more than one partition or when the partition is changed with scontrol.
 -- Performance boost when Slurm is dealing with credentials.
 -- Fix race condition which could leave a stepd hung on shutdown.
 -- Add lua support for openSUSE.

* Changes in Slurm 17.11.0pre1
==============================
 -- Interpret all format options in output/error file to log prolog errors.
    Prior logic only supported the "%j" (job ID) option.
 -- Add the configure option --with-shared-libslurm which will link to
    libslurm.so instead of libslurm.o, thus reducing the footprint of all the
    binaries.
 -- In switch plugin, added plugin_id symbol to plugins and wrapped
    switch_jobinfo_t with dynamic_plugin_data_t in interface calls in order to
    pass switch information between clusters with different switch types.
 -- Switch naming of acct_gather_infiniband to acct_gather_interconnect.
 -- Make it so you can "stack" the interconnect plugins.
 -- Add a last_sched_eval timestamp to record when a job was last evaluated
    by the main scheduler or backfill.
 -- Add scancel "--hurry" option to avoid staging out any burst buffer data.
 -- Simplify the sched plugin interface.
 -- Add new advanced reservation flags of "weekday" (repeat on each weekday;
    Monday through Friday) and "weekend" (repeat on each weekend day; Saturday
    and Sunday).
 -- Add new advanced reservation flag of "flex", which permits jobs requesting
    the reservation to begin prior to the reservation's start time and use
    resources inside or outside of the reservation. A typical use case is to
    prevent jobs not explicitly requesting the reservation from using the
    reserved resources, without forcing jobs that do request the reservation
    to run only within the reserved time frame.
 -- Add NoDecay flag to QOS.
 -- Node "OS" field expanded from "sysname" to "sysname release version" (e.g.
    change from "Linux" to
    "Linux 4.8.0-28-generic #28-Ubuntu SMP Sat Feb 8 09:15:00 UTC 2017").
 -- jobcomp/elasticsearch - Add "job_name" and "wc_key" fields to stored
    information.
 -- jobcomp/filetxt - Add ArrayJobId, ArrayTaskId, ReservationName, Gres,
    Account, QOS, WcKey, Cluster, SubmitTime, EligibleTime, DerivedExitCode
    and ExitCode.
 -- scontrol modified to report core IDs for reservations containing
    individual cores.
 -- MYSQL - Get rid of table join during rollup, which speeds up the process
    dramatically on large job/step tables.
 -- Add ability to define features on clusters for directing federated jobs to
    different clusters.
 -- Add new RPC to process multiple federation RPCs in a single communication.
 -- Modify slurm_load_jobs() function to load job information from all
    clusters in a federation.
 -- Add squeue --local and --sibling options to modify filtering of jobs on
    federated clusters.
 -- Add SchedulerParameters option of bf_max_job_user_part to specify the
    maximum number of jobs per user for any single partition. This differs
    from bf_max_job_user in that a separate counter is applied to each
    partition rather than having a single counter per user applied to all
    partitions.
 -- Modify backfill logic so that bf_max_job_user, bf_max_job_part and
    bf_max_job_user_part options can all be used independently of each
    other.
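
    A sketch of how these backfill limits might be combined in slurm.conf (the
    option names come from the entries above; the values are illustrative
    placeholders, not recommendations):

    ```
    # slurm.conf - illustrative fragment
    # Backfill considers at most 20 jobs per user overall,
    # at most 50 jobs per partition, and at most 10 jobs
    # per user within any single partition.
    SchedulerParameters=bf_max_job_user=20,bf_max_job_part=50,bf_max_job_user_part=10
    ```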
 -- Add sprio -p/--partition option to filter jobs by partition name.
 -- Add partition name to job priority factor response message.
 -- Add sprio --local and --sibling options for use in a federation of
    clusters.
 -- Add sprio "%c" format to print cluster name in federation mode.
 -- Modify sinfo logic to provide a unified view of all nodes and partitions
    in a federation, add --local option to only report local state information
    even when the cluster is in a federation, print cluster name with "%V"
    format option, and optionally sort by cluster name.
 -- If a task in a parallel job fails and it was launched with the
    --kill-on-bad-exit option, then terminate the remaining tasks using the
    SIGCONT, SIGTERM and SIGKILL signals rather than just sending SIGKILL.
 -- Include submit_time when doing the sort for job scheduling.
 -- Modify sacct to report all jobs in federation by default. Also add --local
    option.
 -- Modify sacct to accept "--cluster all" option (in addition to the old
    "--cluster -1", which is still accepted).
 -- Modify sreport to report all jobs in federation by default. Also add
    --local option.
 -- sched/backfill: Improve assoc_limit_stop configuration parameter support.
 -- KNL features: Always keep active and available features in the same order:
    first site-specific features, next MCDRAM modes, last NUMA modes.
 -- Changed default ProctrackType to cgroup.
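
    The new default corresponds to a slurm.conf entry such as the following
    (illustrative; sites wanting the old behavior would set a different
    plugin):

    ```
    # slurm.conf - illustrative fragment
    # Track job processes via the cgroup plugin (the new default).
    ProctrackType=proctrack/cgroup
    ```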
 -- Add "cluster_name" field to node_info_t and partition_info_t data
    structures. It is filled in only when the cluster is part of a federation
    and the SHOW_FEDERATION flag is used.
 -- Functions slurm_load_node() and slurm_load_partitions() modified to show
    all nodes/partitions in a federation when the SHOW_FEDERATION flag is
    used.
 -- Add federated views to sview.
 -- Add --federation option to sacct, scontrol, sinfo, sprio, squeue, sreport
    to show a federated view. Will show local view by default.
 -- Add FederationParameters=fed_display slurm.conf option to configure status
    commands to display a federated view by default if the cluster is a member
    of a federation.
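
    A sketch of the corresponding slurm.conf entry (illustrative):

    ```
    # slurm.conf - illustrative fragment
    # Have status commands (sacct, sinfo, squeue, ...) display a
    # federated view by default when this cluster is in a federation.
    FederationParameters=fed_display
    ```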
 -- Log the down nodes whenever slurmctld restarts.
 -- Report "CPUs" plus "Boards" in a node configuration as invalid only if the
    CPUs value is not equal to the total thread count.
 -- Extend the output of the seff utility to also include the job's wall-clock
    time.
 -- Add bf_max_time to SchedulerParameters.
 -- Add bf_max_job_assoc to SchedulerParameters.
 -- Add new SchedulerParameters option bf_window_linear to control the rate at
    which the backfill test window expands. This can be used on a system with
    a modest number of running jobs (hundreds of jobs) to help prevent the
    expected start times of pending jobs from being pushed forward in time. On
    systems with large numbers of running jobs, performance of the backfill
    scheduler will suffer and fewer jobs will be evaluated.
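
    The new backfill options above slot into SchedulerParameters alongside the
    existing ones; a minimal illustrative sketch (the values are placeholders,
    not recommendations):

    ```
    # slurm.conf - illustrative fragment
    # Limit the time of a backfill cycle (bf_max_time), limit the
    # jobs evaluated per association (bf_max_job_assoc), and control
    # how quickly the backfill test window expands (bf_window_linear).
    SchedulerParameters=bf_max_time=300,bf_max_job_assoc=10,bf_window_linear=60
    ```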
 -- Improve scheduling logic with respect to license use and node reboots.
 -- CRAY - Alter algorithm to come up with the SLURM_ID_HASH.
 -- Implement federated scheduling and federated status outputs.
