[slurm-users] RES: multiple srun commands in the same SLURM script

2023-10-31 Thread Paulo Jose Braga Estrela
Hi,

I think you have a syntax error in your bash script. The "&" means that you 
want to send a process to the background, not that you want to run many commands 
in parallel. To run commands in a serial fashion you should use cmd && cmd2; 
then cmd2 will only be executed if cmd returns 0 as its exit code.
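
For illustration, a minimal shell sketch of the difference (cmd1 and cmd2 are 
placeholders, not commands from the original script):

# "&" sends each command to the background, so both start immediately and run concurrently:
cmd1 &
cmd2 &
wait            # block until both background jobs have finished

# "&&" chains them serially; cmd2 runs only if cmd1 exits with status 0:
cmd1 && cmd2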

To run commands in parallel with srun, you should set the number of tasks to 4, 
so that srun spawns 4 tasks of the same command. Take a look at the examples 
section of the srun docs (https://slurm.schedmd.com/srun.html).
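
As a rough sketch of that approach (not a tested script; the partition name and 
the GPU options are placeholders and depend on your Slurm version), a single 
srun can spawn all 4 tasks, one GPU each:

#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu        # placeholder partition
#SBATCH --ntasks=4             # 4 tasks in the allocation
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-task=1      # one GPU per task, 4 GPUs in total

# One srun launches 4 copies of the same command in parallel:
srun --mpi=pmi2 python gpu_test.py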




-Original Message-
From: slurm-users  On Behalf Of Andrei Berceanu
Sent: Tuesday, October 31, 2023 07:51
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] multiple srun commands in the same SLURM script

Here is my SLURM script:

#!/bin/bash

#SBATCH --job-name="gpu_test"
#SBATCH --output=gpu_test_%j.log   # Standard output and error log
#SBATCH --account=berceanu_a+

#SBATCH --partition=gpu
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=31200m   # Reserve 32 GB of RAM per core
#SBATCH --time=12:00:00# Max allowed job runtime
#SBATCH --gres=gpu:16   # Allocate four GPUs

export SLURM_EXACT=1

srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &

wait

What I expect this to do is to run, in parallel, 4 independent copies of the 
gpu_test.py python script, using 4 out of the 16 GPUs on this node.

What it actually does is it only runs the script on a single GPU - it's as if 
the other 3 srun commands do nothing. Perhaps they do not see any available 
GPUs for some reason?

System info:

slurm 19.05.2

Linux 5.4.0-90-generic #101~18.04.1-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
gpu  up   infinite  1   idle thor

NodeName=thor Arch=x86_64 CoresPerSocket=24
   CPUAlloc=0 CPUTot=48 CPULoad=0.45
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=gpu:16(S:0-1)
   NodeAddr=thor NodeHostName=thor
   OS=Linux 5.4.0-90-generic #101~18.04.1-Ubuntu SMP Fri Oct 22
09:25:04 UTC 2021
   RealMemory=1546812 AllocMem=0 FreeMem=1433049 Sockets=2 Boards=1
   State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=gpu
   BootTime=2023-08-09T14:58:01 SlurmdStartTime=2023-08-09T14:58:36
   CfgTRES=cpu=48,mem=1546812M,billing=48,gres/gpu=16
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

I can add any additional system info as required.

Thank you so much for taking the time to read this,

Regards,
Andrei




[slurm-users] RES: RES: Change something in user's script using job_submit.lua plugin

2023-10-31 Thread Paulo Jose Braga Estrela
Yes, reading the sources I found that the _update_job function in job_mgr.c is 
responsible for calling the job_submit_plugin_modify function. After calling it, 
_update_job validates and applies the changes made by the plugin to many job 
record fields, but it doesn't touch the script field. So, for now, we can't 
change a job's script using a job_submit plugin. I'll see whether that was done 
intentionally to prevent something from going wrong or whether it was simply overlooked.

By the way, thanks for trying to help me, Ole!



-Original Message-
From: Ole Holm Nielsen 
Sent: Saturday, October 28, 2023 03:48
To: Slurm User Community List 
Cc: Paulo Jose Braga Estrela 
Subject: Re: RES: [slurm-users] Change something in user's script using 
job_submit.lua plugin

Hi Paulo,

Maybe what you see is due to a bug then?  You might try updating Slurm to see 
if it has been fixed.

You should not use the Slurm RPMs from EPEL - I think offering these RPMs was a 
mistake.

Anyway you ought to upgrade to the latest Slurm 23.02.6 since a serious 
security issue was fixed a couple of weeks ago.  Older Slurm versions are all 
affected!  Perhaps this Wiki guide can help you upgrade to the latest RPM: 
https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_installation/

/Ole


On 27-10-2023 13:13, Paulo Jose Braga Estrela wrote:
> Yes, the script is running and changing other fields like comment, partition, 
> account is working fine. The only problem seems to be the script field of 
> job_rec. I'm using Slurm 20.11.9 from the EPEL repository for RHEL 8. Thank you 
> for sharing your Wiki. I've accessed it before. It's really useful for HPC 
> engineers.
>
> Best regards,
>
>
> -Original Message-
> From: slurm-users  On Behalf Of Ole
> Holm Nielsen
> Sent: Friday, October 27, 2023 03:31
> To: slurm-users@lists.schedmd.com
> Subject: Re: [slurm-users] Change something in user's script using
> job_submit.lua plugin
>
> Hi Paulo,
>
> Which Slurm version do you have, and did you set this in slurm.conf:
> JobSubmitPlugins=lua ?
>
> Perhaps you may find some useful information in this Wiki page:
> https://wiki.fysik.dtu.dk/Niflheim_system/Slurm_configuration/#job-submit-plugins
>
> /Ole
>
>
> On 26-10-2023 19:07, Paulo Jose Braga Estrela wrote:
>> Is it possible to change something in a user's sbatch script by using a
>> job_submit plugin? To be more specific, using the Lua job_submit plugin.
>>
>> I'm trying to do the following in job_submit.lua when a user changes a
>> job's partition to the "cloud" partition, but the script gets executed
>> without modification.
>>
>> function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
>>     if job_desc.partition == "cloud" then
>>         slurm.log_info("slurm_job_modify: Bursting job %u from uid %u to the cloud...", job_rec.job_id, modify_uid)
>>         script = job_rec.script
>>         slurm.log_info("Script BEFORE change: %s", script)
>>         -- changing user command to another command
>>         script = string.gsub(script, "local command", "cloud command")
>>         slurm.log_info("Script AFTER change %s", script)
>>         -- The script variable is really changed
>>         job_rec.script = script
>>         slurm.log_info("Job RECORD SCRIPT %s", job_rec.script)
>>         -- The job record also got changed, but the EXECUTED script isn't changed at all. It runs without modification.
>>     end
>>     return slurm.SUCCESS
>> end
>>
>> *PAULO ESTRELA*


[slurm-users] RES: How to delay the start of slurmd until Infiniband/OPA network is fully up?

2023-10-31 Thread Paulo Jose Braga Estrela
I think that you should use NetworkManager-wait-online.service in RHEL 8. Take 
a look at its man page. It only lets the system reach network-online.target after 
all network interfaces are online. So, if your OPA interfaces are managed by 
NetworkManager, you can use it.
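
A minimal sketch of that idea (the drop-in path and file name below are just 
examples, not a tested recipe):

# Make network-online.target wait until all NetworkManager-managed interfaces are up:
systemctl enable NetworkManager-wait-online.service

# Then order slurmd after network-online.target with a systemd drop-in, e.g.
# /etc/systemd/system/slurmd.service.d/wait-for-network.conf:
[Unit]
Wants=network-online.target
After=network-online.target

# Reload systemd and restart slurmd so the new ordering takes effect:
systemctl daemon-reload
systemctl restart slurmd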


-Original Message-
From: slurm-users  On Behalf Of Ole Holm 
Nielsen
Sent: Tuesday, October 31, 2023 07:00
To: Slurm User Community List 
Subject: Re: [slurm-users] How to delay the start of slurmd until 
Infiniband/OPA network is fully up?

Hi Jeffrey,

On 10/30/23 20:15, Jeffrey R. Lang wrote:
> The service is available in RHEL 8 via the EPEL package repository as 
> system-networkd, i.e. systemd-networkd.x86_64 
>   253.4-1.el8epel

Thanks for the info.  We can install the systemd-networkd RPM from the EPEL 
repo as you suggest.

I tried to understand the properties of systemd-networkd before implementing it 
in our compute nodes.  While there are lots of networkd man-pages, it's harder 
to find an overview of the actual properties of networkd.  This is what I found:

* Networkd is a service included in recent versions of Systemd.  It seems to be 
an alternative to NetworkManager.

* Red Hat has stated that systemd-networkd is NOT going to be implemented in 
RHEL 8 or 9.

* Comparing systemd-networkd and NetworkManager:
https://fedoracloud.readthedocs.io/en/latest/networkd.html

* Networkd is described in the Wikipedia article
https://en.wikipedia.org/wiki/Systemd

While networkd seems to be really nifty, I hesitate to replace NetworkManager 
by networkd on our EL8 and EL9 systems because this is an unsupported and only 
lightly tested setup, and it may require additional work to keep our systems 
up-to-date in the future.

It seems to me that Max Rutkowski's solution in
https://github.com/maxlxl/network.target_wait-for-interfaces is less intrusive 
than converting to systemd-networkd.

Best regards,
Ole


> -Original Message-
> From: slurm-users  On Behalf Of
> Ole Holm Nielsen
> Sent: Monday, October 30, 2023 1:56 PM
> To: slurm-users@lists.schedmd.com
> Subject: Re: [slurm-users] How to delay the start of slurmd until 
> Infiniband/OPA network is fully up?
>
>
>
> Hi Jens,
>
> Thanks for your feedback:
>
> On 30-10-2023 15:52, Jens Elkner wrote:
>> Actually there is no need for such a script since
>> /lib/systemd/systemd-networkd-wait-online should be able to handle it.
>
> It seems that systemd-networkd exists in Fedora FC38 Linux, but not in
> RHEL 8 and clones, AFAICT.




Re: [slurm-users] How to delay the start of slurmd until Infiniband/OPA network is fully up?

2023-10-31 Thread Jens Elkner
On Tue, Oct 31, 2023 at 10:59:56AM +0100, Ole Holm Nielsen wrote:
Hi Ole,
  
TL;DR: systemd-networkd stuff below, only.

> On 10/30/23 20:15, Jeffrey R. Lang wrote:
> > The service is available in RHEL 8 via the EPEL package repository as 
> > system-networkd, i.e. systemd-networkd.x86_64   
> > 253.4-1.el8epel
> 
> Thanks for the info.  We can install the systemd-networkd RPM from the EPEL
> repo as you suggest.

Strange that it is not installed by default. We use Ubuntu only; the first LTS 
which includes it is Xenial (16.04), released in April 2016. Anyway, we have 
never installed any NetworkManager stuff (too inflexible, unreliable, buggy - 
last evaluated ~5 years ago and ditched forever), and even before 16.04 I 
ditch[ed] it on desktops as well (IMHO just overhead).

> I tried to understand the properties of systemd-networkd before implementing
> it in our compute nodes.  While there are lots of networkd man-pages, it's
> harder to find an overview of the actual properties of networkd.  This is
> what I found:

Basically, you just need a *.netdev and a *.network file for each interface in 
/etc/systemd/network/.  Optionally symlink /etc/resolv.conf to
/run/systemd/resolve/resolv.conf.  If you want to rename your
interface[s] (e.g. we use ${hostname}${ifidx}) and the parameter
'net.ifnames=0' gets passed to the kernel, you can use a *.link file to
accomplish this. That's it. See example 1 below.

Some distros have obscure bloatware to manage them (e.g. Ubuntu installs
'netplan.io' by default, i.e. yet another layer of indirection), but we ditch
those packages immediately and manage the files "manually" as needed.
 
> * Comparing systemd-networkd and NetworkManager:
> https://fedoracloud.readthedocs.io/en/latest/networkd.html

Pretty good - it shows all you probably need. Actually, within containers we
have just /etc/systemd/network/40-${hostname}0.network, because the
lxc.net.* config already describes what *.link and *.netdev would do.
See example 2 below.
  
...
> While networkd seems to be really nifty, I hesitate to replace

Does/can do all we need w/o a lot of overhead.

> NetworkManager by networkd on our EL8 and EL9 systems because this is an
> unsupported and only lightly tested setup,

We have used it for ~5 years on all machines, ~7 years on most of them:
multihomed, containers, simple and complex setups (i.e. a lot of NICs, VLANs),
w/o any problems ... 

> and it may require additional
> work to keep our systems up-to-date in the future.

I doubt that. The /etc/systemd/network/*.{link,netdev,network} interface
seems to be pretty stable. Haven't seen/noticed anything that got
removed so far.

> It seems to me that Max Rutkowski's solution in
> https://github.com/maxlxl/network.target_wait-for-interfaces is less
> intrusive than converting to systemd-networkd.

Depends on your setup/environment. But I guess sooner or later you'll need
to get in touch with it anyway. So here are some examples:

Example 1:
--
# /etc/systemd/network/10-mb0.link
# we rename usually eth0, the 1st NIC on the motherboard to mb0 using
# its PCI Address to identify it
[Match]
Path=pci-:00:19.0

[Link]
Name=mb0 
MACAddressPolicy=persistent


# /etc/systemd/network/25-phys-2-vlans+vnics.network
[Match]
Name=mb0

[Link]
ARP=false

[Network]
LinkLocalAddressing=no
LLMNR=false
IPv6AcceptRA=no
LLDP=true
MACVLAN=node1_0
#VLAN=vlan2
#VLAN=vlan3


# /etc/systemd/network/40-node1_0.netdev
[NetDev]
Name=node1_0
Kind=macvlan
# Optional: we use fix mac addr on vnics
MACAddress=00:01:02:03:04:00

[MACVLAN]
Mode=bridge


# /etc/systemd/network/40-node1_0.network
[Match]
Name=node1_0

[Network]
LinkLocalAddressing=no
LLMNR=false
IPv6AcceptRA=no
LLDP=no
Address=10.11.12.13/24
Gateway=10.11.12.200
# stuff which gets copied to /run/systemd/resolve/resolv.conf, when ready
Domains=my.do.main an.other.do.main
DNS=10.11.12.100 10.11.12.101

 
Example 2 (LXC):

# /zones/n00-00/config
...
lxc.net.0.type = macvlan
lxc.net.0.macvlan.mode = bridge
lxc.net.0.flags = up
lxc.net.0.link = mb0
lxc.net.0.name = n00-00_0
lxc.net.0.hwaddr = 00:01:02:03:04:01
...


# /zones/n00-00/rootfs/etc/systemd/network/40-n00-00_0.network
[Match]
Name=n00-00_0

[Network]
LLMNR=false
LLDP=no
LinkLocalAddressing=no
IPv6AcceptRouterAdvertisements=no
Address=10.12.11.0/16
Gateway=10.12.11.2
Domains=gpu.do.main


Have fun,
jel.
> Best regards,
> Ole
> 
> 
> > -Original Message-
> > From: slurm-users  On Behalf Of Ole 
> > Holm Nielsen
> > Sent: Monday, October 30, 2023 1:56 PM
> > To: slurm-users@lists.schedmd.com
> > Subject: Re: [slurm-users] How to delay the start of slurmd until 
> > Infiniband/OPA network is fully up?
> > 
> > 
> > 
> > Hi Jens,
> > 
> > Thanks for your feedback:
> > 
> > On 30-10-2023 15:52, Jens Elkner wrote:
> > > Actually there is no need for such a script since
> > 

[slurm-users] multiple srun commands in the same SLURM script

2023-10-31 Thread Andrei Berceanu
Here is my SLURM script:

#!/bin/bash

#SBATCH --job-name="gpu_test"
#SBATCH --output=gpu_test_%j.log   # Standard output and error log
#SBATCH --account=berceanu_a+

#SBATCH --partition=gpu
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=31200m   # Reserve 32 GB of RAM per core
#SBATCH --time=12:00:00# Max allowed job runtime
#SBATCH --gres=gpu:16   # Allocate four GPUs

export SLURM_EXACT=1

srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &
srun --mpi=pmi2 -n 1 --gpus-per-node 1 python gpu_test.py &

wait

What I expect this to do is to run, in parallel, 4 independent copies
of the gpu_test.py python script, using 4 out of the 16 GPUs on this
node.

What it actually does is it only runs the script on a single GPU -
it's as if the other 3 srun commands do nothing. Perhaps they do not
see any available GPUs for some reason?

System info:

slurm 19.05.2

Linux 5.4.0-90-generic #101~18.04.1-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
gpu  up   infinite  1   idle thor

NodeName=thor Arch=x86_64 CoresPerSocket=24
   CPUAlloc=0 CPUTot=48 CPULoad=0.45
   AvailableFeatures=(null)
   ActiveFeatures=(null)
   Gres=gpu:16(S:0-1)
   NodeAddr=thor NodeHostName=thor
   OS=Linux 5.4.0-90-generic #101~18.04.1-Ubuntu SMP Fri Oct 22
09:25:04 UTC 2021
   RealMemory=1546812 AllocMem=0 FreeMem=1433049 Sockets=2 Boards=1
   State=IDLE ThreadsPerCore=1 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
   Partitions=gpu
   BootTime=2023-08-09T14:58:01 SlurmdStartTime=2023-08-09T14:58:36
   CfgTRES=cpu=48,mem=1546812M,billing=48,gres/gpu=16
   AllocTRES=
   CapWatts=n/a
   CurrentWatts=0 AveWatts=0
   ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

I can add any additional system info as required.

Thank you so much for taking the time to read this,

Regards,
Andrei



Re: [slurm-users] How to delay the start of slurmd until Infiniband/OPA network is fully up?

2023-10-31 Thread Ole Holm Nielsen

Hi Jeffrey,

On 10/30/23 20:15, Jeffrey R. Lang wrote:

The service is available in RHEL 8 via the EPEL package repository as 
system-networkd, i.e. systemd-networkd.x86_64   
253.4-1.el8epel


Thanks for the info.  We can install the systemd-networkd RPM from the 
EPEL repo as you suggest.


I tried to understand the properties of systemd-networkd before 
implementing it in our compute nodes.  While there are lots of networkd 
man-pages, it's harder to find an overview of the actual properties of 
networkd.  This is what I found:


* Networkd is a service included in recent versions of Systemd.  It seems 
to be an alternative to NetworkManager.


* Red Hat has stated that systemd-networkd is NOT going to be implemented 
in RHEL 8 or 9.


* Comparing systemd-networkd and NetworkManager: 
https://fedoracloud.readthedocs.io/en/latest/networkd.html


* Networkd is described in the Wikipedia article 
https://en.wikipedia.org/wiki/Systemd


While networkd seems to be really nifty, I hesitate to replace 
NetworkManager by networkd on our EL8 and EL9 systems because this is an 
unsupported and only lightly tested setup, and it may require additional 
work to keep our systems up-to-date in the future.


It seems to me that Max Rutkowski's solution in 
https://github.com/maxlxl/network.target_wait-for-interfaces is less 
intrusive than converting to systemd-networkd.


Best regards,
Ole



-Original Message-
From: slurm-users  On Behalf Of Ole Holm 
Nielsen
Sent: Monday, October 30, 2023 1:56 PM
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] How to delay the start of slurmd until 
Infiniband/OPA network is fully up?



Hi Jens,

Thanks for your feedback:

On 30-10-2023 15:52, Jens Elkner wrote:

Actually there is no need for such a script since
/lib/systemd/systemd-networkd-wait-online should be able to handle it.


It seems that systemd-networkd exists in Fedora FC38 Linux, but not in
RHEL 8 and clones, AFAICT.