Hi Christian,
The top stats were created just after a reboot. Apart from the base machine
itself there is no application active. Next I enabled the disk devices and saw
the memory usage increase. All of this happened within only one or two minutes.
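For reference, the enable sequence per device was essentially the following
(0.0.0200 is a placeholder bus ID, the real IDs differ), with a quick memory
check in between devices:

    cio_ignore -r 0.0.0200              # make the device visible, if the cio_ignore list is in use
    chccwdev -e 0.0.0200                # set the CCW device online
    lsdasd                              # confirm the DASD shows up as active
    grep MemAvailable /proc/meminfo     # check available memory after each device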
As for the OOM listing, below is the listing from when the error starts. A number
of devices had been enabled and then blk-mq reports an error. After this a number
of processes are killed, each with a similar OOM listing. In the end so many
processes are killed that the machine must be rebooted to get a functional
system again. (The killed processes include my user's bash, systemd processes and
syslog, to name a few.)
Nov 7 10:31:22 zlx-ml01 kernel: [ 115.530263] blk-mq: failed to allocate
request map
Nov 7 10:31:22 zlx-ml01 kernel: [ 115.530286] dasd.a35e01: 0.0.021b Setting
the DASD online failed because of a missing discipline
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028054] chccwdev invoked oom-killer:
gfp_mask=0x15200c2(GFP_HIGHUSER|__GFP_ACCOUNT), nodemask=(null), order=0,
oom_score_adj=0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028055] chccwdev cpuset=/ mems_allowed=0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028060] CPU: 0 PID: 1232 Comm: chccwdev
Not tainted 4.15.0-66-generic #75-Ubuntu
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028061] Hardware name: IBM 3906 M02 608
(z/VM 7.1.0)
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028061] Call Trace:
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028068] ([<000000000011418e>]
show_stack+0x56/0x80)
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028072] [<00000000008d7d8a>]
dump_stack+0x82/0xb8
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028076] [<00000000002da8d6>]
dump_header+0x8e/0x308
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028077] [<00000000002d9882>]
oom_kill_process+0x2ca/0x4d8
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028079] [<00000000002da4c6>]
out_of_memory+0x27e/0x588
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028080] [<00000000002e1782>]
__alloc_pages_nodemask+0xf62/0xfe8
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028083] [<0000000000394790>]
pipe_write+0x280/0x468
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028085] [<0000000000387416>]
new_sync_write+0x10e/0x158
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028087] [<000000000038a120>]
vfs_write+0xa8/0x1e8
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028088] [<000000000038a444>]
SyS_write+0x6c/0xf0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028090] [<00000000008f7458>]
system_call+0xdc/0x2c8
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028091] Mem-Info:
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] active_anon:1 inactive_anon:0
isolated_anon:0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] active_file:105
inactive_file:140 isolated_file:0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] unevictable:0 dirty:10
writeback:0 unstable:0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] slab_reclaimable:8050
slab_unreclaimable:11953
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] mapped:213 shmem:0
pagetables:783 bounce:0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028094] free:3589 free_pcp:0
free_cma:785
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028096] Node 0 active_anon:4kB
inactive_anon:0kB active_file:420kB inactive_file:560kB unevictable:0kB
isolated(anon):0kB isolated(file):0kB mapped:852kB dirty:40kB writeback:0kB
shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB
unstable:0kB all_unreclaimable? no
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028097] Node 0 DMA free:14356kB
min:11264kB low:14080kB high:16896kB active_anon:4kB inactive_anon:0kB
active_file:420kB inactive_file:560kB unevictable:0kB writepending:56kB
present:921600kB managed:683064kB mlocked:0kB kernel_stack:2688kB
pagetables:3132kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:3140kB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028100] lowmem_reserve[]: 0 0 0
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028102] Node 0 DMA: 151*4kB (UEC)
339*8kB (UEHC) 454*16kB (UEHC) 109*32kB (UEHC) 5*64kB (HC) 0*128kB 0*256kB
0*512kB 0*1024kB = 14388kB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028110] Node 0 hugepages_total=0
hugepages_free=0 hugepages_surp=0 hugepages_size=1024kB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028111] 255 total pagecache pages
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028111] 4 pages in swap cache
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028112] Swap cache stats: add 19045,
delete 19041, find 7856/13443
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028112] Free swap = 687084kB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028113] Total swap = 719852kB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028113] 230400 pages RAM
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028114] 0 pages HighMem/MovableOnly
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028114] 59634 pages reserved
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028115] 1024 pages cma reserved
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028115] Unreclaimable slab info:
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028116] Name Used
Total
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028117] qdio_q 31KB
31KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028120] TCPv6 31KB
31KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028122] mqueue_inode_cache 32KB
32KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028125] fsnotify_mark 3KB
3KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028126] UNIX 315KB
315KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028128] UDP 125KB
125KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028129] request_sock_TCP 15KB
15KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028129] TCP 32KB
32KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028130] hugetlbfs_inode_cache
31KB 31KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028132] eventpoll_pwq 15KB
15KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028132] eventpoll_epi 32KB
32KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028133] inotify_inode_mark 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028135] request_queue 62KB
62KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028136] blkdev_ioc 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028137] khugepaged_mm_slot 3KB
3KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028138] file_lock_cache 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028139] shmem_inode_cache 518KB
518KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028140] taskstats 15KB
15KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028141] sigqueue 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028142] kernfs_node_cache 2725KB
2725KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028144] nsproxy 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028145] vm_area_struct 712KB
712KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028146] signal_cache 320KB
320KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028146] sighand_cache 378KB
378KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028148] task_struct 700KB
768KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028155] cred_jar 385KB
624KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028156] anon_vma 138KB
138KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028157] pid 196KB
196KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028158] numa_policy 31KB
31KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028159] trace_event_file 106KB
106KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028160] ftrace_event_field 143KB
143KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028161] task_group 31KB
31KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028162] dma-kmalloc-4096 96KB
96KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028163] dma-kmalloc-1024 96KB
96KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028164] dma-kmalloc-512 32KB
32KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028164] dma-kmalloc-256 8KB
8KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028165] dma-kmalloc-128 4KB
4KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028166] dma-kmalloc-64 4KB
4KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028166] dma-kmalloc-32 4KB
4KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028167] dma-kmalloc-16 4KB
4KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028168] dma-kmalloc-8 4KB
4KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028168] dma-kmalloc-192 7KB
7KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028169] kmalloc-8192 18784KB
18784KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028170] kmalloc-4096 5732KB
5756KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028171] kmalloc-2048 7024KB
7024KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028171] kmalloc-1024 5248KB
5248KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028172] kmalloc-512 1520KB
1520KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028174] kmalloc-256 323KB
384KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028175] kmalloc-192 409KB
409KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028175] kmalloc-128 140KB
140KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028179] kmalloc-96 266KB
303KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028179] kmalloc-64 152KB
152KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028180] kmalloc-32 96KB
96KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028181] kmalloc-16 56KB
56KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028181] kmalloc-8 72KB
72KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028182] kmem_cache_node 8KB
8KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028183] kmem_cache 64KB
64KB
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028183] [ pid ] uid tgid total_vm
rss pgtables_bytes swapents oom_score_adj name
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028186] [ 379] 0 379 11886
67 131072 180 0 systemd-journal
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028187] [ 401] 0 401 37932
0 63488 86 0 lvmetad
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028189] [ 439] 0 439 3677
10 69632 453 -1000 systemd-udevd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028193] [ 471] 100 471 2766
0 81920 177 0 systemd-network
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028204] [ 623] 101 623 2519
2 83968 221 0 systemd-resolve
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028205] [ 624] 0 624 1530
0 67584 110 0 rpcbind
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028207] [ 625] 62583 625 20786
0 86016 165 0 systemd-timesyn
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028208] [ 638] 102 638 54882
31 98304 258 0 rsyslogd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028209] [ 640] 0 640 58153
0 96256 247 0 accounts-daemon
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028210] [ 641] 0 641 2447
0 81920 187 0 systemd-logind
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028212] [ 642] 0 642 24624
2 108544 1956 0 networkd-dispat
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028213] [ 643] 0 643 1224
2 59392 69 0 cron
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028214] [ 644] 103 644 1580
2 69632 171 -900 dbus-daemon
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028216] [ 679] 0 679 1032
0 53248 38 0 agetty
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028217] [ 682] 0 682 1032
0 53248 38 0 agetty
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028218] [ 683] 0 683 1032
0 55296 39 0 agetty
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028219] [ 690] 0 690 8933
2 71680 113 0 automount
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028221] [ 722] 0 722 2476
0 81920 182 -1000 sshd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028222] [ 852] 0 852 8971
0 67584 127 0 master
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028223] [ 853] 107 853 9050
2 92160 127 0 pickup
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028225] [ 854] 107 854 9064
2 75776 130 0 qmgr
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028226] [ 857] 0 857 2718
2 118784 245 0 sshd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028227] [ 860] 1000 860 2942
0 88064 271 0 systemd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028228] [ 861] 1000 861 3504
0 106496 553 0 (sd-pam)
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028230] [ 900] 1000 900 2718
0 118784 264 0 sshd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028231] [ 903] 1000 903 1597
2 61440 303 0 bash
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028232] [ 921] 0 921 1654
2 77824 124 0 sudo
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028233] [ 922] 0 922 1534
0 75776 118 0 su
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028234] [ 923] 0 923 1612
0 59392 302 0 bash
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028236] [ 1000] 0 1000 2718
2 100352 246 0 sshd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028237] [ 1041] 1000 1041 2718
0 100352 269 0 sshd
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028238] [ 1042] 1000 1042 1597
0 77824 303 0 bash
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028239] [ 1052] 1000 1052 1857
2 69632 174 0 top
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028240] [ 1219] 0 1219 1345
2 55296 102 0 chccwdev
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028242] [ 1232] 0 1232 1345
81 53248 98 0 chccwdev
Nov 7 10:31:23 zlx-ml01 kernel: [ 116.028243] [ 1233] 0 1233 1345
0 53248 98 0 chccwdev
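For reference, a hedged way to watch the unreclaimable slab caches above
(kmalloc-8192 is the largest entry in this dump) grow while devices are being
enabled, assuming slabtop from procps is installed:

    slabtop -o -s c | head -n 20                            # one-shot dump, sorted by cache size
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo    # kernel's own slab summary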
Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
Berry van Sleeuwen
Flight Forum 3000 5657 EW Eindhoven
-----Original Message-----
From: Linux on 390 Port <[email protected]> On Behalf Of Christian
Ehrhardt
Sent: Friday, November 08, 2019 3:45 PM
To: [email protected]
Subject: Re: Ubuntu 18-04 unable to add DASD
On Thu, Nov 7, 2019 at 6:16 PM van Sleeuwen, Berry <[email protected]>
wrote:
>
> Hi Christian,
>
> Indeed I did expect to easily add disks to the machine. Our SUSE machines
> have a lot of DASD and run in pretty small machines. So I was very surprised
> to see this problem.
>
> As for the processes: in top, [kswapd0] showed the largest CPU load. The memory
> figures in top also showed that all memory was full and, even though it's just
> an 'empty' machine, swap space usage had increased. I didn't run iotop; if
> anything I was already happy to be able to start top at all. But since [kswapd0]
> was the active process I assumed swapping was the cause of the high I/O load. In
> z/VM I could see over 4000 I/O/s on DASD (Performance Toolkit panel FCX115).
OK, let's assume for now that all CPU and I/O was driven by swapping.
> Meanwhile I have moved the installed image from the installer (virtual
> storage 2GB) into the production configuration (virtual storage 32GB). Now I
> can activate disks without any problem, but it also shows why it was a
> problem in the installer system.
>
> After boot: (29 ECKD disks)
> Tasks: 136 total, 1 running, 60 sleeping, 0 stopped, 0 zombie
> KiB Mem : 32807736 total, 31001912 free, 1637604 used, 168220 buff/cache
> KiB Swap: 719852 total, 719852 free, 0 used. 30877584 avail Mem
Unfortunately these stats mean little on their own, as they include all sorts of
caching and other things that would be pushed out of memory before swapping or
even an OOM.
>
> So with the primary disk configuration the machine had already allocated
> 1.6GB; given the available memory in the installer system that was already
> too much. After I activated another 23 disks the used memory was 2.4GB,
> and after activating another 7 disks it was 2.6GB.
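A minimal sketch of how that per-disk cost could be quantified more precisely
(the bus IDs are placeholders): bring the devices online one at a time and record
MemAvailable after each one:

    for dev in 0.0.0201 0.0.0202 0.0.0203; do    # placeholder bus IDs
        chccwdev -e "$dev"
        printf '%s: %s\n' "$dev" "$(grep MemAvailable /proc/meminfo)"
    done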
>
> I do have the OOM list in /var/log/syslog but I am unsure what to look for.
Usually an OOM report includes something like a top-scorer list of processes and
their consumption.
It is a table starting with a header like this (details depend on the kernel
version)
[ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents
oom_score_adj name
But actually the whole section starting with "Out of memory: Kill ..."
might help.
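Depending on where the kernel messages end up, one way to pull that whole
section out of the logs is roughly:

    grep -B 5 -A 120 'invoked oom-killer' /var/log/syslog    # context around the report
    dmesg -T | grep -A 120 'Out of memory'                    # or from the kernel ring buffer
    journalctl -k | grep -A 120 'invoked oom-killer'          # or from the journal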
@Frank - have you recently tried many devices on a small system (or could you
give it a try) and seen something similar?
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen Flight Forum 3000 5657 EW Eindhoven
>
> -----Original Message-----
> From: Linux on 390 Port <[email protected]> On Behalf Of
> Christian Ehrhardt
> Sent: Thursday, November 07, 2019 4:07 PM
> To: [email protected]
> Subject: Re: Ubuntu 18-04 unable to add DASD
>
> On Thu, Nov 7, 2019 at 3:57 PM van Sleeuwen, Berry
> <[email protected]> wrote:
> >
> > Hi All,
> >
> > I have installed an Ubuntu 19-04 but as it turns out the application
> > requires version 18-04. So I have installed a new 18-04 system.
> >
> > Next I tried to move the LVM from the 19-04 machine into the 18-04 machine.
> > But I can't get the disks online. The first few disks are fine but at a
> > certain point the server crashes. During "chccwdev -e" at some point all
> > memory is exhausted and I see a lot of CPU and IO. Eventually the
> > OOM-killer kicks in to kill various processes. The server is no longer
> > responding and only a reboot (force and xautolog) will get the server back
> > online.
>
> Hi Berry,
> I have myself enabled hundreds of DASDs without such issues, so it must be
> something special to your system/config that we have to find out.
> We all know that chccwdev shouldn't do any I/O other than to the entries in
> /sys, so I'm wondering what goes on.
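For context, a sketch of what that means: setting the device online via chccwdev
boils down to a single sysfs write, which is presumably where the kernel then
allocates the blk-mq/dasd structures (0.0.021b is the bus ID from the log above):

    # roughly what 'chccwdev -e 0.0.021b' does under the hood
    echo 1 > /sys/bus/ccw/devices/0.0.021b/online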
>
> You said you have a lot of CPU+IO and then the OOM-killer kicks in.
> Can you outline
> a) which processes spin when being CPU intensive - gather with e.g.
> top in batch mode
> b) which processes drive that I/O - gather with iotop
> (btw - do you see that I/O in VM or in Linux)
> c) when the OOM killer kicks in (or if you can before) which processes
> consume huge chunks of memory?
> If you have nothing else, at least the OOM report that dmesg holds should
> have a list.
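For a) and b) above, a minimal sketch of how that data could be gathered in the
background while the disks are being enabled (assuming procps top and iotop are
available):

    top -b -d 5 -n 60 > top.log &           # batch mode, 60 samples at 5-second intervals
    iotop -b -o -d 5 -n 60 > iotop.log &    # only processes actually doing I/O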
>
> From there we can start to gather ideas about what it might be.
>
> Kind Regards,
> Christian
>
--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
This e-mail and the documents attached are confidential and intended solely for
the addressee; it may also be privileged. If you receive this e-mail in error,
please notify the sender immediately and destroy it. As its integrity cannot be
secured on the Internet, Atos’ liability cannot be triggered for the message
content. Although the sender endeavours to maintain a computer virus-free
network, the sender does not warrant that this transmission is virus-free and
will not be liable for any damages resulting from any virus transmitted. On all
offers and agreements under which Atos Nederland B.V. supplies goods and/or
services of whatever nature, the Terms of Delivery from Atos Nederland B.V.
exclusively apply. The Terms of Delivery shall be promptly submitted to you on
your request.
----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390