Re: Pervasive disk encryption questions

2020-01-22 Thread Timothy Sipples
Reinhard Buendgen wrote:
>As for the recommendation, I am not sure where it is written. But I
>remember that there was a time when IBM would only sell at least two to
>enforce/encourage redundancy. But I am not sure whether this is still
>true for small systems.

I believe it's possible to order every IBM Z and IBM LinuxONE machine with
even a single Crypto Express feature. The configuration tool will warn
against it, but it's possible.

>Anyway one reason to have redundancy within your system is the support
>of non-disruptive service to your adapters. I guess planned maintenance
>is an event that is more frequent than actual unplanned failures.

Sure, and that all broadly makes sense, which is why IBM warns that a
single feature is not generally recommended. (I can think of a couple
exceptions, which is probably why IBM allows such orders to my knowledge.)
But it's a very separate question whether it makes sense to configure two
domains per Linux guest. Linux guests can bounce up and down all the time,
planned or unplanned, and you must plan for that reality and deal with it
already, especially in a production environment.

>But again if your HA failover solution is really fast, you can trigger a
>planned failover ... well, that adds to the management bill and you will
>observe some outage that is certainly longer than the retry the kernel
>performs within the system...

Right, but you've already got to prepare for that and do that for myriad
reasons, "all the time."

>once a file system is mounted on a PAES encrypted dm-crypt volume you no
>longer need the CryptoExpress adapter as long as your Linux system runs
>in that guest. Protected key dm-crypt only needs the CryptoExpress
>adapters when the dm-crypt volume is opened (which must happen
>before the mount step). For the dm-crypt open operation with the PAES
>cipher a CCA secure key is provided to the kernel and the kernel
>transforms this secure key (with the help of the Crypto Express adapter)
>into a protected key. Once dm-crypt knows the protected key, it no
>longer needs the secure key or the crypto adapter; it uses the protected
>key instead. This property is also nice if you want to change the master
>keys of your adapter. If you can do that during a period where you do
>not need to open a dm-crypt device, it will work concurrently to using
>your volumes.

That's great news. So, to summarize, a whole CCA domain can go offline for
whatever reason(s), and the Linux guest that depends on that CCA domain for
dm-crypt/LUKS2 will keep chugging along as long as its file systems are
mounted (and as long as it doesn't need some other vital-to-the-guest
service from the CCA domain). Then that Linux guest will be able to mount
additional encrypted volumes when the CCA domain comes back online and is
otherwise suitably configured. In other words, with reasonable assumptions,
a temporary CCA domain outage is nondisruptive to its Linux guest. That's
awesome!

Anyway, "Spend your CCA domains wisely" if you think it'll be a
constraining number, but I think there's a good argument that one CCA
domain per Linux guest can be a perfectly reasonable, viable, production
configuration.


Timothy Sipples
IT Architect Executive, Digital Asset & Other Industry Solutions, IBM Z &
LinuxONE


E-Mail: sipp...@sg.ibm.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-22 Thread Reinhard Buendgen

Hi,

As for error indications: adapter domain failures lead to the AP queue
being set to the offline state in the kernel, a state that can be
displayed with lszcrypt.


Further, depending on the type of error, an error will be logged in the
syslog. More information on errors can be found in
/sys/kernel/debug/s390dbf/zcrypt.
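
For example, a quick way to check (exact output columns and debug view
names may differ by distribution and kernel level -- treat this as a
sketch):

lszcrypt -V                            # list cards/queues with their online/offline state
ls /sys/kernel/debug/s390dbf/zcrypt/   # debug views available for the zcrypt driver
dmesg | grep -i zcrypt                 # zcrypt messages in the kernel log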


I looked up p. 6 of the Redbook referenced below. With horror I
discovered the statement you referred to. Whoever has listened to one of my
Pervasive Encryption presentations knows that I say exactly the opposite.
We at IBM make a serious effort to make security as digestible as
possible, but there won't be a single secure-it-all button -- well, with
the exception of the off button.


-Reinhard

On 22.01.20 01:32, Marcy Cortes wrote:

This brings up another set of questions from me :)

Under the assumption that hardware eventually fails and I could lose a card...

If there's two on a guest I assume things seamlessly continue on if one card 
fails?  Do I get messages on Linux, VM, or the HW if that should happen?

If there's only one and that card fails, does the file system get unmounted 
and/or throw errors?  Or does it continue on and just have issues at next 
reboot?

Is there any way to test card failure?

Yes, we have plenty of HA in many forms (tsamp, db2 hadr, external load 
balancers, multiple cecs, multiple servers, multiple data centers, gpfs, etc) 
and they are complex with different recovery times and data loss as you mention.

I'm still in the exploration phase so I can't answer how many are needed.  I'm trying to
tell mgmt. what we can do with what we have, what it will mean to grow it, and what value
it provides.   I'm afraid that there is some belief that we can "just do all of
it".   And what real value is there when the only group this buys protection from is
our z storage admins (we already have hw-level encryption to protect devices that leave
the datacenter).  Slick marketing presentations abound  :)

 From page 6 of this redpiece here 
http://www.redbooks.ibm.com/redpapers/pdfs/redp5464.pdf
"IBM Z makes it possible, for the first time, for organizations to pervasively 
encrypt data associated with an entire application, cloud service, or database in flight 
or at rest with one click."

Still looking for that one click button!
Marcy

-Original Message-
From: Linux on 390 Port  On Behalf Of Reinhard Buendgen
Sent: Tuesday, January 21, 2020 12:55 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] Pervasive disk encryption questions

Tim,

I fully agree. Yet the Z platform is designed for RAS where
the "R"eliability translates to redundancy of the available resources,
either within the system for built-in resources or as a configuration
option for external resources. The number 680 just reflects the
recommendation to achieve crypto redundancy per configuration (once
configured properly the Linux kernel will do the rest).

Whether that form of redundancy is the best form in a specific customer
environment is up to the customer.

As for the level of redundancy (device redundancy, HA cluster, or DR
cluster), it is the customer's choice to decide the kind of penalty (ms,
secs, mins) he or she is willing to accept in case of the failure of
a single resource. Also note that for certain workloads (workloads
managing a shared state, e.g. R/W databases), HA clusters may be
pretty complex and impact performance.

-Reinhard

On 21.01.20 08:59, Timothy Sipples wrote:

I'd like to comment on the 680 number for a moment. I don't think 680 is
the correct number of Linux guests that can use protected key
dm-crypt/LUKS2 encrypted volumes. I'd like to argue the case for why the
current maximum number is 1,360 guests per machine that can use this
particular feature. (It's a security feature that doesn't exist on any
other platform, we should note, so it's either 680 or 1,360 more Linux
guests than any other machine.)

The number 680 is derived by taking the current maximum number of physical
Crypto Express features per machine (16), configuring them all in CCA mode,
multiplying by the current maximum number of domains per feature (85)(*),
then dividing in half, with the idea being that each Linux guest would
benefit from the services of two CCA domains spread across two physical
Crypto Express features.

I think this last assumption is fairly arbitrary. A single Linux guest is
one kernel running within only one instance of the hypervisor (which may or
may not be nested). It's a singleton, inherently. In a production
environment you'd presumably have something more than singleton Linux
guests running particular workloads, at least if they're important
workloads. You pick up redundancy there. If a particular Linux guest is
offline for whatever reason, there's another handling the workload (or
ready to handle it), with its own Crypto Express domain.

You certainly could decide to add Crypto Express redundancy on a per guest
basis in addition to whole Linux guest 

Re: Pervasive disk encryption questions

2020-01-22 Thread R. J. Moore

Marcy, in answer to your question on error messages from VM:

it depends on whether the Linux guest is APVIRT or APDED.

With APDED guests, VM plays a minimal role - basically a configuration 
role that assigns a subset of its crypto resources to the guest. 
Thereafter the guest has direct access to those assigned h/w resources. 
The APDED guest's AP numbers and Domain numbers are precisely the same 
as those assigned to the zVM LPAR, except of course the guest sees (and 
is authorized only to see) a subset. Error reporting will be largely in 
the hands of Linux.
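
For illustration, the two flavors are selected with the CRYPTO statement
in the guest's user directory entry, roughly as follows (syntax quoted
from memory -- check the CP Planning and Administration book for the
authoritative form; AP and domain numbers are just placeholders):

CRYPTO DOMAIN 5 AP 1

dedicates AP 1, domain 5 to the guest (APDED), whereas

CRYPTO APVIRTUAL

gives the guest a virtualized crypto resource backed by the shared pool
described below (APVIRT).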



With APVIRT guests we consign a group of VM's crypto resources to a 
shared pool. VM manages that pool in the following ways assuming the 
Dynamic Crypto APAR (VM66266) is installed:


1) it directs APVIRT guest crypto requests to a member of the pool. Each 
guest thinks it has AP 01, Domain 01. This is in fact a simulated 
(virtualized) crypto resource.


2) it directs the response from a member of the pool to the originating
guest. By the way there's no chance of cross-contamination of one APVIRT
guest's crypto responses with another: each request is uniquely tagged
to the originating guest and the tagging is carried forward by the h/w
into the associated response.


3) It redirects requests sent to failed crypto resources to working 
resources without intervention by the guest.


4) It monitors for troublesome messages that seem to cause repeated
errors on being continually redirected, and fails the request if the
message is redirected more than 10 times.


5) If all resources in the shared pool are temporarily unavailable (busy 
state on the query command) then VM will warn the operator. However, VM 
will forward the request automatically as soon as a resource in the shared
pool becomes available.


6) If all resources in the shared pool become permanently unavailable
(checkstop, configured off, unassigned) then we warn the operator and 
kill off the messages with simulated h/w failure errors.



In cases 4-6, there will be messages issued by VM's control program to 
the operator. We maintain counts of similar errors and report those 
counts in the messages. But, so as not to flood the console, we throttle
messages triggered by the same resource, guest, or request to one
every two minutes. There were a number of new messages created with
VM66266 to address the APVIRT RAS enhancements.




The bottom line is you'll be more dependent on Linux for crypto errors 
with APDED guests and more dependent on VM with APVIRT guests.


- Richard (zVM crypto/CP Dev)


On 22/01/2020 00:32, Marcy Cortes wrote:

This brings up another set of questions from me :)

Under the assumption that hardware eventually fails and I could lose a card...

If there's two on a guest I assume things seamlessly continue on if one card 
fails?  Do I get messages on Linux, VM, or the HW if that should happen?

If there's only one and that card fails, does the file system get unmounted 
and/or throw errors?  Or does it continue on and just have issues at next 
reboot?

Is there any way to test card failure?

Yes, we have plenty of HA in many forms (tsamp, db2 hadr, external load 
balancers, multiple cecs, multiple servers, multiple data centers, gpfs, etc) 
and they are complex with different recovery times and data loss as you mention.

I'm still in the exploration phase so I can't answer how many are needed.  I'm trying to
tell mgmt. what we can do with what we have, what it will mean to grow it, and what value
it provides.   I'm afraid that there is some belief that we can "just do all of
it".   And what real value is there when the only group this buys protection from is
our z storage admins (we already have hw-level encryption to protect devices that leave
the datacenter).  Slick marketing presentations abound  :)

 From page 6 of this redpiece here 
http://www.redbooks.ibm.com/redpapers/pdfs/redp5464.pdf
"IBM Z makes it possible, for the first time, for organizations to pervasively 
encrypt data associated with an entire application, cloud service, or database in flight 
or at rest with one click."

Still looking for that one click button!
Marcy

-Original Message-
From: Linux on 390 Port  On Behalf Of Reinhard Buendgen
Sent: Tuesday, January 21, 2020 12:55 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] Pervasive disk encryption questions

Tim,

I fully agree. Yet the Z platform is designed for RAS where
the "R"eliability translates to redundancy of the available resources,
either within the system for built-in resources or as a configuration
option for external resources. The number 680 just reflects the
recommendation to achieve crypto redundancy per configuration (once
configured properly the Linux kernel will do the rest).

Whether that form of redundancy is the best form in a specific customer
environment is up to the customer.

As for the level of redundancy (device redundancy, HA cluster, or DR
cluster), it is the customer's choice to decide 

Re: Pervasive disk encryption questions

2020-01-22 Thread Reinhard Buendgen
As for the recommendation, I am not sure where it is written. But I
remember that there was a time when IBM would only sell at least two to
enforce/encourage redundancy. But I am not sure whether this is still
true for small systems. Anyway one reason to have redundancy within your
system is the support of non-disruptive service to your adapters. I
guess planned maintenance is an event that is more frequent than actual
unplanned failures.


But again if your HA failover solution is really fast, you can trigger a
planned failover ... well, that adds to the management bill and you will
observe some outage that is certainly longer than the retry the kernel
performs within the system...


> If there's only one and that card fails, does the file system get unmounted


Once a file system is mounted on a PAES-encrypted dm-crypt volume you no
longer need the Crypto Express adapter as long as your Linux system runs
in that guest. Protected key dm-crypt only needs the Crypto Express
adapters when the dm-crypt volume is opened (which must happen
before the mount step). For the dm-crypt open operation with the PAES
cipher a CCA secure key is provided to the kernel and the kernel
transforms this secure key (with the help of the Crypto Express adapter)
into a protected key. Once dm-crypt knows the protected key, it no
longer needs the secure key or the crypto adapter; it uses the protected
key instead. This property is also nice if you want to change the master
keys of your adapter. If you can do that during a period where you do
not need to open a dm-crypt device, it will work concurrently to using
your volumes.
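
To make that sequence concrete, the flow looks roughly like this (option
names are quoted from memory and differ between s390-tools and cryptsetup
versions, and the device name is a placeholder -- take it as a sketch,
not as the reference procedure):

# generate a CCA secure AES key for XTS -- this step needs a CCA adapter domain
zkey generate secure_xts.key --xts --keybits 256

# format and open the volume with the PAES cipher; the open step is where
# the secure key is transformed into a protected key (adapter needed)
cryptsetup luksFormat --type luks2 --cipher paes-xts-plain64 \
    --master-key-file secure_xts.key --key-size 1024 /dev/dasdc1
cryptsetup open /dev/dasdc1 enc_vol

# from here on the adapter is no longer needed for this volume
mount /dev/mapper/enc_vol /mnt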


So you want the crypto adapters to be available when you open the
volumes (typically at system start), and here it comes in handy if there
is a backup adapter in case your primary does not work for some reason,
because otherwise you cannot get to mounting your file systems.


In Linux too, you can set crypto adapter domains offline (see man
chzcrypt for details). Note that this just tells the kernel to no longer
use the adapter or adapter domain. It does not have any effect on the
crypto HW or FW. As for looking at the state of your crypto adapters
(domains), lszcrypt is your friend.
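
For example (device-id format as shown by lszcrypt; see man chzcrypt for
the exact syntax on your level of s390-tools):

lszcrypt                  # show adapters/domains and their online state
chzcrypt -d 01.0005       # tell the kernel to stop using APQN 01.0005
chzcrypt -e 01.0005       # let the kernel use it again

This is a convenient way to rehearse the "domain gone" case from within
the guest without touching the HW, FW or the z/VM configuration.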


-Reinhard

On 22.01.20 08:12, Timothy Sipples wrote:

Reinhard Buendgen wrote:

The number 680 just reflects the recommendation to achieve
crypto redundancy per configuration (once configured properly
the Linux kernel will do the rest).

Where is that recommendation coming from? Is there any nuance to it, and
does it still make sense?


As for the level of redundancy (device redundancy, HA cluster, or DR
cluster), it is the customer's choice to decide the kind of penalty (ms,
secs, mins) he or she is willing to accept in case of the failure of
a single resource. Also note that for certain workloads (workloads
managing a shared state, e.g. R/W databases), HA clusters may be
pretty complex and impact performance.

Sure, but "What else is new?" A single Linux guest has a single kernel, and
it's a single point of failure -- a relatively big one. Metaphorically
speaking, having a second bucket positioned at the same well doesn't help
me water the plants any better when I have no water, and I must already
plan for having no water.

Moreover, if you are incurring these various overheads, penalties, and
complexities already -- as you typically would be in a production
deployment, unavoidably -- does it still make sense to double the
consumption rate of a somewhat finite resource (CCA domains), particularly
if it's constraining, and end up with a *quad* (a pair of Linux guests,
clustered, sitting atop 4 CCA domains)? And if a "quad" makes sense there,
does it make equal sense to double every component everywhere in the
delivery of application services? For example, if you're running a pair of
clustered Java application servers, shouldn't you actually have *four* of
them (two running in each Linux guest)? Then, if one Java application
server instance fails, you still have both Linux guests/kernels providing
service. That's fundamentally the same redundancy idea, right? (And we're
just getting warmed up. ;))

Marcy Cortes wrote:

If there's only one and that card fails, does the file system get unmounted
and/or throw errors?  Or does it continue on and just have issues at next
reboot?

That's a really great question, too. It might not be as dire an event as
one might ordinarily think with protected key operations (only, and fully
instantiated), but I'll let Reinhard chime in.


Is there any way to test card failure?

How about just issuing a VARY OFFLINE CRYPTO command in z/VM? In a test
z/VM LPAR, of course! Here's the syntax:

Q CRYPTO DOMAIN

to find the list of Crypto Express adapters and their domains. You should
see something like "CEX6C" or "CEX7C" for the Crypto Express features that
are configured in CCA mode. So let's suppose that "AP 013" is the Crypto
Express adapter that you want to vary offline. This command should do that:

VARY OFFLINE CRYPTO AP 13

Re: Pervasive disk encryption questions

2020-01-21 Thread Timothy Sipples
Reinhard Buendgen wrote:
>The number 680 just reflects the recommendation to achieve
>crypto redundancy per configuration (once configured properly
>the Linux kernel will do the rest).

Where is that recommendation coming from? Is there any nuance to it, and
does it still make sense?

>As for the level of redundancy (device redundancy, HA cluster, or DR
>cluster), it is the customer's choice to decide the kind of penalty (ms,
>secs, mins) he or she is willing to accept in case of the failure of
>a single resource. Also note that for certain workloads (workloads
>managing a shared state, e.g. R/W databases), HA clusters may be
>pretty complex and impact performance.

Sure, but "What else is new?" A single Linux guest has a single kernel, and
it's a single point of failure -- a relatively big one. Metaphorically
speaking, having a second bucket positioned at the same well doesn't help
me water the plants any better when I have no water, and I must already
plan for having no water.

Moreover, if you are incurring these various overheads, penalties, and
complexities already -- as you typically would be in a production
deployment, unavoidably -- does it still make sense to double the
consumption rate of a somewhat finite resource (CCA domains), particularly
if it's constraining, and end up with a *quad* (a pair of Linux guests,
clustered, sitting atop 4 CCA domains)? And if a "quad" makes sense there,
does it make equal sense to double every component everywhere in the
delivery of application services? For example, if you're running a pair of
clustered Java application servers, shouldn't you actually have *four* of
them (two running in each Linux guest)? Then, if one Java application
server instance fails, you still have both Linux guests/kernels providing
service. That's fundamentally the same redundancy idea, right? (And we're
just getting warmed up. ;))

Marcy Cortes wrote:
>If there's only one and that card fails, does the file system get unmounted
>and/or throw errors?  Or does it continue on and just have issues at next
>reboot?

That's a really great question, too. It might not be as dire an event as
one might ordinarily think with protected key operations (only, and fully
instantiated), but I'll let Reinhard chime in.

>Is there any way to test card failure?

How about just issuing a VARY OFFLINE CRYPTO command in z/VM? In a test
z/VM LPAR, of course! Here's the syntax:

Q CRYPTO DOMAIN

to find the list of Crypto Express adapters and their domains. You should
see something like "CEX6C" or "CEX7C" for the Crypto Express features that
are configured in CCA mode. So let's suppose that "AP 013" is the Crypto
Express adapter that you want to vary offline. This command should do that:

VARY OFFLINE CRYPTO AP 13
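
And something like this should bring it back online afterwards (again,
in a test z/VM LPAR, and double-check against the CP command reference):

VARY ONLINE CRYPTO AP 13
Q CRYPTO DOMAIN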


Timothy Sipples
IT Architect Executive, Digital Asset & Other Industry Solutions, IBM Z &
LinuxONE


E-Mail: sipp...@sg.ibm.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-21 Thread Marcy Cortes
This brings up another set of questions from me :)

Under the assumption that hardware eventually fails and I could lose a card... 

If there's two on a guest I assume things seamlessly continue on if one card 
fails?  Do I get messages on Linux, VM, or the HW if that should happen? 

If there's only one and that card fails, does the file system get unmounted 
and/or throw errors?  Or does it continue on and just have issues at next 
reboot? 

Is there any way to test card failure? 

Yes, we have plenty of HA in many forms (tsamp, db2 hadr, external load 
balancers, multiple cecs, multiple servers, multiple data centers, gpfs, etc) 
and they are complex with different recovery times and data loss as you 
mention.   

I'm still in the exploration phase so I can't answer how many are needed.  I'm
trying to tell mgmt. what we can do with what we have, what it will mean to
grow it, and what value it provides.   I'm afraid that there is some belief
that we can "just do all of it".   And what real value is there when the only
group this buys protection from is our z storage admins (we already have
hw-level encryption to protect devices that leave the datacenter).  Slick
marketing presentations abound  :)

From page 6 of this redpiece here 
http://www.redbooks.ibm.com/redpapers/pdfs/redp5464.pdf
"IBM Z makes it possible, for the first time, for organizations to pervasively 
encrypt data associated with an entire application, cloud service, or database 
in flight or at rest with one click. "

Still looking for that one click button!
Marcy

-Original Message-
From: Linux on 390 Port  On Behalf Of Reinhard Buendgen
Sent: Tuesday, January 21, 2020 12:55 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] Pervasive disk encryption questions

Tim,

I fully agree. Yet the Z platform is designed for RAS where
the "R"eliability translates to redundancy of the available resources,
either within the system for built-in resources or as a configuration
option for external resources. The number 680 just reflects the
recommendation to achieve crypto redundancy per configuration (once
configured properly the Linux kernel will do the rest).

Whether that form of redundancy is the best form in a specific customer
environment is up to the customer.

As for the level of redundancy (device redundancy, HA cluster, or DR
cluster), it is the customer's choice to decide the kind of penalty (ms,
secs, mins) he or she is willing to accept in case of the failure of
a single resource. Also note that for certain workloads (workloads
managing a shared state, e.g. R/W databases), HA clusters may be
pretty complex and impact performance.

-Reinhard

On 21.01.20 08:59, Timothy Sipples wrote:
> I'd like to comment on the 680 number for a moment. I don't think 680 is
> the correct number of Linux guests that can use protected key
> dm-crypt/LUKS2 encrypted volumes. I'd like to argue the case for why the
> current maximum number is 1,360 guests per machine that can use this
> particular feature. (It's a security feature that doesn't exist on any
> other platform, we should note, so it's either 680 or 1,360 more Linux
> guests than any other machine.)
>
> The number 680 is derived by taking the current maximum number of physical
> Crypto Express features per machine (16), configuring them all in CCA mode,
> multiplying by the current maximum number of domains per feature (85)(*),
> then dividing in half, with the idea being that each Linux guest would
> benefit from the services of two CCA domains spread across two physical
> Crypto Express features.
>
> I think this last assumption is fairly arbitrary. A single Linux guest is
> one kernel running within only one instance of the hypervisor (which may or
> may not be nested). It's a singleton, inherently. In a production
> environment you'd presumably have something more than singleton Linux
> guests running particular workloads, at least if they're important
> workloads. You pick up redundancy there. If a particular Linux guest is
> offline for whatever reason, there's another handling the workload (or
> ready to handle it), with its own Crypto Express domain.
>
> You certainly could decide to add Crypto Express redundancy on a per guest
> basis in addition to whole Linux guest redundancy, but if you're going to
> measure the outer bound maximum number I don't think you ought to assume
> "redundancy squared." It seems rather arbitrary to me that that's where you
> draw that particular line.
>
> There is no intrinsic limit to the number of Linux guests using
> dm-crypt/LUKS2 encrypted volumes with clear keys.
>
> You can also decide on a guest-by-guest basis whether to double up on
> Crypto Express CCA domains or not, which would mean a current upper bound
> limit somewhere between 680 and 1,360 Linux guests using CCA domains.
> And/or you can decide how many Crypto Express features you want to
> configure in another mode, notably EP11. If for example you configure two
> 

Re: Pervasive disk encryption questions

2020-01-21 Thread Reinhard Buendgen

Tim,

I fully agree. Yet the Z platform is designed for RAS where
the "R"eliability translates to redundancy of the available resources,
either within the system for built-in resources or as a configuration
option for external resources. The number 680 just reflects the
recommendation to achieve crypto redundancy per configuration (once
configured properly the Linux kernel will do the rest).


Whether that form of redundancy is the best form in a specific customer
environment is up to the customer.


As for the level of redundancy (device redundancy, HA cluster, or DR
cluster), it is the customer's choice to decide the kind of penalty (ms,
secs, mins) he or she is willing to accept in case of the failure of
a single resource. Also note that for certain workloads (workloads
managing a shared state, e.g. R/W databases), HA clusters may be
pretty complex and impact performance.


-Reinhard

On 21.01.20 08:59, Timothy Sipples wrote:

I'd like to comment on the 680 number for a moment. I don't think 680 is
the correct number of Linux guests that can use protected key
dm-crypt/LUKS2 encrypted volumes. I'd like to argue the case for why the
current maximum number is 1,360 guests per machine that can use this
particular feature. (It's a security feature that doesn't exist on any
other platform, we should note, so it's either 680 or 1,360 more Linux
guests than any other machine.)

The number 680 is derived by taking the current maximum number of physical
Crypto Express features per machine (16), configuring them all in CCA mode,
multiplying by the current maximum number of domains per feature (85)(*),
then dividing in half, with the idea being that each Linux guest would
benefit from the services of two CCA domains spread across two physical
Crypto Express features.

I think this last assumption is fairly arbitrary. A single Linux guest is
one kernel running within only one instance of the hypervisor (which may or
may not be nested). It's a singleton, inherently. In a production
environment you'd presumably have something more than singleton Linux
guests running particular workloads, at least if they're important
workloads. You pick up redundancy there. If a particular Linux guest is
offline for whatever reason, there's another handling the workload (or
ready to handle it), with its own Crypto Express domain.

You certainly could decide to add Crypto Express redundancy on a per guest
basis in addition to whole Linux guest redundancy, but if you're going to
measure the outer bound maximum number I don't think you ought to assume
"redundancy squared." It seems rather arbitrary to me that that's where you
draw that particular line.

There is no intrinsic limit to the number of Linux guests using
dm-crypt/LUKS2 encrypted volumes with clear keys.

You can also decide on a guest-by-guest basis whether to double up on
Crypto Express CCA domains or not, which would mean a current upper bound
limit somewhere between 680 and 1,360 Linux guests using CCA domains.
And/or you can decide how many Crypto Express features you want to
configure in another mode, notably EP11. If for example you configure two
Crypto Express features in EP11 mode, then there are up to 14 available for
CCA mode, supporting up to 1,190 Linux guests using protected key
dm-crypt/LUKS2 (up to 595 if you decide to double them all up, or somewhere
in between if you double up some of them).

Anyway, this is an interesting discussion! If you're pushing these limits
or at least forecast you will, let IBM know, officially.

(*) This particular number is 40 on IBM z14 ZR1, LinuxONE Rockhopper II,
and their predecessor models. Adjust the rest of the math accordingly for
these machine models.


Timothy Sipples
IT Architect Executive, Digital Asset & Other Industry Solutions, IBM Z &
LinuxONE


E-Mail: sipp...@sg.ibm.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-21 Thread Timothy Sipples
I'd like to comment on the 680 number for a moment. I don't think 680 is
the correct number of Linux guests that can use protected key
dm-crypt/LUKS2 encrypted volumes. I'd like to argue the case for why the
current maximum number is 1,360 guests per machine that can use this
particular feature. (It's a security feature that doesn't exist on any
other platform, we should note, so it's either 680 or 1,360 more Linux
guests than any other machine.)

The number 680 is derived by taking the current maximum number of physical
Crypto Express features per machine (16), configuring them all in CCA mode,
multiplying by the current maximum number of domains per feature (85)(*),
then dividing in half, with the idea being that each Linux guest would
benefit from the services of two CCA domains spread across two physical
Crypto Express features.

I think this last assumption is fairly arbitrary. A single Linux guest is
one kernel running within only one instance of the hypervisor (which may or
may not be nested). It's a singleton, inherently. In a production
environment you'd presumably have something more than singleton Linux
guests running particular workloads, at least if they're important
workloads. You pick up redundancy there. If a particular Linux guest is
offline for whatever reason, there's another handling the workload (or
ready to handle it), with its own Crypto Express domain.

You certainly could decide to add Crypto Express redundancy on a per guest
basis in addition to whole Linux guest redundancy, but if you're going to
measure the outer bound maximum number I don't think you ought to assume
"redundancy squared." It seems rather arbitrary to me that that's where you
draw that particular line.

There is no intrinsic limit to the number of Linux guests using
dm-crypt/LUKS2 encrypted volumes with clear keys.

You can also decide on a guest-by-guest basis whether to double up on
Crypto Express CCA domains or not, which would mean a current upper bound
limit somewhere between 680 and 1,360 Linux guests using CCA domains.
And/or you can decide how many Crypto Express features you want to
configure in another mode, notably EP11. If for example you configure two
Crypto Express features in EP11 mode, then there are up to 14 available for
CCA mode, supporting up to 1,190 Linux guests using protected key
dm-crypt/LUKS2 (up to 595 if you decide to double them all up, or somewhere
in between if you double up some of them).
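
For reference, the arithmetic behind these figures:

16 features x 85 domains = 1,360 CCA domains (up to 1,360 guests at one domain each)
1,360 domains / 2 per guest = 680 guests
14 features x 85 domains = 1,190 CCA domains (with 2 of the 16 features in EP11 mode)
1,190 domains / 2 per guest = 595 guests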

Anyway, this is an interesting discussion! If you're pushing these limits
or at least forecast you will, let IBM know, officially.

(*) This particular number is 40 on IBM z14 ZR1, LinuxONE Rockhopper II,
and their predecessor models. Adjust the rest of the math accordingly for
these machine models.


Timothy Sipples
IT Architect Executive, Digital Asset & Other Industry Solutions, IBM Z &
LinuxONE


E-Mail: sipp...@sg.ibm.com

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-20 Thread Reinhard Buendgen

Marcy,

Within one CEC you cannot share an APQN (a specific domain in a
specific adapter) between two active LPARs or guests, regardless of the
location of the two guests.


Are 680 guests too few? How many would you like to have?

As for letting the hypervisor do the disk encryption, this is easily 
possible for KVM today.


What kind of disks are you using in your z/VM guests: dedicated disks
(DASD or SCSI) or minidisks?


-Reinhard

On 18.01.20 23:15, Marcy Cortes wrote:

I was talking about the CCA rpm package needed on Linux


Sent with BlackBerry Work
(www.blackberry.com)


From: Alan Altmark <alan_altm...@us.ibm.com>
Date: Saturday, Jan 18, 2020, 2:01 AM
To: LINUX-390@VM.MARIST.EDU <LINUX-390@VM.MARIST.EDU>
Subject: Re: [LINUX-390] Pervasive disk encryption questions


To be clear, a CCA is a crypto in Coprocessor mode. It is the only mode
that allows Linux or z/OS to load master keys without TKE, so keeping it
out of the picture isn’t going to work if you want to use ICSF to load
keys.

A (crypto, domain) pair can be online to only one LPAR at a time, but in
any case you cannot relocate a guest with APDED domains.

Regards,
Alan Altmark
IBM


On Jan 17, 2020, at 8:00 PM, Marcy Cortes 

wrote:


One more question I have and it's probably more VM oriented.

Say we decide z/OS ICSF loads all the master keys for us (keeping CCA out
of the pic).  Can a guest on VM1 use the same card/domain as a guest on
VM2 in another lpar provided they use the same MK?  Trying to figure out
HW requirements for fitting this into a GDPS 4 site where a guest can be
instantiated in lots of places (8 different lpars currently).

And those in the same cluster I'd still like to be able to LGR them.

PS.  Has IBM considered that maybe this data at rest encryption is better
handled at the VM layer?  Current HW basically limits you to 760 guests
using it on z15 if you give 2 devices to each guest for redundancy, right?
(85 * 16 = 1360 / 2 ).

Marcy



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or

visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-18 Thread Marcy Cortes
I was talking about the CCA rpm package needed on Linux


Sent with BlackBerry Work
(www.blackberry.com)


From: Alan Altmark <alan_altm...@us.ibm.com>
Date: Saturday, Jan 18, 2020, 2:01 AM
To: LINUX-390@VM.MARIST.EDU <LINUX-390@VM.MARIST.EDU>
Subject: Re: [LINUX-390] Pervasive disk encryption questions


To be clear, a CCA is a crypto in Coprocessor mode. It is the only mode
that allows Linux or z/OS to load master keys without TKE, so keeping it
out of the picture isn’t going to work if you want to use ICSF to load
keys.

A (crypto, domain) pair can be online to only one LPAR at a time, but in
any case you cannot relocate a guest with APDED domains.

Regards,
Alan Altmark
IBM

> On Jan 17, 2020, at 8:00 PM, Marcy Cortes 
wrote:
>
> 
> One more question I have and it's probably more VM oriented.
>
> Say we decide z/OS ICSF loads all the master keys for us (keeping CCA out
of the pic).  Can a guest on VM1 use the same card/domain as a guest on
VM2 in another lpar provided they use the same MK?  Trying to figure out
HW requirements for fitting this into a GDPS 4 site where a guest can be
instantiated in lots of places (8 different lpars currently).
>
> And those in the same cluster I'd still like to be able to LGR them.
>
> PS.  Has IBM considered that maybe this data at rest encryption is better
handled at the VM layer?  Current HW basically limits you to 760 guests
using it on z15 if you give 2 devices to each guest for redundancy, right?
(85 * 16 = 1360 / 2 ).
>
> Marcy
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
>
http://www2.marist.edu/htbin/wlvindex?LINUX-390

>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-18 Thread Alan Altmark

To be clear, a CCA is a crypto in Coprocessor mode. It is the only mode
that allows Linux or z/OS to load master keys without TKE, so keeping it
out of the picture isn’t going to work if you want to use ICSF to load
keys.

A (crypto, domain) pair can be online to only one LPAR at a time, but in
any case you cannot relocate a guest with APDED domains.

Regards,
Alan Altmark
IBM

> On Jan 17, 2020, at 8:00 PM, Marcy Cortes 
wrote:
>
> 
> One more question I have and it's probably more VM oriented.
>
> Say we decide z/OS ICSF loads all the master keys for us (keeping CCA out
of the pic).  Can a guest on VM1 use the same card/domain as a guest on
VM2 in another lpar provided they use the same MK?  Trying to figure out
HW requirements for fitting this into a GDPS 4 site where a guest can be
instantiated in lots of places (8 different lpars currently).
>
> And those in the same cluster I'd still like to be able to LGR them.
>
> PS.  Has IBM considered that maybe this data at rest encryption is better
handled at the VM layer?  Current HW basically limits you to 760 guests
using it on z15 if you give 2 devices to each guest for redundancy, right?
(85 * 16 = 1360 / 2 ).
>
> Marcy
>
>
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
>
http://www2.marist.edu/htbin/wlvindex?LINUX-390

>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-17 Thread Marcy Cortes
680 guests I mean - can't type!


-Original Message-
From: Linux on 390 Port  On Behalf Of Marcy Cortes
Sent: Friday, January 17, 2020 5:00 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] Pervasive disk encryption questions


One more question I have and it's probably more VM oriented.

Say we decide z/OS ICSF loads all the master keys for us (keeping CCA out of 
the pic).  Can a guest on VM1 use the same card/domain as a guest on VM2 in
another lpar provided they use the same MK?  Trying to figure out HW
requirements for fitting this into a GDPS 4 site where a guest can be 
instantiated in lots of places (8 different lpars currently).  

And those in the same cluster I'd still like to be able to LGR them.

PS.  Has IBM considered that maybe this data at rest encryption is better 
handled at the VM layer?  Current HW basically limits you to 760 guests using
it on z15 if you give 2 devices to each guest for redundancy, right?  (85 * 16 
= 1360 / 2 ).   

Marcy



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-17 Thread Marcy Cortes

One more question I have and it's probably more VM oriented.

Say we decide z/OS ICSF loads all the master keys for us (keeping CCA out of 
the pic).  Can a guest on VM1 use the same card/domain as a guest on VM2 in
another lpar provided they use the same MK?  Trying to figure out HW
requirements for fitting this into a GDPS 4 site where a guest can be 
instantiated in lots of places (8 different lpars currently).  

And those in the same cluster I'd still like to be able to LGR them.

PS.  Has IBM considered that maybe this data at rest encryption is better 
handled at the VM layer?  Current HW basically limits you to 760 guests using
it on z15 if you give 2 devices to each guest for redundancy, right?  (85 * 16 
= 1360 / 2 ).   

Marcy



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-17 Thread Reinhard Buendgen

Hi,

a few comments on what was in an earlier Mail by Alan:

to set a master key in an EP11 adapter you always need a TKE, even if
you want to do it via z/OS, in which case a TKE must be connected to a
z/OS image.


Unless a domain of an adapter has been configured by the TKE to be only
manageable using signed commands, you can use panel.exe to manage the
master keys of adapter domains whose adapter ID and usage +
control domain IDs are attached to the guest. On z/VM guests, attaching a
(usage) domain to a guest with APDED always implies also attaching the
control domain to the guest.

The CCA tool panel.exe is meant as a simple key admin tool with limited
functionality that works best when operating on the default adapter and
default domain. It is described here:

https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.wskc.doc/wskc_c_utilities.html


You can provide arguments to address specific adapters but be aware that
this may be tricky because the way the tools from the CCA host package
count adapters and the way the kernel identifies adapters may differ.

There is no easy way to define the domain to which a panel.exe command
shall be addressed. Some tricks are possible (e.g. setting the
environment variable CSU_DEFAULT_DOMAIN, or setting
/sys/bus/ap/ap_domain) but panel.exe cannot address a control domain
that is not equal to the usage domain.

In general, when you need to do master key management in more complex (>
single node) environments the TKE is the tool of choice. Besides being
more secure (access control via smart cards) and master key parts being
stored on smart cards, the smart cards with key parts allow you to
restore a master key (in case a crypto card got lost/zeroized/damaged).
In addition, the TKE can manage multiple crypto adapters connected to
multiple LPARs/guests, located on multiple CECs. TKE simplifies some
complex functions like distributing the same master key to a set of
adapter domains as needed for redundant/HA/DR set ups.


-Reinhard


(Sorry for the late replies, but I am having some trouble with my mail
account or mailer :-( )

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Pervasive disk encryption questions

2020-01-16 Thread Reinhard Buendgen

Hi,

a few comments on what was said below:

to set a master key in an EP11 adapter you always need a TKE, even if 
you want to do it via z/OS, in which case a TKE must be connected to a 
z/OS image.



Unless a domain of an adapter has been configured by the TKE to be only
manageable using signed commands, you can use panel.exe to manage the
master keys of adapter domains whose adapter ID and usage +
control domain IDs are attached to the guest. On z/VM guests, attaching a
(usage) domain to a guest with APDED always implies also attaching the
control domain to the guest.


The CCA tool panel.exe is meant as a simple key admin tool with limited 
functionality that works best when operating on the default adapter and 
default domain. It is described here:


https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.wskc.doc/wskc_c_utilities.html

You can provide arguments to address specific adapters but be aware that
this may be tricky because the way the tools from the CCA host package
count adapters and the way the kernel identifies adapters may differ.


There is no easy way to define the domain to which a panel.exe command
shall be addressed. Some tricks are possible (e.g. setting the
environment variable CSU_DEFAULT_DOMAIN, or setting
/sys/bus/ap/ap_domain) but I do not know whether panel.exe can actually
address a control domain that is not equal to the usage domain. I will
do some research to get you an answer on this.


In general, when you need to do master key management in more complex (> 
single node) environments the TKE is the tool of choice. Besides being 
more secure (access control via smart cards) and master key parts being 
stored on smart cards, the smart cards with key parts allow you to 
restore a master key (in case a crypto card got lost/zeroized/damaged).
In addition, the TKE can manage multiple crypto adapters connected to 
multiple LPARs/guests, located on multiple CECs. TKE simplifies some 
complex functions like distributing the same master key to a set of 
adapter domains as needed for redundant/HA/DR set ups.



-Reinhard

On 15.01.20 21:20, Alan Altmark wrote:

On Saturday, 01/11/2020 at 01:25 GMT, marcy cortes
 wrote:

First, my understanding of virtualizing crypto is that if any of the cards
are defined as accelerators then CRYPTO APVIRT in the directory will give
linux an accelerator.   If you want linux to have a coprocessor, you’d have
to dedicate one.  If you want a lot of servers to have coprocessors (more
than the HW cards to dedicate), you’d get rid of the accelerators and make
them all coprocessors.  Is my understanding correct?

(I'm gonna take another run at this, hopefully with something more
coherent than my prior post.)

Not quite, no.  With recent updates, CP gives you control over what is in
the APVIRT pool.  If you don't give CP any instructions, then CP will
choose.  The only restriction is that all cryptos in the APVIRT pool must
be from the same generation and must be either in accelerator or
coprocessor (CCA) mode.  EP11 can't be used for APVIRT.   Only CLEAR key
operations can be performed using APVIRT.

For a guest to use SECURE keys, you must use APDEDICATED.  If the guest
wants to convert a SECURE key to a PROTECTED key for use with CPACF (as
you would want for an encrypted file system or other symmetric
encryption), then the guest must have access to APDEDICATED.


And to do the AES master key load, it has generally been done from z/OS
here.   It looks like for my z/vm only boxes TKE is required, but I could
use the CCA package to generate some for a test only scenario.

In coprocessor (CCA) mode, the keys can be loaded by TKE or by the
panel.exe app in the Linux CCA package.  In EP11 mode, TKE is required
(unless you have z/OS laying around).

I recommend TKE for z/VM environments due to the ability of any guest with
access to the domain to be able to manage it by default.  With TKE, the
domain can be configured such that only TKE-signed requests can alter the
domain configuration.

(And to answer Rick's question, yes, a secure key is indeed encrypted by
the 256-bit master key using AES.)


If I do want to try that CCA key load on a non prod box, I’m thinking I
would have to dedicate all of the coprocessors to a Linux guest and create
them there.  Then undedicate and then any guest with an APVIRT would find
valid master keys and would then be able to "zkey generate" a secure key
for use in each disk.

Am I on the right track?

Reinhard will have to comment on this.  I don't know if panel.exe lets you
load keys into domains other than the one Linux is actively using.  If you
have multiple APs (you should), you will have to load the keys for each AP
the Linux guest will have access to.  Switching domains may require
unloading and reloading the device driver after detaching and attaching
the "next" domain to the guest.

My understanding is that using TKE ensures that the guest cannot alter 

Re: Pervasive disk encryption questions

2020-01-16 Thread Reinhard Buendgen

good catch! I'll tell our ID department to have this corrected.

-Reinhard

On 16.01.20 03:03, Marcy Cortes wrote:

Hi Ingo.   Looking at this page... If it's 85, why 00-5d in hex?   Isn't 5d = 93?

Marcy

On 1/13/20, 8:52 AM, "Linux on 390 Port on behalf of Ingo Adlung" 
 wrote:

 Hey Marcy,
 I'm not the crypto expert (Reinhard please jump in) but aren't we talking
 about crypto domain dedication? I.e. not dedicating complete cards ...
 don't know about z14/z15 but with z13 we supported up to 85 domains per
 LPAR per single adapter like described here:
 
 https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_c_crypto_virtual.html
 
 Best regards

 Ingo
 
 Linux on 390 Port  wrote on 13/01/2020 17:34:43:
 
 > From: Marcy Cortes 

 > To: LINUX-390@VM.MARIST.EDU
 > Date: 13/01/2020 17:35
 > Subject: [EXTERNAL] Re: [LINUX-390] Pervasive disk encryption questions
 > Sent by: Linux on 390 Port 
 >
 > Thanks!  Was hoping you'd respond.
 >
 > So essentially to do the disk encryption stuff documented here
 > https://www.ibm.com/support/knowledgecenter/en/linuxonibm/
 > com.ibm.linux.z.lxdc/lxdc_linuxonz.html
 > one has to dedicate to the guest.
 >
 > If I can put 16 cards on a z15, I'm essentially limited to 8 guests
 > per LPAR with the ability to do this.
 > (need redundancy so two per guest).Correct ?There's not a
 > way to dedicate, put master key on, then make it apvirt after that,
 correct?
 >
 > Marcy
 >
 >
 > -Original Message-
 > From: Linux on 390 Port  On Behalf Of
 > Reinhard Buendgen
 > Sent: Monday, January 13, 2020 7:19 AM
 > To: LINUX-390@VM.MARIST.EDU
 > Subject: Re: [LINUX-390] Pervasive disk encryption questions
 >
 > Hi,
 >
 > crypto adapter domains defined for z/VM guests with APVIRT are
 > restricted to perform clear key crypto operations (possibly including
 > random number generations). Regardless of whether the backing adapters are
 > in accelerator mode or in CCA mode (AP-virt does not support adapters in
 > EP11 mode).
 > And if there are multiple backing adapters of different modes z/VM gives
 > priority to accelerator mode when choosing the type of the shared
 > virtual adapter.
 >
 > When you want to use secure key crypto you must define your crypto
 > adapter domain in the guest as dedicated adapter (APDED for z/VM guests,
 > for KVM guests currently only dedicated adapter domains are supported).
 > Dedicated adapter domains can be of any type: accelerator, CCA or EP11.
 > Only the CCA and EP11 types provide support for secure key crypto.
 >
 > To set/manage the master key of a dedicated CCA adapter domain assigned
 > to a guest there are multiple options
 > — connect the TKE to the catcher.exe daemon (part of the CCA host
 > package)  running on the Linux system and use the TKE to manage the
 > master key of the adapter domain belonging to the Linux guest (option
 > recommended for production use)
 > — use the panel.exe tool (part of the CCA host package) on the Linux
 > guest to set/manage the master key of the adapter domain belonging to
 > the Linux guest (this option is not recommended for production use, due
 > to some security limitations -- I like this option )
 > — use a z/OS System on the same CEC (or other Linux System) that has an
 > appropriate control domain setting. Using the z/OS system can go via
 > ICSF functions (which I guess are similar in function and security to
 > what the panel.exe tool provides) or a TKE connected to the z/OS system.
 > — use another Linux system on the same CEC that has an appropriate
 > control domain setting and do the management either via panel.exe or TKE
 > (again TKE being recommended for production use).
 > There is no need for a special system to set master keys. Each system
 > can manage its own master keys. But if you choose to do so, say because
 > you want to use ICSF or panel.exe from a particularly secured system
 > then all you need is a system that has an arbitrary usage domain and
 > control domains configured to the domains you want to manage.
 > Unfortunately control domains cannot be freely configured for z/VM
 > guests. (z/VM sets the control domain to be equal to the usage domain).
 > So this option works only for LPARs and KVM guests. For z/VM guests you
 > may have to switch the adapter domains from the key management guest to
 > the actual working guests.
 >
 >
 > Reinhard
 >
 > --
 > For LINUX-390 subscribe / signoff / archive access instructions,
 > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
 visit
 > 

Re: Pervasive disk encryption questions

2020-01-15 Thread Marcy Cortes
Hi Ingo.   Looking at this page... If it's 85, why 00-5d in hex?   Isn't 5d = 93?

Marcy

On 1/13/20, 8:52 AM, "Linux on 390 Port on behalf of Ingo Adlung" 
 wrote:

Hey Marcy,
I'm not the crypto expert (Reinhard please jump in) but aren't we talking
about crypto domain dedication? I.e. not dedicating complete cards ...
don't know about z14/z15 but with z13 we supported up to 85 domains per
LPAR per single adapter like described here:


https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_c_crypto_virtual.html

Best regards
Ingo

Linux on 390 Port  wrote on 13/01/2020 17:34:43:

> [Marcy Cortes' and Reinhard Buendgen's earlier messages were quoted in
> full here; trimmed. See the originals later in this thread.]

Re: Pervasive disk encryption questions

2020-01-15 Thread Alan Altmark
On Saturday, 01/11/2020 at 01:25 GMT, marcy cortes  wrote:
> First, my understanding of virtualizing crypto is that if any of the cards are
> defined as accelerators then CRYPTO APVIRT in the directory will give linux an
> accelerator.   If you want linux to have a coprocessor, you’d have to dedicate
> one.    If you want a lot of servers to have coprocessors (more than the HW
> cards to dedicate), you’d get rid of the accelerators and make them all
> coprocessors.  Is my understanding correct?

(I'm gonna take another run at this, hopefully with something more 
coherent than my prior post.)

Not quite, no.  With recent updates, CP gives you control over what is in 
the APVIRT pool.  If you don't give CP any instructions, then CP will 
choose.  The only restriction is that all cryptos in the APVIRT pool must 
be from the same generation and must be either in accelerator or 
coprocessor (CCA) mode.  EP11 can't be used for APVIRT.   Only CLEAR key 
operations can be performed using APVIRT.
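
For what it's worth, on z/VM levels that support it, that choice can be
pinned down in SYSTEM CONFIG instead of being left to CP. A rough sketch
only, with made-up AP and domain numbers; check the statement against the
CP Planning and Administration book for your level:

    /* Reserve AP 2, domains 0 and 1, for the shared clear-key APVIRT pool */
    CRYPTO APVIRTUAL AP 2 DOMAIN 0 1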

For a guest to use SECURE keys, you must use APDEDICATED.  If the guest 
wants to convert a SECURE key to a PROTECTED key for use with CPACF (as 
you would want for an encrypted file system or other symmetric 
encryption), then the guest must have access to APDEDICATED.
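
To make that concrete, here is a minimal sketch of the Linux-side flow,
assuming the zkey and cryptsetup tools are installed, the guest has a
dedicated CCA domain online with a valid AES master key, and the key file
name and /dev/dasdc1 are placeholders rather than anything prescribed in
this thread:

    # Generate a CCA secure AES key pair for XTS; this step needs the
    # dedicated CryptoExpress CCA domain to be online.
    zkey generate secure_xtskey.bin --xts --keybits 256

    # Format the volume for protected-key dm-crypt (PAES); 1024 bits
    # covers the two 64-byte secure-key tokens used for XTS.
    cryptsetup luksFormat --type luks2 --cipher paes-xts-plain64 \
        --master-key-file secure_xtskey.bin --key-size 1024 /dev/dasdc1

    # Open the volume; the kernel converts the secure key into a
    # protected key at this point, so the adapter is needed at open
    # time but not for ongoing I/O to the mapped device.
    cryptsetup luksOpen /dev/dasdc1 enc_vol

(zkey also has a cryptsetup subcommand that can print the matching
cryptsetup commands for keys kept in its repository.)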

> And to do the AES master key load, it has generally been done from z/OS here.
> It looks like for my z/vm only boxes TKE is required, but I could use the CCA
> package to generate some for a test only scenario.

In coprocessor (CCA) mode, the keys can be loaded by TKE or by the 
panel.exe app in the Linux CCA package.  In EP11 mode, TKE is required 
(unless you have z/OS laying around).

I recommend TKE for z/VM environments because, by default, any guest with 
access to a domain is able to manage it.  With TKE, the domain can be 
configured such that only TKE-signed requests can alter the domain 
configuration.

(And to answer Rick's question, yes, a secure key is indeed encrypted by 
the 256-bit master key using AES.)

> If I do want to try that CCA key load on a non prod box, I’m thinking I would
> have to dedicate all of the coprocessors to a Linux guest and create them
> there.  Then undedicate and then any guest with an APVIRT would find valid
> master keys and would then be able to “zkey generate” a secure key for use in
> each disk.
>
> Am I on the right track?

Reinhard will have to comment on this.  I don't know if panel.exe lets you 
load keys into domains other than the one Linux is actively using.  If you 
have multiple APs (you should), you will have to load the keys for each AP 
the Linux guest will have access to.  Switching domains may require 
unloading and reloading the device driver after detaching and attaching 
the "next" domain to the guest.

My understanding is that using TKE ensures that the guest cannot alter the 
master keys of the domain it is using without a properly signed request 
from TKE.  (No rogue key managers!)

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM Systems Lab Services
IBM Z Delivery Practice
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott




Re: Pervasive disk encryption questions

2020-01-13 Thread Reinhard Buendgen

Thanks for catching this: I wanted to say

Only the CCA and EP11 types provide support for secure key crypto.

... and support to transform secure keys into protected keys.

-Reinhard

On 13.01.20 18:22, R. J. Moore wrote:

Reinhard, one correction I think:

>> When you want to use secure key crypto you must define your crypto
>> adapter domain in the guest as dedicated adapter (APDED for z/VM
>> guests, for KVM guests currently only dedicated adapter domains are
>> supported).
>> Dedicated adapter domains can be of any type: accelerator, CCA or
>> EP11. Only the CCA and EP11 types provide support for clear key crypto.


Only the CCA and EP11 types provide support for protected key crypto.


Richard

z/VM Crypto.


On 13/01/2020 15:19, Reinhard Buendgen wrote:

[Reinhard's original message was quoted in full here; trimmed. See the
original later in this thread.]




Re: Pervasive disk encryption questions

2020-01-13 Thread R. J. Moore

Reinhard, one correction I think:

>> When you want to use secure key crypto you must define your crypto
>> adapter domain in the guest as dedicated adapter (APDED for z/VM guests,
>> for KVM guests currently only dedicated adapter domains are supported).
>> Dedicated adapter domains can be of any type: accelerator, CCA or
>> EP11. Only the CCA and EP11 types provide support for clear key crypto.


Only the CCA and EP11 types provide support for protected key crypto.


Richard

z/VM Crypto.


On 13/01/2020 15:19, Reinhard Buendgen wrote:

[Reinhard's message was quoted in full here; trimmed. See the original
later in this thread.]


Re: Pervasive disk encryption questions

2020-01-13 Thread Reinhard Buendgen
Ingo is correct.  Each domain on an adapter functions as a separate HSM. 
So you have 85 times 16 HSMs on an enterprise class machine and 40 times 
16 HSMs on a business class machine. Each of these HSMs can be configured 
with a different master key.  Having as many domains as LPARs is just 
coincidental, so no LPAR-to-domain association is required.


If you like redundancy, then for each guest you should dedicate two 
adapter domains (distributed over two adapters) and configure both 
domains with the same master key.
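
A sketch of what that might look like in the z/VM user directory; the
guest name, AP numbers 1 and 2 and domain 4 are assumed placeholders, and
both domains would then need the same AES master key loaded (e.g. via
TKE):

    USER LNXGST1 XXXXXXXX 4G 8G G
    * Dedicate usage domain 4 on two CryptoExpress adapters (APs 1 and 2)
    * so the guest keeps a crypto path if one adapter goes offline.
    CRYPTO DOMAIN 4 APDEDICATED 1 2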


-Reinhard

On 13.01.20 17:51, Ingo Adlung wrote:

Hey Marcy,
I'm not the crypto expert (Reinhard please jump in) but aren't we talking
about crypto domain dedication? I.e. not dedicating complete cards ...
don't know about z14/z15 but with z13 we supported up to 85 domains per
LPAR per single adapter like described here:

https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_c_crypto_virtual.html

Best regards
Ingo

Linux on 390 Port  wrote on 13/01/2020 17:34:43:


[Marcy Cortes' message, which quoted Reinhard Buendgen's message in full,
was included here; trimmed. See the originals below.]


Re: Pervasive disk encryption questions

2020-01-13 Thread Ingo Adlung
Hey Marcy,
I'm not the crypto expert (Reinhard please jump in) but aren't we talking
about crypto domain dedication? I.e. not dedicating complete cards ...
don't know about z14/z15 but with z13 we supported up to 85 domains per
LPAR per single adapter like described here:

https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_c_crypto_virtual.html

Best regards
Ingo

Linux on 390 Port  wrote on 13/01/2020 17:34:43:

> [Marcy Cortes' message, which quoted Reinhard Buendgen's message in
> full, was included here; trimmed. See the originals below.]

Re: Pervasive disk encryption questions

2020-01-13 Thread Reinhard Buendgen

Hi,

With our crypto HW we distinguish, along the security dimension:
- clear key crypto (keys reside in plain text in memory)
- secure key crypto (keys are wrapped by (master) keys hidden in a 
crypto adapter, aka HSM)
- protected key crypto (keys are wrapped by keys hidden in firmware, not 
accessible by the OS)
There are both symmetrical and asymmetrical crypto algorithms for all 
three dimensions.


As for the HW implementation:
- CPACF (instructions inside the CPU)
   -- supports both symmetrical and asymmetrical (ECC) algorithms
   -- supports clear key and protected key crypto
- CryptoExpress adapters (an adapter card plugged into a CEC)
   -- supports both symmetrical and asymmetrical algorithms (the CCA 
adapter does so for both clear and secure keys)
   -- supports clear key (in accelerator and CCA mode) and secure key 
(CCA and EP11 mode) crypto
As for acceleration (of clear key algorithms), it only makes sense to use 
the HW acceleration inside the CPU (i.e. CPACF) to accelerate "fast" 
algorithms like symmetric crypto and hashes. It does not make sense 
to send such requests to a CryptoExpress adapter because the I/O 
overhead would eat up all acceleration gains.
For expensive algorithms (like RSA or DH) it is worthwhile to send 
requests to a CryptoExpress adapter (in accelerator or CCA mode) to 
accelerate the computation.
Not so expensive asymmetric algorithms (ECC) can be computed both on 
an adapter and inside the CPU. Since z15, the fastest way to compute EC 
crypto is to use a new CPACF function.
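
For anyone who wants to see both layers from a running Linux guest, a
couple of (hedged) commands, assuming s390-tools and optionally libica
are installed:

    # CPACF is enabled when the "msa" feature flag shows up here.
    grep -w msa /proc/cpuinfo

    # CryptoExpress adapters/domains visible to this guest, with their
    # mode (accelerator, CCA, EP11).
    lszcrypt

    # If libica is installed, icainfo summarizes which algorithms are
    # handled in hardware (CPACF or adapter) versus software.
    icainfo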

Reinhard



Re: Pervasive disk encryption questions

2020-01-13 Thread Marcy Cortes
Thanks!  Was hoping you'd respond.  

So essentially to do the disk encryption stuff documented here 
https://www.ibm.com/support/knowledgecenter/en/linuxonibm/com.ibm.linux.z.lxdc/lxdc_linuxonz.html
one has to dedicate to the guest.

If I can put 16 cards on a z15, I'm essentially limited to 8 guests per LPAR 
with the ability to do this (need redundancy, so two per guest).  Correct?  
There's not a way to dedicate, put the master key on, then make it apvirt 
after that, correct?

Marcy


-Original Message-
From: Linux on 390 Port  On Behalf Of Reinhard Buendgen
Sent: Monday, January 13, 2020 7:19 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] Pervasive disk encryption questions

[Reinhard's message was quoted in full here; trimmed. See the original
below.]


Re: Pervasive disk encryption questions

2020-01-13 Thread Reinhard Buendgen

Hi,

Crypto adapter domains defined for z/VM guests with APVIRT are 
restricted to performing clear key crypto operations (possibly including 
random number generation), regardless of whether the backing adapters are 
in accelerator mode or in CCA mode (AP-virt does not support adapters in 
EP11 mode).
And if there are multiple backing adapters of different modes, z/VM gives 
priority to accelerator mode when choosing the type of the shared 
virtual adapter.


When you want to use secure key crypto you must define your crypto 
adapter domain in the guest as a dedicated adapter (APDED for z/VM guests; 
for KVM guests, currently only dedicated adapter domains are supported). 
Dedicated adapter domains can be of any type: accelerator, CCA or EP11. 
Only the CCA and EP11 types provide support for clear key crypto.
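
To illustrate the difference in the user directory, a rough sketch; the
AP and domain numbers are placeholders and the exact statement syntax
should be checked against the CP directory documentation for your z/VM
level:

    * Shared, clear-key-only crypto from the APVIRT pool:
    CRYPTO APVIRTUAL

    * Secure-key capable: dedicate usage domain 7 on adapter (AP) 3:
    CRYPTO DOMAIN 7 APDEDICATED 3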


To set/manage the master key of a dedicated CCA adapter domain assigned 
to a guest there are multiple options:
— connect the TKE to the catcher.exe daemon (part of the CCA host 
package) running on the Linux system and use the TKE to manage the 
master key of the adapter domain belonging to the Linux guest (the option 
recommended for production use)
— use the panel.exe tool (part of the CCA host package) on the Linux 
guest to set/manage the master key of the adapter domain belonging to 
the Linux guest (this option is not recommended for production use due 
to some security limitations -- I like this option)
— use a z/OS system (or another Linux system) on the same CEC that has 
an appropriate control domain setting. Using the z/OS system can go via 
ICSF functions (which I guess are similar in function and security to 
what the panel.exe tool provides) or via a TKE connected to the z/OS 
system.
— use another Linux system on the same CEC that has an appropriate 
control domain setting and do the management either via panel.exe or TKE 
(again, TKE being recommended for production use).
There is no need for a special system to set master keys. Each system 
can manage its own master keys. But if you choose to do so, say because 
you want to use ICSF or panel.exe from a particularly secured system, 
then all you need is a system that has an arbitrary usage domain and 
control domains configured to the domains you want to manage. 
Unfortunately, control domains cannot be freely configured for z/VM 
guests (z/VM sets the control domain to be equal to the usage domain), 
so this option works only for LPARs and KVM guests. For z/VM guests you 
may have to switch the adapter domains from the key management guest to 
the actual working guests.



Reinhard



Re: Pervasive disk encryption questions

2020-01-11 Thread Alan Altmark

Yes, that’s how it works. The math needed to encrypt and decrypt using the
public and private keys is performed on the crypto card.

The data being protected by that encrypted message is the symmetric key
used to encrypt the application data. That encryption is done on the CPU,
not the cryptos, but I think only for protected or clear keys, not
encrypted keys.

Those private keys may themselves be encrypted by the master key.

However, domains selected to be APVIRT cryptos are clear key only since
multiple guests share them.

APDEDICATED crypto domains are given to specific users and those are fully
functional, including having encrypted keys.   Each domain on a crypto can
have a unique master key.  For redundancy, it is intended that the same
domain on multiple cryptos will have the same master key.
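
As a hedged aside (availability and output depend on the z/VM level and
your privilege class), an authorized user can see how APs and domains are
split between the shared pool and dedicated guests with

    CP QUERY CRYPTO DOMAINS USERS

while a guest can see the virtual crypto resources it currently holds
with

    CP QUERY VIRTUAL CRYPTO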

When z/OS isn’t on the CPC, encrypted keys can only be used when the crypto
is in EP11 mode.  TKE and Linux EP11 daemon work together to load the
master keys.

For APVIRT, the cryptos can be either accelerators or coprocessors. Choose
one.

I, too, have been researching the details of encrypted file systems on
Linux.  I know what I want Linux to do, but I don’t yet know if he will do
it.

Regards,
Alan Altmark
IBM

> On Jan 11, 2020, at 1:41 PM, VANDER WOUDE, PETER
 wrote:
>
> [Peter Vander Woude's message, which itself quoted Rick Troth's message
> and Marcy Cortes' original post, followed here; trimmed. See the
> originals below.]

Re: Pervasive disk encryption questions

2020-01-11 Thread VANDER WOUDE, PETER
Configuring and having available a crypto facility engine as an accelerator is 
primarily for speeding up the initial SSL negotiation, as that uses RSA 
public/private keys for the handshake, which then results in a symmetric key 
for a cipher both sides have agreed upon.  This handshaking is one of the more 
expensive (CPU-wise) parts of SSL handling, and using the accelerator helps 
speed that up, especially as RSA key lengths get larger.

Note: My knowledge of this comes from my work on z/OS and studying of how the 
crypto engines in accelerator or co-processor mode work.  Not sure if the usage 
on the Linux on Z builds operates the same way.
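
As a rough Linux on Z illustration, assuming the openssl-ibmca engine is
installed and configured (an assumption beyond anything stated in this
thread), the effect can be compared with openssl's built-in benchmark:

    # Software-only baseline for RSA sign/verify throughput.
    openssl speed rsa2048

    # The same run routed through the ibmca engine, which can offload
    # RSA to a CryptoExpress adapter via libica.
    openssl speed -engine ibmca rsa2048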

Regards,
Peter


-Original Message-
From: Linux on 390 Port  On Behalf Of Rick Troth
Sent: Friday, January 10, 2020 8:52 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Pervasive disk encryption questions

[Rick Troth's message, which quoted Marcy Cortes' original post, was
included here; trimmed. See the original below.]



Re: Pervasive disk encryption questions

2020-01-10 Thread Rick Troth
My understanding of the cards is that they're more of a trust anchor
than an accelerator. What I mean is ... differentiate symmetric crypto
from asymmetric crypto. Symmetric crypto (think AES) is handled by the
main processor, right? (This is where Brian or Alan will chime in, and
please do.) So why shuttle the work to a co-processor? Symmetric crypto
is faster (much!) than asymmetric.

Who uses AES as a master key? The concept of "master" should be
asymmetric, not symmetric. Symmetric keys are the kind you create for a
session and then discard. But ... this is pervasive ... okay ... keys
... and store them. Where? On the card? Okay, but still, doesn't mean
the *processing* of symmetric gets done there.

Last I knew (haven't read the PoOp for z15), the CPU didn't have
asymmetric instructions (think RSA). Asymmetric crypto is slower (much!)
than symmetric, so one could conceivably shuttle that work to a daughter
card and get a win (or at least parity!).

But there's another point: "trust" is all about asymmetric keys, where
you have a PUBLIC half and a PRIVATE half to the pair. So the card can
hold the private half (and prove itself against the public half, like
for an SSL cert) or can hold the public half (and serve to confirm a
third party, like a remote web site SSL cert).

Not sure I'm splainin it well. Solly.

But this could be old news. IBMers? What's new?

-- R; <><


On 1/10/20 8:24 PM, marcy cortes wrote:
> Cross posted to Linux-390 and IBMVM
>
>
> First, my understanding of virtualizing crypto is that if any of the cards are
> defined as accelerators then CRYPTO APVIRT in the directory will give linux
> an accelerator.   If you want linux to have a coprocessor, you’d have to
> dedicate one.If you want a lot of servers to have coprocessors (more
> than the HW cards to dedicate), you’d get rid of the accelerators and make
> them all coprocessors.  Is my understanding correct?
>
>  And to do the AES master key load, it has generally been done from z/OS
> here.   It looks like for my z/vm only boxes TKE is required, but I could
> use the CCA package to generate some for a test only scenario.
>
> If I do want to try that CCA key load on a non prod box, I’m thinking I
> would have to dedicate all of the coprocessors to a Linux guest and create
> them there.  Then undedicate and then any guest with an APVIRT would find
> valid master keys and would then be able to “zkey generate” a secure key
> for use in each disk.
>
> Am I on the right track?
>
> Marcy
>

-- 
-- R; <><


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390