Re: [PATCH] kexec: ppc64: print help to stdout instead of stderr

2023-11-16 Thread Srikar Dronamraju
* Aditya Gupta  [2023-11-16 14:11:37]:

> Currently 'kexec --help' on powerpc64 prints the generic help/usage to
> stdout, and the powerpc64-specific options to stderr.
> 
> That is, if the stdout of 'kexec --help' is redirected to some file,
> some of the help options will not be redirected, and are instead printed on
> the terminal/stderr:
> 
> [root@machine kexec-tools]# kexec --help > /tmp/out
>  --command-line= command line to append.
>  --append= same as --command-line.
>  --ramdisk= Initial RAM disk.
>  --initrd= same as --ramdisk.
>  --devicetreeblob= Specify device tree blob file.
>  Not applicable while using --kexec-file-syscall.
>  --dtb= same as --devicetreeblob.
> elf support is still broken
>  --elf64-core-headers Prepare core headers in ELF64 format
>  --dt-no-old-root Do not reuse old kernel root= param.
>   while creating flatten device tree.
> 
> Fix this inconsistency by writing powerpc64-specific options to stdout,
> similar to the generic 'kexec --help'.
> 
> With the proposed changes, it is like this (nothing printed to stderr):
> 
> [root@machine kexec-tools]# ./build/sbin/kexec --help > /tmp/out
> 
> Reported-by: Srikar Dronamraju 
> Signed-off-by: Aditya Gupta 
> ---

Thanks Aditya for looking into this.

-- 
Thanks and Regards
Srikar Dronamraju



Re: [PATCH 4/7] kexec_file, arm64: print out debugging message if required

2023-11-16 Thread Baoquan He
On 11/16/23 at 12:58am, kernel test robot wrote:
> Hi Baoquan,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on arm64/for-next/core]
> [also build test WARNING on tip/x86/core powerpc/next powerpc/fixes linus/master v6.7-rc1 next-20231115]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url: https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/kexec_file-add-kexec_file-flag-to-control-debug-printing/20231114-234003
> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> patch link: https://lore.kernel.org/r/20231114153253.241262-5-bhe%40redhat.com
> patch subject: [PATCH 4/7] kexec_file, arm64: print out debugging message if required
> config: arm64-randconfig-001-20231115 (https://download.01.org/0day-ci/archive/20231116/202311160022.qm6xjysy-...@intel.com/config)
> compiler: aarch64-linux-gcc (GCC) 13.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231116/202311160022.qm6xjysy-...@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new
> version of the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot 
> | Closes: https://lore.kernel.org/oe-kbuild-all/202311160022.qm6xjysy-...@intel.com/
> 
> All warnings (new ones prefixed by >>):
> 
>arch/arm64/kernel/machine_kexec.c: In function '_kexec_image_info':
> >> arch/arm64/kernel/machine_kexec.c:35:23: warning: unused variable 'i' [-Wunused-variable]
>    35 |         unsigned long i;
>       |                       ^

Yes, this is an obvious one I missed; will fix and update in a new post,
thanks.
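
For reference, this warning class is easy to reproduce outside the kernel;
a minimal standalone sketch (not the actual _kexec_image_info() code) that
triggers the same diagnostic when built with 'gcc -Wall' is:

#include <stdio.h>

static void image_info(void)
{
    unsigned long i;    /* declared but no longer used -> -Wunused-variable */

    printf("image info\n");
}

int main(void)
{
    image_info();
    return 0;
}

The fix is equally small: delete the declaration, or move it under the same
condition as its only remaining user.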




Re: [RFC V2] IMA Log Snapshotting Design Proposal

2023-11-16 Thread Paul Moore
On Thu, Nov 16, 2023 at 5:41 PM Stefan Berger  wrote:
> On 11/16/23 17:07, Paul Moore wrote:
> > On Tue, Nov 14, 2023 at 1:58 PM Stefan Berger  wrote:
> >> On 11/14/23 13:36, Sush Shringarputale wrote:
> >>> On 11/13/2023 10:59 AM, Stefan Berger wrote:
>  On 10/19/23 14:49, Tushar Sugandhi wrote:
> > ===============
> > | Introduction |
> > ===============
> > This document provides a detailed overview of the proposed Kernel
> > feature IMA log snapshotting.  It describes the motivation behind the
> > proposal, the problem to be solved, a detailed solution design with
> > examples, and describes the changes to be made in the clients/services
> > which are part of remote-attestation system.  This is the 2nd version
> > of the proposal.  The first version is present here[1].
> >
> > Table of Contents:
> > ------------------
> > A. Motivation and Background
> > B. Goals and Non-Goals
> >   B.1 Goals
> >   B.2 Non-Goals
> > C. Proposed Solution
> >   C.1 Solution Summary
> >   C.2 High-level Work-flow
> > D. Detailed Design
> >   D.1 Snapshot Aggregate Event
> >   D.2 Snapshot Triggering Mechanism
> >   D.3 Choosing A Persistent Storage Location For Snapshots
> >   D.4 Remote-Attestation Client/Service-side Changes
> >   D.4.a Client-side Changes
> >   D.4.b Service-side Changes
> > E. Example Walk-through
> > F. Other Design Considerations
> > G. References
> >
> 
>  Userspace applications will have to know
>  a) where are the shard files?
> >>> We describe the file storage location choices in section D.3, but user
> >>> applications will have to query the well-known location described there.
>  b) how do I read the shard files while locking out the producer of the
>  shard files?
> 
>  IMO, this will require a well known config file and a locking method
>  (flock) so that user space applications can work together in this new
>  environment. The lock could be defined in the config file or just be
>  the config file itself.
> >>> The flock is a good idea for co-ordination between UM clients. While
> >>> the Kernel cannot enforce any access in this way, any UM process that
> >>> is planning on triggering the snapshot mechanism should follow that
> >>> protocol.  We will ensure we document that as the best-practices in
> >>> the patch series.
> >>
> >> It's more than 'best practices'. You need a well-known config file with
> >> well-known config options in it.
> >>
> >> All clients that were previously just trying to read new bytes from the
> >> IMA log cannot do this anymore in the presence of a log shard producer
> >> but have to also learn that a new log shard has been produced so they
> >> need to figure out the new position in the log where to read from. So
> >> maybe a counter in a config file should indicate to the log readers that
> >> a new log has been produced -- otherwise they would have to monitor all
> >> the log shard files or the log shard file's size.
> >
> > If a counter is needed, I would suggest placing it somewhere other
> > than the config file so that we can enforce limited write access to
> > the config file.
> >
> > Regardless, I imagine there are a few ways one could synchronize
> > various userspace applications such that they see a consistent view of
> > the decomposed log state, and the good news is that the approach
> > described here is opt-in from a userspace perspective.  If the
>
> A FUSE filesystem that stitches together the log shards from one or
> multiple files + IMA log file(s) could make this approach transparent
> for as long as log shards are not thrown away. Presumably it (or root)
> could bind-mount its files over the two IMA log files.
>
> > userspace does not fully support IMA log snapshotting then it never
> > needs to trigger it and the system behaves as it does today; on the
>
> I don't think individual applications should trigger it; instead, some
> dedicated background process running on a machine would do that every n
> log entries or so and possibly offer the FUSE filesystem at the same
> time. In either case, once any application triggers it, all either have
> to know how to deal with the shards or FUSE would make it completely
> transparent.

Yes, performing a snapshot is a privileged operation which I expect
would be done and managed by a dedicated daemon running on the system.
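
To make the flock() protocol discussed above concrete, a minimal sketch of
what a cooperating reader might do is below; the config-file path and the
lock-before-read convention are assumptions drawn from this thread, not a
settled interface:

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* assumed well-known config file, doubling as the lock file */
    int fd = open("/etc/ima/snapshot.conf", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX) == 0) {
        /* serialized against the shard producer here:
         * read the counter, open/read shard files, etc. */
        flock(fd, LOCK_UN);
    }
    close(fd);
    return 0;
}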

-- 
paul-moore.com



Re: [RFC V2] IMA Log Snapshotting Design Proposal

2023-11-16 Thread Stefan Berger




On 11/16/23 17:07, Paul Moore wrote:
> On Tue, Nov 14, 2023 at 1:58 PM Stefan Berger  wrote:
>> On 11/14/23 13:36, Sush Shringarputale wrote:
>>> On 11/13/2023 10:59 AM, Stefan Berger wrote:
>>>> On 10/19/23 14:49, Tushar Sugandhi wrote:
>>>>> ===============
>>>>> | Introduction |
>>>>> ===============
>>>>> This document provides a detailed overview of the proposed Kernel
>>>>> feature IMA log snapshotting.  It describes the motivation behind the
>>>>> proposal, the problem to be solved, a detailed solution design with
>>>>> examples, and describes the changes to be made in the clients/services
>>>>> which are part of remote-attestation system.  This is the 2nd version
>>>>> of the proposal.  The first version is present here[1].
>>>>>
>>>>> Table of Contents:
>>>>> ------------------
>>>>> A. Motivation and Background
>>>>> B. Goals and Non-Goals
>>>>>   B.1 Goals
>>>>>   B.2 Non-Goals
>>>>> C. Proposed Solution
>>>>>   C.1 Solution Summary
>>>>>   C.2 High-level Work-flow
>>>>> D. Detailed Design
>>>>>   D.1 Snapshot Aggregate Event
>>>>>   D.2 Snapshot Triggering Mechanism
>>>>>   D.3 Choosing A Persistent Storage Location For Snapshots
>>>>>   D.4 Remote-Attestation Client/Service-side Changes
>>>>>   D.4.a Client-side Changes
>>>>>   D.4.b Service-side Changes
>>>>> E. Example Walk-through
>>>>> F. Other Design Considerations
>>>>> G. References
>>>>
>>>> Userspace applications will have to know
>>>> a) where are the shard files?
>>> We describe the file storage location choices in section D.3, but user
>>> applications will have to query the well-known location described there.
>>>> b) how do I read the shard files while locking out the producer of the
>>>> shard files?
>>>>
>>>> IMO, this will require a well known config file and a locking method
>>>> (flock) so that user space applications can work together in this new
>>>> environment. The lock could be defined in the config file or just be
>>>> the config file itself.
>>> The flock is a good idea for co-ordination between UM clients. While
>>> the Kernel cannot enforce any access in this way, any UM process that
>>> is planning on triggering the snapshot mechanism should follow that
>>> protocol.  We will ensure we document that as the best-practices in
>>> the patch series.
>>
>> It's more than 'best practices'. You need a well-known config file with
>> well-known config options in it.
>>
>> All clients that were previously just trying to read new bytes from the
>> IMA log cannot do this anymore in the presence of a log shard producer
>> but have to also learn that a new log shard has been produced so they
>> need to figure out the new position in the log where to read from. So
>> maybe a counter in a config file should indicate to the log readers that
>> a new log has been produced -- otherwise they would have to monitor all
>> the log shard files or the log shard file's size.
>
> If a counter is needed, I would suggest placing it somewhere other
> than the config file so that we can enforce limited write access to
> the config file.
>
> Regardless, I imagine there are a few ways one could synchronize
> various userspace applications such that they see a consistent view of
> the decomposed log state, and the good news is that the approach
> described here is opt-in from a userspace perspective.  If the

A FUSE filesystem that stitches together the log shards from one or
multiple files + IMA log file(s) could make this approach transparent
for as long as log shards are not thrown away. Presumably it (or root)
could bind-mount its files over the two IMA log files.

> userspace does not fully support IMA log snapshotting then it never
> needs to trigger it and the system behaves as it does today; on the

I don't think individual applications should trigger it; instead, some
dedicated background process running on a machine would do that every n
log entries or so and possibly offer the FUSE filesystem at the same
time. In either case, once any application triggers it, all either have
to know how to deal with the shards or FUSE would make it completely
transparent.

> other hand, if the userspace has been updated it can make use of the
> new functionality to better manage the size of the IMA measurement
> log.





Re: [RFC V2] IMA Log Snapshotting Design Proposal

2023-11-16 Thread Paul Moore
On Tue, Oct 31, 2023 at 3:15 PM Mimi Zohar  wrote:
> On Thu, 2023-10-19 at 11:49 -0700, Tushar Sugandhi wrote:
>
> [...]
> > -----------------------
> > | C.1 Solution Summary|
> > -----------------------
> > To achieve the goals described in the section above, we propose the
> > following changes to the IMA subsystem.
> >
> >  a. The IMA log from Kernel memory will be offloaded to some
> > persistent storage disk to keep the system running reliably
> > without facing memory pressure.
> > More details, alternate approaches considered etc. are present
> > in section "D.3 Choices for Storing Snapshots" below.
> >
> >  b. The IMA log will be divided into multiple chunks (snapshots).
> > Each snapshot would be a delta between the two instances when
> > the log was offloaded from memory to the persistent storage
> > disk.
> >
> >  c. Some UM process (like a remote-attestation-client) will be
> > responsible for writing the IMA log snapshot to the disk.
> >
> >  d. The same UM process would be responsible for triggering the IMA
> > log snapshot.
> >
> >  e. There will be a well-known location for storing the IMA log
> > snapshots on the disk.  It will be non-trivial for UM processes
> > to change that location after booting into the Kernel.
> >
> >  f. A new event, "snapshot_aggregate", will be computed and measured
> > in the IMA log as part of this feature.  It should help the
> > remote-attestation client/service to benefit from the IMA log
> > snapshot feature.
> > The "snapshot_aggregate" event is described in more detail in
> > section "D.1 Snapshot Aggregate Event" below.
> >
> >  g. If the existing remote-attestation client/services do not change
> > to benefit from this feature or do not trigger the snapshot,
> > the Kernel will continue to have its current functionality of
> > maintaining an in-memory full IMA log.
> >
> > Additionally, the remote-attestation client/services need to be updated
> > to benefit from the IMA log snapshot feature.  These proposed changes
> > are described in section "D.4 Remote-Attestation Client/Service Side
> > Changes" below, but their implementation is out of scope for this
> > proposal.
>
> As previously said on v1,
>This design seems overly complex and requires synchronization between the
>"snapshot" record and exporting the records from the measurement list. 
> [...]
>
>Concerns:
>- Pausing extending the measurement list.
>
> Nothing has changed in terms of the complexity or in terms of pausing
> the measurement list.  Pausing the measurement list is a non-starter.

The measurement list would only need to be paused for the amount of
time it would require to generate the snapshot_aggregate entry, which
should be minimal and only occurs when a privileged userspace requests
a snapshot operation.  The snapshot remains opt-in functionality, and
even then there is the possibility that the kernel could reject the
snapshot request if generating the snapshot_aggregate entry was deemed
too costly (as determined by the kernel) at that point in time.

> Userspace can already export the IMA measurement list(s) via the
> securityfs {ascii,binary}_runtime_measurements file(s) and do whatever
> it wants with it.  All that is missing in the kernel is the ability to
> trim the measurement list, which doesn't seem all that complicated.

From my perspective what has been presented is basically just trimming
the in-memory measurement log, the additional complexity (which really
doesn't look that bad IMO) is there to ensure robustness in the face
of an unreliable userspace (processes die, get killed, etc.) and to
establish a new, transitive root of trust in the newly trimmed
in-memory log.

I suppose one could simplify things greatly by having a design where
userspace captures the measurement log and then writes the number of
measurement records to trim from the start of the measurement log to a
sysfs file and the kernel acts on that.  You could do this with, or
without, the snapshot_aggregate entry concept; in fact that could be
something that was controlled by userspace, e.g. write the number of
lines and a flag to indicate if a snapshot_aggregate was desired to
the sysfs file.  I can't say I've thought it all the way through to
make sure there are no gotchas, but I'm guessing that is about as
simple as one can get.
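
For illustration only, the simplified flow sketched above could look like
this from userspace; the trim_records node and its "count flag" record
format are hypothetical, invented for this sketch (only the existing
runtime_measurements files are real):

#include <stdio.h>

int main(void)
{
    /* Step 1 (not shown): capture the current log, e.g. by copying
     * /sys/kernel/security/ima/binary_runtime_measurements. */

    /* Step 2: tell the kernel how many leading records to trim and
     * whether to emit a snapshot_aggregate entry (hypothetical node). */
    FILE *f = fopen("/sys/kernel/security/ima/trim_records", "w");

    if (!f)
        return 1;
    fprintf(f, "1024 1\n");  /* trim 1024 records, 1 = want snapshot_aggregate */
    fclose(f);
    return 0;
}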

If there is something else you had in mind, Mimi, please share the
details.  This is a very real problem we are facing and we want to
work to get a solution upstream.

-- 
paul-moore.com



Re: [RFC V2] IMA Log Snapshotting Design Proposal

2023-11-16 Thread Paul Moore
On Tue, Nov 14, 2023 at 1:58 PM Stefan Berger  wrote:
> On 11/14/23 13:36, Sush Shringarputale wrote:
> > On 11/13/2023 10:59 AM, Stefan Berger wrote:
> >> On 10/19/23 14:49, Tushar Sugandhi wrote:
> >>> ===============
> >>> | Introduction |
> >>> ===============
> >>> This document provides a detailed overview of the proposed Kernel
> >>> feature IMA log snapshotting.  It describes the motivation behind the
> >>> proposal, the problem to be solved, a detailed solution design with
> >>> examples, and describes the changes to be made in the clients/services
> >>> which are part of remote-attestation system.  This is the 2nd version
> >>> of the proposal.  The first version is present here[1].
> >>>
> >>> Table of Contents:
> >>> ------------------
> >>> A. Motivation and Background
> >>> B. Goals and Non-Goals
> >>>  B.1 Goals
> >>>  B.2 Non-Goals
> >>> C. Proposed Solution
> >>>  C.1 Solution Summary
> >>>  C.2 High-level Work-flow
> >>> D. Detailed Design
> >>>  D.1 Snapshot Aggregate Event
> >>>  D.2 Snapshot Triggering Mechanism
> >>>  D.3 Choosing A Persistent Storage Location For Snapshots
> >>>  D.4 Remote-Attestation Client/Service-side Changes
> >>>  D.4.a Client-side Changes
> >>>  D.4.b Service-side Changes
> >>> E. Example Walk-through
> >>> F. Other Design Considerations
> >>> G. References
> >>>
> >>
> >> Userspace applications will have to know
> >> a) where are the shard files?
> > We describe the file storage location choices in section D.3, but user
> > applications will have to query the well-known location described there.
> >> b) how do I read the shard files while locking out the producer of the
> >> shard files?
> >>
> >> IMO, this will require a well known config file and a locking method
> >> (flock) so that user space applications can work together in this new
> >> environment. The lock could be defined in the config file or just be
> >> the config file itself.
> > The flock is a good idea for co-ordination between UM clients. While
> > the Kernel cannot enforce any access in this way, any UM process that
> > is planning on triggering the snapshot mechanism should follow that
> > protocol.  We will ensure we document that as the best-practices in
> > the patch series.
>
> It's more than 'best practices'. You need a well-known config file with
> well-known config options in it.
>
> All clients that were previously just trying to read new bytes from the
> IMA log cannot do this anymore in the presence of a log shard producer
> but have to also learn that a new log shard has been produced so they
> need to figure out the new position in the log where to read from. So
> maybe a counter in a config file should indicate to the log readers that
> a new log has been produced -- otherwise they would have to monitor all
> the log shard files or the log shard file's size.

If a counter is needed, I would suggest placing it somewhere other
than the config file so that we can enforce limited write access to
the config file.
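
As a sketch of how a log reader could notice that a producer has bumped
such a counter (the watched path is a placeholder; no file or protocol has
been defined in this thread):

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = inotify_init1(0);

    if (fd < 0 || inotify_add_watch(fd, "/etc/ima/snapshot.conf",
                                    IN_CLOSE_WRITE) < 0)
        return 1;
    /* blocks until the producer rewrites the counter file */
    if (read(fd, buf, sizeof(buf)) > 0)
        printf("counter changed: re-read it and reopen the shards\n");
    close(fd);
    return 0;
}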

Regardless, I imagine there are a few ways one could synchronize
various userspace applications such that they see a consistent view of
the decomposed log state, and the good news is that the approach
described here is opt-in from a userspace perspective.  If the
userspace does not fully support IMA log snapshotting then it never
needs to trigger it and the system behaves as it does today; on the
other hand, if the userspace has been updated it can make use of the
new functionality to better manage the size of the IMA measurement
log.

-- 
paul-moore.com



Re: [PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation

2023-11-16 Thread ross . philipson

On 11/12/23 10:07 AM, Alyssa Ross wrote:

> > +Load-time Integrity
> > +-------------------
> > +
> > +It is critical to understand what load-time integrity establishes about a
> > +system and what is assumed, i.e. what is being trusted. Load-time integrity is
> > +when a trusted entity, i.e. an entity with an assumed integrity, takes an
> > +action to assess an entity being loaded into memory before it is used. A
> > +variety of mechanisms may be used to conduct the assessment, each with
> > +different properties. A particular property is whether the mechanism creates an
> > +evidence of the assessment. Often either cryptographic signature checking or
> > +hashing are the common assessment operations used.
> > +
> > +A signature checking assessment functions by requiring a representation of the
> > +accepted authorities and uses those representations to assess if the entity has
> > +been signed by an accepted authority. The benefit to this process is that
> > +assessment process includes an adjudication of the assessment. The drawbacks
> > +are that 1) the adjudication is susceptible to tampering by the Trusted
> > +Computing Base (TCB), 2) there is no evidence to assert that an untampered
> > +adjudication was completed, and 3) the system must be an active participant in
> > +the key management infrastructure.
> > +
> > +A cryptographic hashing assessment does not adjudicate the assessment but
> > +instead, generates evidence of the assessment to be adjudicated independently.
> > +The benefits to this approach is that the assessment may be simple such that it
> > +may be implemented in an immutable mechanism, e.g. in hardware.  Additionally,
> > +it is possible for the adjudication to be conducted where it cannot be tampered
> > +with by the TCB. The drawback is that a compromised environment will be allowed
> > +to execute until an adjudication can be completed.
> > +
> > +Ultimately, load-time integrity provides confidence that the correct entity was
> > +loaded and in the absence of a run-time integrity mechanism assumes, i.e.
> > +trusts, that the entity will never become corrupted.


> I'm somewhat familiar with this area, but not massively (so probably the
> sort of person this documentation is aimed at!), and this was the only
> section of the documentation I had trouble understanding.
>
> The thing that confused me was that the first time I read this, I was
> thinking that a hashing assessment would be comparing the generated hash
> to a baked-in known good hash, similar to how e.g. a verity root hash
> might be specified on the kernel command line, baked in to the OS image.
> This made me wonder why it wasn't considered to be adjudicated during
> assessment.  Upon reading it a second time, I now understand that what
> it's actually talking about is generating a hash, but not comparing it
> automatically against anything, and making it available for external
> adjudication somehow.


Yes, there is nothing baked into an image in the way we currently use it.
I take what you call a hashing assessment to be what we would call remote
attestation, where an independent agent assesses the state of the measured
launch. This is indeed one of the primary use cases. There is another use
case, closer to the baked-in one, where secrets on the system are sealed
to the TPM using a known good PCR configuration. Only by launching and
attaining that known good state can the secrets be unsealed.
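
The distinction confirmed above can be made concrete with a toy sketch
contrasting the two assessment styles; the hash here is a stand-in, not a
TPM or kernel API, and all names are invented for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* toy stand-in for a cryptographic hash -- illustration only */
static void measure(const uint8_t *buf, size_t len, uint8_t out[32])
{
    memset(out, 0, 32);
    for (size_t i = 0; i < len; i++)
        out[i % 32] ^= buf[i];
}

static const uint8_t baked_in_good[32];  /* assumed known-good value */

/* Style 1: local adjudication against a baked-in hash
 * (the reading first assumed above). */
static bool assess_local(const uint8_t *img, size_t len)
{
    uint8_t h[32];

    measure(img, len, h);
    return memcmp(h, baked_in_good, 32) == 0;  /* verdict decided here */
}

/* Style 2: generate evidence only -- extend a PCR-like register and
 * defer the verdict to an external verifier (what the document means). */
static void assess_evidence(const uint8_t *img, size_t len, uint8_t pcr[32])
{
    uint8_t h[32], cat[64];

    measure(img, len, h);
    memcpy(cat, pcr, 32);
    memcpy(cat + 32, h, 32);
    measure(cat, 64, pcr);  /* pcr = H(pcr || h) */
}

int main(void)
{
    uint8_t img[4] = {1, 2, 3, 4}, pcr[32] = {0};

    assess_evidence(img, sizeof(img), pcr);
    printf("local verdict: %d, evidence byte 0: %u\n",
           (int)assess_local(img, sizeof(img)), (unsigned)pcr[0]);
    return 0;
}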




> I don't know if the approach I first thought of is used in early boot
> at all, but it might be worth contrasting the cryptographic hashing
> assessment described here with it, because I imagine that I'm not going
> to be the only reader who's more used to thinking about integrity
> slightly later in the boot process where adjudicating based on a static
> hash is common, and whose mind is going to go to that when they read
> about a "cryptographic hashing assessment".
>
> The rest of the documentation was easy to understand and very helpful to
> understanding system launch integrity.  Thanks!


I am glad it was helpful. We will revisit the section that caused 
confusion and see if we can make it clearer.


Thank you,
Ross



Re: [PATCHv3 00/14] x86/tdx: Add kexec support

2023-11-16 Thread Baoquan He
On 11/16/23 at 10:17pm, Baoquan He wrote:
> On 11/16/23 at 03:56pm, Kirill A. Shutemov wrote:
> > On Thu, Nov 16, 2023 at 08:10:47PM +0800, Baoquan He wrote:
> > > On 11/15/23 at 03:00pm, Kirill A. Shutemov wrote:
> > > > The patchset adds bits and pieces to get kexec (and crashkernel) work on
> > > > TDX guest.
> > > 
> > > I finally got an intel-eaglestream-spr machine as host and built a
> > > tdx guest to give it a shot; the kexec reboot is working very well,
> > > while the kdump kernel always fails to boot up. I only built the
> > > kernel and installed it on the tdx guest.
> > > --
> > > [1.422500] Run /init as init process
> > > [1.423073] Failed to execute /init (error -2)
> > > [1.423759] Run /sbin/init as init process
> > > [1.424370] Run /etc/init as init process
> > > [1.424969] Run /bin/init as init process
> > > [1.425588] Run /bin/sh as init process
> > > [1.426150] Kernel panic - not syncing: No working init found.  Try 
> > > passing init= option to kernel. See Linux 
> > > Documentation/admin-guide/init.rst for guidance.
> > > [1.428122] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
> > > 6.7.0-rc1-00014-gbdba31ba3cec #3
> > > [1.429232] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 
> > > unknown 2/2/2022
> > > [1.430328] Call Trace:
> > > [1.430717]  <TASK>
> > > [1.431041]  dump_stack_lvl+0x33/0x50
> > > [1.431581]  panic+0x324/0x340
> > > [1.432037]  ? __pfx_kernel_init+0x10/0x10
> > > [1.432629]  kernel_init+0x174/0x1c0
> > > [1.433149]  ret_from_fork+0x2d/0x50
> > > [1.433690]  ? __pfx_kernel_init+0x10/0x10
> > > [1.434277]  ret_from_fork_asm+0x1b/0x30
> > > [1.434850]  </TASK>
> > > [1.435345] Kernel Offset: disabled
> > > [1.439216] Rebooting in 10 seconds..
> > > qemu-kvm: cpus are not resettable, terminating
> > 
> > Could you share your kernel config and details about your setup (qemu
> > command, kernel command line, ...)?
> 
> We followed the tdx-tools README to set up the environment and built the
> host and guest kernels; the qemu command is as below. I copied the
> tdx-tools/build/rhel-9/intel-mvp-tdx-kernel/tdx-base.config to the
> latest upstream linux kernel and then executed 'make olddefconfig',
> because your patchset can't be applied to the stable kernel with the
> 731 patches.
> 
> cd /home/root/tdx-tools
> ./start-qemu.sh -i /home/root/guest_tdx.qcow2 -b grub

This is the qemu command executed by the above line, just for your
reference in case you set up your environment differently.

[root@intel-eaglestream-spr-03 tdx-tools]# ./start-qemu.sh -i /home/root/guest_tdx.qcow2 -b grub
WARN: Using HVC console for grub, could not accept key input in grub menu
=
Guest Image   : /home/root/guest_tdx.qcow2
Kernel binary : 
OVMF  : /usr/share/qemu/OVMF.fd
VM Type   : td
CPUS  : 1
Boot type : grub
Monitor port  : 9001
Enable vsock  : false
Enable debug  : false
Console   : HVC
=
Remapping CTRL-C to CTRL-]
Launch VM:
/usr/libexec/qemu-kvm -accel kvm -name process=tdxvm,debug-threads=on -m 2G 
-vga none -monitor pty -no-hpet -nodefaults -drive 
file=/home/root/guest_tdx.qcow2,if=virtio,format=qcow2 -monitor 
telnet:127.0.0.1:9001,server,nowait -bios /usr/share/qemu/OVMF.fd -object 
tdx-guest,sept-ve-disable=on,id=tdx -object 
memory-backend-memfd-private,id=ram1,size=2G -cpu host,-kvm-steal-time,pmu=off 
-machine 
q35,kernel_irqchip=split,confidential-guest-support=tdx,memory-backend=ram1 
-device virtio-net-pci,netdev=mynet0 -netdev 
user,id=mynet0,net=10.0.2.0/24,dhcpstart=10.0.2.15,hostfwd=tcp::10026-:22 -smp 
1 -chardev 
stdio,id=mux,mux=on,logfile=/home/root/tdx-tools/vm_log_2023-11-16T0658.log 
-device virtio-serial,romfile= -device virtconsole,chardev=mux -monitor 
chardev:mux -serial chardev:mux -nographic
char device redirected to /dev/pts/1 (label compat_monitor0)




Re: [PATCHv3 00/14] x86/tdx: Add kexec support

2023-11-16 Thread Kirill A. Shutemov
On Thu, Nov 16, 2023 at 08:10:47PM +0800, Baoquan He wrote:
> On 11/15/23 at 03:00pm, Kirill A. Shutemov wrote:
> > The patchset adds bits and pieces to get kexec (and crashkernel) work on
> > TDX guest.
> 
> I finally got an intel-eaglestream-spr machine as host and built a
> tdx guest to give it a shot; the kexec reboot is working very well,
> while the kdump kernel always fails to boot up. I only built the
> kernel and installed it on the tdx guest.
> --
> [1.422500] Run /init as init process
> [1.423073] Failed to execute /init (error -2)
> [1.423759] Run /sbin/init as init process
> [1.424370] Run /etc/init as init process
> [1.424969] Run /bin/init as init process
> [1.425588] Run /bin/sh as init process
> [1.426150] Kernel panic - not syncing: No working init found.  Try 
> passing init= option to kernel. See Linux Documentation/admin-guide/init.rst 
> for guidance.
> [1.428122] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
> 6.7.0-rc1-00014-gbdba31ba3cec #3
> [1.429232] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 
> unknown 2/2/2022
> [1.430328] Call Trace:
> [1.430717]  <TASK>
> [1.431041]  dump_stack_lvl+0x33/0x50
> [1.431581]  panic+0x324/0x340
> [1.432037]  ? __pfx_kernel_init+0x10/0x10
> [1.432629]  kernel_init+0x174/0x1c0
> [1.433149]  ret_from_fork+0x2d/0x50
> [1.433690]  ? __pfx_kernel_init+0x10/0x10
> [1.434277]  ret_from_fork_asm+0x1b/0x30
> [1.434850]  </TASK>
> [1.435345] Kernel Offset: disabled
> [1.439216] Rebooting in 10 seconds..
> qemu-kvm: cpus are not resettable, terminating

Could you share your kernel config and details about your setup (qemu
command, kernel command line, ...)?


-- 
  Kiryl Shutsemau / Kirill A. Shutemov



Re: [PATCHv3 00/14] x86/tdx: Add kexec support

2023-11-16 Thread Baoquan He
On 11/15/23 at 03:00pm, Kirill A. Shutemov wrote:
> The patchset adds bits and pieces to get kexec (and crashkernel) work on
> TDX guest.

I finally got an intel-eaglestream-spr machine as host and built a
tdx guest to give it a shot; the kexec reboot is working very well,
while the kdump kernel always fails to boot up. I only built the
kernel and installed it on the tdx guest.
--
[1.422500] Run /init as init process
[1.423073] Failed to execute /init (error -2)
[1.423759] Run /sbin/init as init process
[1.424370] Run /etc/init as init process
[1.424969] Run /bin/init as init process
[1.425588] Run /bin/sh as init process
[1.426150] Kernel panic - not syncing: No working init found.  Try passing 
init= option to kernel. See Linux Documentation/admin-guide/init.rst for 
guidance.
[1.428122] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 
6.7.0-rc1-00014-gbdba31ba3cec #3
[1.429232] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 
2/2/2022
[1.430328] Call Trace:
[1.430717]  <TASK>
[1.431041]  dump_stack_lvl+0x33/0x50
[1.431581]  panic+0x324/0x340
[1.432037]  ? __pfx_kernel_init+0x10/0x10
[1.432629]  kernel_init+0x174/0x1c0
[1.433149]  ret_from_fork+0x2d/0x50
[1.433690]  ? __pfx_kernel_init+0x10/0x10
[1.434277]  ret_from_fork_asm+0x1b/0x30
[1.434850]  </TASK>
[1.435345] Kernel Offset: disabled
[1.439216] Rebooting in 10 seconds..
qemu-kvm: cpus are not resettable, terminating
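
One hint for whoever picks this up: the "error -2" above is -ENOENT ("No
such file or directory"), which suggests the kdump initramfs contains no
/init at all rather than a broken one. A two-line check of the value:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* ENOENT == 2, so kernel_init's "error -2" means /init was not found */
    printf("-%d is -ENOENT: %s\n", ENOENT, strerror(ENOENT));
    return 0;
}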




Re: [PATCH] kexec: ppc64: print help to stdout instead of stderr

2023-11-16 Thread Simon Horman
On Thu, Nov 16, 2023 at 02:11:37PM +0530, Aditya Gupta wrote:
> Currently 'kexec --help' on powerpc64 prints the generic help/usage to
> stdout, and the powerpc64-specific options to stderr.
> 
> That is, if the stdout of 'kexec --help' is redirected to some file,
> some of the help options will not be redirected, and are instead printed on
> the terminal/stderr:
> 
> [root@machine kexec-tools]# kexec --help > /tmp/out
>  --command-line= command line to append.
>  --append= same as --command-line.
>  --ramdisk= Initial RAM disk.
>  --initrd= same as --ramdisk.
>  --devicetreeblob= Specify device tree blob file.
>  Not applicable while using --kexec-file-syscall.
>  --dtb= same as --devicetreeblob.
> elf support is still broken
>  --elf64-core-headers Prepare core headers in ELF64 format
>  --dt-no-old-root Do not reuse old kernel root= param.
>   while creating flatten device tree.
> 
> Fix this inconsistency by writing powerpc64-specific options to stdout,
> similar to the generic 'kexec --help'.
> 
> With the proposed changes, it is like this (nothing printed to stderr):
> 
> [root@machine kexec-tools]# ./build/sbin/kexec --help > /tmp/out
> 
> Reported-by: Srikar Dronamraju 
> Signed-off-by: Aditya Gupta 

Thanks Aditya,

applied.



[PATCH] kexec: ppc64: print help to stdout instead of stderr

2023-11-16 Thread Aditya Gupta
Currently 'kexec --help' on powerpc64 prints the generic help/usage to
stdout, and the powerpc64-specific options to stderr.

That is, if the stdout of 'kexec --help' is redirected to some file,
some of the help options will not be redirected, and are instead printed on
the terminal/stderr:

[root@machine kexec-tools]# kexec --help > /tmp/out
 --command-line= command line to append.
 --append= same as --command-line.
 --ramdisk= Initial RAM disk.
 --initrd= same as --ramdisk.
 --devicetreeblob= Specify device tree blob file.
 Not applicable while using --kexec-file-syscall.
 --dtb= same as --devicetreeblob.
elf support is still broken
 --elf64-core-headers Prepare core headers in ELF64 format
 --dt-no-old-root Do not reuse old kernel root= param.
  while creating flatten device tree.

Fix this inconsistency by writing powerpc64-specific options to stdout,
similar to the generic 'kexec --help'.

With the proposed changes, it is like this (nothing printed to stderr):

[root@machine kexec-tools]# ./build/sbin/kexec --help > /tmp/out

Reported-by: Srikar Dronamraju 
Signed-off-by: Aditya Gupta 
---
 kexec/arch/ppc64/kexec-elf-ppc64.c    | 22 +++++++++++-----------
 kexec/arch/ppc64/kexec-ppc64.c        |  6 +++---
 kexec/arch/ppc64/kexec-zImage-ppc64.c |  2 +-
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/kexec/arch/ppc64/kexec-elf-ppc64.c b/kexec/arch/ppc64/kexec-elf-ppc64.c
index 01d045f..bdcfd20 100644
--- a/kexec/arch/ppc64/kexec-elf-ppc64.c
+++ b/kexec/arch/ppc64/kexec-elf-ppc64.c
@@ -482,15 +482,15 @@ int elf_ppc64_load(int argc, char **argv, const char *buf, off_t len,
 
 void elf_ppc64_usage(void)
 {
-   fprintf(stderr, " --command-line= command line to append.\n");
-   fprintf(stderr, " --append= same as --command-line.\n");
-   fprintf(stderr, " --ramdisk= Initial RAM disk.\n");
-   fprintf(stderr, " --initrd= same as --ramdisk.\n");
-   fprintf(stderr, " --devicetreeblob= Specify device tree blob file.\n");
-   fprintf(stderr, " ");
-   fprintf(stderr, "Not applicable while using --kexec-file-syscall.\n");
-   fprintf(stderr, " --reuse-cmdline Use kernel command line from running system.\n");
-   fprintf(stderr, " --dtb= same as --devicetreeblob.\n");
-
-   fprintf(stderr, "elf support is still broken\n");
+   printf(" --command-line= command line to append.\n");
+   printf(" --append= same as --command-line.\n");
+   printf(" --ramdisk= Initial RAM disk.\n");
+   printf(" --initrd= same as --ramdisk.\n");
+   printf(" --devicetreeblob= Specify device tree blob file.\n");
+   printf(" ");
+   printf("Not applicable while using --kexec-file-syscall.\n");
+   printf(" --reuse-cmdline Use kernel command line from running system.\n");
+   printf(" --dtb= same as --devicetreeblob.\n");
+
+   printf("elf support is still broken\n");
 }
diff --git a/kexec/arch/ppc64/kexec-ppc64.c b/kexec/arch/ppc64/kexec-ppc64.c
index 611809f..19f17cb 100644
--- a/kexec/arch/ppc64/kexec-ppc64.c
+++ b/kexec/arch/ppc64/kexec-ppc64.c
@@ -910,9 +910,9 @@ int file_types = sizeof(file_type) / sizeof(file_type[0]);
 
 void arch_usage(void)
 {
-   fprintf(stderr, " --elf64-core-headers Prepare core headers in ELF64 format\n");
-   fprintf(stderr, " --dt-no-old-root Do not reuse old kernel root= param.\n" \
-           "  while creating flatten device tree.\n");
+   printf(" --elf64-core-headers Prepare core headers in ELF64 format\n");
+   printf(" --dt-no-old-root Do not reuse old kernel root= param.\n"
+          "  while creating flatten device tree.\n");
 }
 
 struct arch_options_t arch_options = {
diff --git a/kexec/arch/ppc64/kexec-zImage-ppc64.c b/kexec/arch/ppc64/kexec-zImage-ppc64.c
index e946205..7f45751 100644
--- a/kexec/arch/ppc64/kexec-zImage-ppc64.c
+++ b/kexec/arch/ppc64/kexec-zImage-ppc64.c
@@ -180,5 +180,5 @@ int zImage_ppc64_load(FILE *file, int UNUSED(argc), char **UNUSED(argv),
 
 void zImage_ppc64_usage(void)
 {
-   fprintf(stderr, "zImage support is still broken\n");
+   printf("zImage support is still broken\n");
 }
-- 
2.39.1
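
As a minimal standalone illustration of the stream behavior this patch
corrects (a sketch, not kexec-tools code): built with a C compiler and run
as './demo > /tmp/out', the first line lands in /tmp/out while the second
still reaches the terminal.

#include <stdio.h>

int main(void)
{
    printf("usage text on stdout: follows '> /tmp/out' redirection\n");
    fprintf(stderr, "usage text on stderr: still appears on the terminal\n");
    return 0;
}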

