Re: [PLUG] VirtualBox Problems

2022-02-11 Thread John Jason Jordan
I discovered that I can export from the old VirtualBox to OVF format
just by manually changing the name of the file it will create from .OVA
to .OVF. And the new VirtualBox on the Latitude will import it, except
that at the end the import fails:

VirtualBox Error
Failed to import appliance ... Windows 10.ovf
Detail
Result code NS_ERROR_INVALID_ARG (0x80070057)

I tried it both ways - exporting as OVA and exporting as OVF, and the
new VirtualBox will try to import either one, but at the end it always
gives the above error message.

I also tried copying the Windows 10.vdi file to ~/VirtualBox/VDI on
the new computer (where I had to create the folder first), but the new
VirtualBox does not see the machine. I also tried putting it in
~/VirtualBox VMs, which is the default folder that the installer
created on the new computer, but it still is not visible.
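
From what I've pieced together so far (and I could be wrong), copying a
.vdi by itself doesn't register a machine; VirtualBox only lists VMs that
have a machine definition registered in its config. Something like the
following VBoxManage sketch would supposedly attach a copied disk - the
VM name, OS type, and path below are just placeholders:

  VBoxManage createvm --name "Windows 10" --ostype Windows10_64 --register
  VBoxManage storagectl "Windows 10" --name SATA --add sata
  VBoxManage storageattach "Windows 10" --storagectl SATA \
    --port 0 --device 0 --type hdd \
    --medium "$HOME/VirtualBox VMs/Windows 10/Windows 10.vdi"

The GUI equivalent, if I understand it, is Machine -> New and then
choosing an existing virtual hard disk file instead of creating one.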

In the end I decided to just reinstall Windows 10 in VirtualBox on the
new computer from the .ISO that I still have. It took a couple of hours
of swearing, but I finally got it installed, except that when I launch
it I get a screen full of little squares instead of a Windows 10 screen.
In VirtualBox the Windows 10 icon appears, and the Settings shown to the
right look correct, but I can't figure out how to fix the video problem.

TomasK  dijo:
>So, you need OVF to import it?
>Try export Virtual Appliance from menu then select one of the OVF
>formats.

The format is not selectable, but (see above) the name of the file it
will create appears in a box below, and you can just edit the extension
to .ova or .ovf.
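
It looks like the same trick works from the command line, where the
extension of the output file picks the container format - untested here,
and "Windows 10" is just whatever name VirtualBox shows for the machine:

  VBoxManage export "Windows 10" -o /tmp/Windows10.ova
  VBoxManage export "Windows 10" -o /tmp/Windows10.ovf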

>I do not bother - just:
>  rsync -a "VirtualBox VMs" .../home/$USER/
>  rsync -a .config/VirtualBox .../home/$USER/.config/
>
>Then start your new virtualbox and you should see everything as it used
>to be.

I don't understand the rsync commands above. The .vdi files are on one
computer and they need to be copied to a new computer, and the
computers are not directly connected. It is faster and easier just to
copy files from the old computer to a USB drive and plug the drive into
the new computer. And I already did that on the new computer, both to
~/.VirtualBox/VDI (which I had to create) and to ~/VirtualBox VMs
(created by the installer of 6.1.26).
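
If I'm reading Tomas's rsync commands right, the same idea works with the
USB drive in the middle; assuming it mounts at /media/usb (a guess -
adjust to wherever yours actually mounts), something like:

  rsync -a ~/"VirtualBox VMs" ~/.config/VirtualBox /media/usb/   # on the old computer
  rsync -a "/media/usb/VirtualBox VMs" ~/                        # on the new computer
  rsync -a /media/usb/VirtualBox ~/.config/                      # on the new computer

That would carry over the machine definitions in ~/.config/VirtualBox
along with the disk images, which is apparently what a bare .vdi copy is
missing.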

Tomorrow I'm going to scour the VirtualBox forums hoping to find step by
step 'for dummies' instructions for how to install Windows 10 from an
ISO.

I should add that this is not a crucial matter. My old computer has
Windows 2000, Windows XP and Windows 10, and none of them have been
run for at least a couple of years; Windows 2000 for at least five
years. If I had any brains I'd just forget about it.

I should add that the old computer has Xubuntu 20.04.2 with VirtualBox
6.1.16 and the new computer has Xubuntu 21.10 with VirtualBox 6.1.26.


Re: [PLUG] VirtualBox Problems

2022-02-11 Thread TomasK
So, you need OVF to import it?
Try export Virtual Appliance from menu then select one of the OVF
formats.

I do not bother - just:
  rsync -a "VirtualBox VMs" .../home/$USER/
  rsync -a .config/VirtualBox .../home/$USER/.config/

Then start your new virtualbox and you should see everything as it used
to be.

That being said - I had Windows deactivated when it detected new CPU.
So, I keep it pinned down in the config. That is different story
though.

-Tomas

On Thu, 2022-02-10 at 22:43 -0800, John Jason Jordan wrote:
> I have had VirtualBox on my Lenovo laptop and its predecessors for many
> years, and it always just works. Now that I have a new computer I
> decided to install it there also, and copy the VDI file for Windows 10
> over there also (I have multiple licenses). In the past this always
> worked when I needed to make a fresh install.
> 
> Installing VirtualBox on the new computer was no problem, but when I
> tried to copy the VDI file to the new installation it will only let me
> import OVF files, which are apparently an exported version of your
> existing VDI files. After much muttering I swiveled around to my Lenovo
> laptop to export the VDI file, only to discover that VirtualBox was not
> installed. All the VDI files are still there in ~/.VirtualBox, but the
> program just ain't there.
> 
> I did not uninstall it. Something else uninstalled it, but what? Why?
> How?
> 
> I opened Synaptic and installed it on the Lenovo, and when I opened
> VirtualBox there were all my virtual machines, including Windows 10.
> The reinstalled program found the machines in ~/.VirtualBox. But there
> is yet another problem. The version that I just installed on the Lenovo
> laptop will only export in OVA format, and the version on the new
> computer will import only OVF files. And both versions of VirtualBox
> control the file manager so if it wants an OVF file that's all that it
> allows to appear in the folder.
> 
> More putzing around. :(


Re: [PLUG] Logical volumes: use disk or partition?

2022-02-11 Thread Michael Ewan
By disappear I meant no longer visible; you have to run vgscan and
vgchange for the logical volumes to become visible.  Some distros do this
automatically for you.

On Fri, Feb 11, 2022 at 12:20 PM Rich Shepard 
wrote:

> On Fri, 11 Feb 2022, Michael Ewan wrote:
>
> > The steps involved are pvcreate, vgscan, vgcreate, and lvcreate. The
> > pvcreate operation labels the disks for use in a volume group. Do not use
> > the UUID. The vgscan operation finds the pv labels. The vgcreate operation
> > takes those disks and adds them to a volume group.
> > So...
> > pvcreate /dev/sdc1 /dev/sdd1
> > vgscan
> > vgcreate vg01 /dev/sdc /dev/sdd
> >
> > Now the fun starts, there are many options in lvcreate for striping,
> > mirroring, size, etc.  Read the man page.
> >
> > A note about pvcreate, you can use existing partitions with pvcreate in
> > order to stripe some spare space into a bigger logical volume.
> >
> > Also note that the disks and logical volumes will disappear after reboot.
> > Your OS may already have the steps built in to activate the volume groups,
> > but in case not, here goes:
> > vgscan
> > vgchange -a y
>
> Michael,
>
> I read about the four steps and your explanation answers the question I
> asked about the hdds needing a partition. Thank you.
>
> The LV will be two disks in the external MediaSonic Probox which is not
> bootable so they should not disappear.
>
> Regards,
>
> Rich
>


Re: [PLUG] Logical volumes: use disk or partition?

2022-02-11 Thread Rich Shepard

On Fri, 11 Feb 2022, Michael Ewan wrote:


The steps involved are pvcreate, vgscan, vgcreate, and lvcreate. The
pvcreate operation labels the disks for use in a volume group. Do not use
the UUID. The vgscan operation finds the pv labels. The vgcreate operation
takes those disks and adds them to a volume group.
So...
pvcreate /dev/sdc1 /dev/sdd1
vgscan
vgcreate vg01 /dev/sdc /dev/sdd

Now the fun starts, there are many options in lvcreate for striping,
mirroring, size, etc.  Read the man page.

A note about pvcreate, you can use existing partitions with pvcreate in
order to stripe some spare space into a bigger logical volume.

Also note that the disks and logical volumes will disappear after reboot.
Your OS may already have the steps built in to activate the volume groups,
but in case not, here goes:
vgscan
vgchange -a y


Michael,

I read about the four steps and your explanation answers the question I
asked about the hdds needing a partition. Thank you.

The LV will be two disks in the external MediaSonic Probox which is not
bootable so they should not disappear.

Regards,

Rich


Re: [PLUG] Logical volumes: use disk or partition?

2022-02-11 Thread Michael Ewan
The steps involved are pvcreate, vgscan, vgcreate, and lvcreate.  The
pvcreate operation labels the disks for use in a volume group.  Do not use
the UUID.  The vgscan operation finds the pv labels.  The vgcreate
operation takes those disks and adds them to a volume group.
So...
pvcreate /dev/sdc1 /dev/sdd1
vgscan
vgcreate vg01 /dev/sdc /dev/sdd

Now the fun starts, there are many options in lvcreate for striping,
mirroring, size, etc.  Read the man page.
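
Purely as an illustration of the simplest case (the LV name and the
one-big-linear-volume layout below are an example, not a recommendation):
lvcreate -n data -l 100%FREE vg01
mkfs.ext4 /dev/vg01/data
mount /dev/vg01/data /mnt/data   # pick your own mount point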

A note about pvcreate, you can use existing partitions with pvcreate in
order to stripe some spare space into a bigger logical volume.

Also note that the disks and logical volumes will disappear after reboot.
Your OS may already have the steps built in to activate the volume groups,
but in case not, here goes:
vgscan
vgchange -a y


On Fri, Feb 11, 2022 at 10:13 AM Rich Shepard 
wrote:

> Two of the disks in the MediaSonic Probox have been partitioned and had ext4
> installed but are otherwise empty of data. Rather than having two separate
> disks for extra external storage I want to build a single LV.
>
> The LVM docs I've read use partitions on a single hdd. I did not explicitly
> make a partition on each drive for the entire size; lsblk now shows them as:
> sdc  8:32   0   1.8T  0 disk  /mnt/data2
> sdd  8:48   0   1.8T  0 disk  /mnt/data3
> and the two UUIDs are:
> /dev/sdc: UUID="8f53ff0f-08cd-4bc1-888d-f6a38de11973" TYPE="ext4"
> /dev/sdd: UUID="08b2a158-a57b-4833-b6d8-9b69dfcf673b" TYPE="ext4"
>
> What steps should I take to use the UUIDs? Do I need to partition each drive
> so there's a /dev/sdc1 and /dev/sdd1, and use the partition UUID when
> creating the Volume Group from the two hdds, or do I use the disk UUIDs? Are
> there other differences? I'd like to do this correctly the first time.
>
> Rich
>
>


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Rich Shepard

On Fri, 11 Feb 2022, Michael Rasmussen wrote:


Perhaps you can read up on UUIDs? Two sources I used while reading this
thread were:

https://www.uuidtools.com/what-is-uuid
 and
https://en.wikipedia.org/wiki/Universally_unique_identifier


Thanks, Michael.

Rich


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Rich Shepard

On Fri, 11 Feb 2022, Galen Seitz wrote:


Wait! I think you are missing an important point. UUIDs are used for many
things on a modern Linux system. There will typically be *multiple* UUIDs
used for multiple purposes. Here is some trimmed output from an Ubuntu
system that has three physical disks. sda is the disk that contains the
root filesystem, along with some others. sdb and sdc are part of a raid1
array that contains a filesystem that is mounted under /backups


Galen,

Yes, I know there are different UUIDs and I have no idea now which to use
where/when.

Thanks for the longer explanation. I'll study it.


Whenever I have set up a raid1 array, it has been at distribution install
time, so the Ubuntu or CentOS tools have done it for me.  That said, I
think you should be able to create the raid array using the partition names
(e.g. /dev/sdxN, /dev/sdyM, etc.), and then find the array UUID using
mdadm --examine.  Once you have the array UUID, you can drop it into
mdadm.conf.  This should make your raid array independent of any sdxN
naming.  Then you can create a filesystem on mdx.  This will create
a filesystem UUID, which you then put into fstab in order to mount
the filesystem.


Okay. This answered a couple of my questions.


Note that if you were using LVM, there would also be UUIDs associated
with that as well.


Well, yes, I'm using LVM2 here.

Now mdadm is working on the two disks, /dev/sde and /dev/sdf, as they have
no partitions. The output of blkid is:
/dev/sda1: UUID="5796-13DB" TYPE="vfat" 
PARTUUID="b365f0f1-77a8-459b-926a-d0fd9c05b985"
/dev/sda2: UUID="b7a5a639-e9b5-43c9-b528-77d6bb592899" TYPE="swap" 
PARTUUID="625ed36c-e115-4c18-ac17-287d0c857aa6"
/dev/sda3: UUID="c2286937-c658-40ee-b73d-0c80fcaa2a6b" TYPE="ext4" 
PARTUUID="2f06e476-a8fc-471d-b396-422959d37bcd"
/dev/sdb1: UUID="1debabd0-e753-468d-b119-0e76a1e4e5df" TYPE="ext4" 
PARTUUID="d7922ff2-5cd8-43fa-824f-4459fc42ae53"
/dev/sdb2: UUID="98cb2b46-092e-4dc4-94cc-f4e54b946bae" TYPE="ext4" 
PARTUUID="1aee0055-0517-4cd4-8937-2bafea640f26"
/dev/sdb3: UUID="093ae060-fd8d-48d2-9f77-acc5dab0fc56" TYPE="ext4" 
PARTUUID="4d318c80-a3b5-4515-8cbe-40eb60e5da76"
/dev/sdd: UUID="08b2a158-a57b-4833-b6d8-9b69dfcf673b" TYPE="ext4"
/dev/sde: UUID="4b4d01a7-f5d3-af65-c47f-6a39752be335" 
UUID_SUB="7372c801-036f-17d9-2c73-0fe708eeba88" LABEL="salmo:0" TYPE="linux_raid_member"
/dev/sdf: UUID="4b4d01a7-f5d3-af65-c47f-6a39752be335" 
UUID_SUB="0d6921f9-b6fc-ad2d-d2cd-2f30a3766f19" LABEL="salmo:0" TYPE="linux_raid_member"
/dev/md0: UUID="0b4ee898-1506-4264-be91-f698cbdcda52" TYPE="ext4"
/dev/sdc: UUID="8f53ff0f-08cd-4bc1-888d-f6a38de11973" TYPE="ext4"

and I want to use /dev/sdc and /dev/sdd for the LV.

Thanks,

Rich


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Michael Rasmussen

On 2022-02-11 09:34, Rich Shepard wrote:

On Fri, 11 Feb 2022, Tomas Kuchta wrote:

The point about not using /dev/sd*, especially with external enclosures,
is that the device letter can change (not just once) during the array
build.


Tomas,

I'll kill the mdadm create process and use the two UUIDs instead. Do I
write:
mdadm --create /dev/md0 -l 1 -n 2 UUID1 UUID2

or /dev/UUID1 /dev/UUID2?


Perhaps you can read up on UUIDs? Two sources I used while reading this 
thread were:


https://www.uuidtools.com/what-is-uuid
  and
https://en.wikipedia.org/wiki/Universally_unique_identifier



---
  Michael Rasmussen, Portland Oregon
Be Appropriate && Follow Your Curiosity


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Galen Seitz

On 2/11/22 09:34, Rich Shepard wrote:

On Fri, 11 Feb 2022, Tomas Kuchta wrote:


The point about not using /dev/sd*, especially with external enclosures,
is that the device letter can change (not just once) during the array
build.


Tomas,

I'll kill the mdadm create process and use the two UUIDs instead. Do I
write:
mdadm --create /dev/md0 -l 1 -n 2 UUID1 UUID2

or /dev/UUID1 /dev/UUID2?


Wait!  I think you are missing an important point.  UUIDs are used for many
things on a modern Linux system.  There will typically be *multiple* UUIDs
used for multiple purposes.  Here is some trimmed output from an Ubuntu
system that has three physical disks.  sda is the disk that contains the
root filesystem, along with some others.  sdb and sdc are part of a raid1
array that contains a filesystem that is mounted under /backups.

galens@oz:~$ cat /etc/fstab
UUID=565ba7ed-0474-4601-ba4a-64b74d1ad2bd  /         ext4  errors=remount-ro  0  1
UUID=37c7b397-6ceb-4a83-8120-012bdb3e3260  /backups  ext4  defaults           0  2

Note that the UUIDs used in fstab are *filesystem* UUIDs.  These are different
from an mdraid UUID, a partition UUID, or other UUIDs.  The blkid command gives
the bigger picture.

galens@oz:~$ sudo blkid
/dev/sda1: UUID="565ba7ed-0474-4601-ba4a-64b74d1ad2bd" TYPE="ext4" 
PARTUUID="58d10f2d-01"
/dev/sda5: UUID="93c54ee7-e832-4b77-8dfd-920eb740dfdb" TYPE="swap" 
PARTUUID="58d10f2d-05"
/dev/sdb1: UUID="716039ff-eb41-9a09-9dd0-30c48a77d7dd" UUID_SUB="3e3da9b0-a476-3eec-4d7b-93936071817e" 
LABEL="oz:0" TYPE="linux_raid_member" PARTLABEL="p1" PARTUUID="1ba2b9b0-81c8-44b6-9399-a7ac5a44e80c"
/dev/sdc1: UUID="716039ff-eb41-9a09-9dd0-30c48a77d7dd" UUID_SUB="0249e1ef-2337-1437-39ba-842170377019" 
LABEL="oz:0" TYPE="linux_raid_member" PARTLABEL="p1" PARTUUID="ecc07d3d-9a28-44e1-b1a0-3be1c60d7f8c"
/dev/md0: UUID="37c7b397-6ceb-4a83-8120-012bdb3e3260" TYPE="ext4"

See how sda1 has both a filesystem UUID and a partition UUID.  Similarly, md0
has a filesystem UUID, as it contains a filesystem, but no partition UUID.

sdb1 and sdc1 are both used to create md0.  They are both (GPT) partitions on
physical disks, so they each have a unique partition UUID.  UUID_SUB is a
device UUID and is unique for both sdb1 and sdc1.  This leaves UUID="7160...
This is the mdraid array UUID.  Note how it is the same for both sdb1 and
sdc1.  This is how mdadm is able to group the appropriate disks together.
See mdadm.conf:

galens@oz:~$ cat /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=716039ff:eb419a09:9dd030c4:8a77d7dd

When you initially create the mdraid array, I *think* that these UUIDs are
automatically created.  You can check each disk partition like this:

galens@oz:~$ sudo mdadm --examine /dev/sdb1 | grep UUID
 Array UUID : 716039ff:eb419a09:9dd030c4:8a77d7dd
Device UUID : 3e3da9b0:a4763eec:4d7b9393:6071817e


Whenever I have set up a raid1 array, it has been at distribution install
time, so the Ubuntu or CentOS tools have done it for me.  That said, I
think you should be able to create the raid array using the partition names
(e.g. /dev/sdxN, /dev/sdyM, etc.), and then find the array UUID using
mdadm --examine.  Once you have the array UUID, you can drop it into
mdadm.conf.  This should make your raid array independent of any sdxN
naming.  Then you can create a filesystem on mdx.  This will create
a filesystem UUID, which you then put into fstab in order to mount
the filesystem.
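
Roughly, and treating the device names below as placeholders rather than
a tested recipe, the whole sequence would be something like:

galens@oz:~$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdx1 /dev/sdy1
galens@oz:~$ sudo mdadm --examine /dev/sdx1 | grep 'Array UUID'
galens@oz:~$ sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
galens@oz:~$ sudo mkfs.ext4 /dev/md0
galens@oz:~$ sudo blkid /dev/md0    # this filesystem UUID is the one for fstab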

Note that if you were using LVM, there would also be UUIDs associated
with that as well.


galen
--
Galen Seitz
gal...@seitzassoc.com


[PLUG] Logical volumes: use disk or partition?

2022-02-11 Thread Rich Shepard

Two of the disks in the MediaSonic Probox have been partitioned and had ext4
installed but are otherwise empty of data. Rather than having two separate
disks for extra external storage I want to build a single LV.

The LVM docs I've read use partitions on a single hdd. I did not explicitly
make a partition on each drive for the entire size; lsblk now shows them as:
sdc  8:32   0   1.8T  0 disk  /mnt/data2
sdd  8:48   0   1.8T  0 disk  /mnt/data3
and the two UUIDs are:
/dev/sdc: UUID="8f53ff0f-08cd-4bc1-888d-f6a38de11973" TYPE="ext4"
/dev/sdd: UUID="08b2a158-a57b-4833-b6d8-9b69dfcf673b" TYPE="ext4"

What steps should I take to use the UUIDs? Do I need to partition each drive
so there's a /dev/sdc1 and /dev/sdd1, and use the partition UUID when
creating the Volume Group from the two hdds, or do I use the disk UUIDs? Are
there other differences? I'd like to do this correctly the first time.

Rich



Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Rich Shepard

On Fri, 11 Feb 2022, Bill Barry wrote:


You should also be aware of the useful tool  blkid which lists your block
devices and their uuids.


Bill,

Ah, yes. Thanks. I forgot about that one.

Regards,

Rich


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Rich Shepard

On Fri, 11 Feb 2022, Tomas Kuchta wrote:


The point about not using /dev/sd*, especially with external enclosures,
is that the device letter can change (not just once) during the array
build.


Tomas,

I'll kill the mdadm create process and use the two UUIDs instead. Do I
write:
mdadm --create /dev/md0 -l 1 -n 2 UUID1 UUID2

or /dev/UUID1 /dev/UUID2?

Rich


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Bill Barry
On Fri, Feb 11, 2022 at 9:50 AM Tomas Kuchta
 wrote:
>
> On Fri, Feb 11, 2022, 08:39 Rich Shepard  wrote:
>
> > On Thu, 10 Feb 2022, Galen Seitz wrote:
> >
> > > Using UUIDs should prevent much of this grief. For example, here's a line
> > > from mdadm.conf on one of my machines:
> >
> > Galen/Tomas:
> >
> > Okay. I've six mdadm.conf files here, including /etc/mdadm.conf which is
> > all
> > commented out. Since mdadm has been working on creating the raid1 for about
> > 17.5 hours now there may be content in there when it's finished.
> >
> > I'll learn more about using UUIDs in fstab as well as mdadm and use them. I
> > have a record of them for the hdds in the Probox and can get the ones for
> > the SSD and HDD in the desktop from fdisk.
> > .
>
>
> The point about not using /dev/sd*, especially with external enclosures, is
> that the device letter can change (not just once) during the array build.
>
> If you want to be sure that your storage works, just go to the beginning and
> use uuids to build and use the array. That is my advice anyway.
>
> There is not much to learn about uuids, they are just a disk or partition
> identifier, like /dev/sd*
>
> Note: uuids for the disk and its partitions can change when creating
> partitioning. They are assigned by fdisk/parted/etc. Essentially, you
> manage uuids yourself.
>
> Tomas

You should also be aware of the useful tool  blkid which lists your
block devices and their uuids.

Bill


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Tomas Kuchta
On Fri, Feb 11, 2022, 08:39 Rich Shepard  wrote:

> On Thu, 10 Feb 2022, Galen Seitz wrote:
>
> > Using UUIDs should prevent much of this grief. For example, here's a line
> > from mdadm.conf on one of my machines:
>
> Galen/Tomas:
>
> Okay. I've six mdadm.conf files here, including /etc/mdadm.conf which is
> all
> commented out. Since mdadm has been working on creating the raid1 for about
> 17.5 hours now there may be content in there when it's finished.
>
> I'll learn more about using UUIDs in fstab as well as mdadm and use them. I
> have a record of them for the hdds in the Probox and can get the ones for
> the SSD and HDD in the desktop from fdisk.
> .


The point about not using /dev/sd*, especially with external enclosures, is
that the device letter can change (not just once) during the array build.

If you want to be sure that your storage works, just go to the beginning and
use uuids to build and use the array. That is my advice anyway.

There is not much to learn about uuids: they are just a disk or partition
identifier, like /dev/sd*.

Note: uuids for a disk and its partitions can change when you repartition.
They are assigned by fdisk/parted/etc. Essentially, you manage uuids
yourself.
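
To see what is currently assigned, blkid prints them, and udev exposes
the same information as stable paths - purely as an illustration:

  blkid /dev/sdc /dev/sdd
  ls -l /dev/disk/by-uuid/   # filesystem UUIDs -> current sdX names
  ls -l /dev/disk/by-id/     # names tied to the drive hardware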

Tomas


Re: [PLUG] Remove raid1 (/dev/md0) and its disks [DONE]

2022-02-11 Thread Rich Shepard

On Thu, 10 Feb 2022, Galen Seitz wrote:


Using UUIDs should prevent much of this grief. For example, here's a line
from mdadm.conf on one of my machines:


Galen/Tomas:

Okay. I've six mdadm.conf files here, including /etc/mdadm.conf which is all
commented out. Since mdadm has been working on creating the raid1 for about
17.5 hours now there may be content in there when it's finished.

I'll learn more about using UUIDs in fstab as well as mdadm and use them. I
have a record of them for the hdds in the Probox and can get the ones for
the SSD and HDD in the desktop from fdisk.

Thanks,

Rich