FreeBSD 11.3-BETA3 Now Available

2019-06-07 Thread Glen Barber
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

The third BETA build of the 11.3-RELEASE release cycle is now available.

Installation images are available for:

o 11.3-BETA3 amd64 GENERIC
o 11.3-BETA3 i386 GENERIC
o 11.3-BETA3 powerpc GENERIC
o 11.3-BETA3 powerpc64 GENERIC64
o 11.3-BETA3 sparc64 GENERIC
o 11.3-BETA3 armv6 BANANAPI
o 11.3-BETA3 armv6 BEAGLEBONE
o 11.3-BETA3 armv6 CUBIEBOARD
o 11.3-BETA3 armv6 CUBIEBOARD2
o 11.3-BETA3 armv6 CUBOX-HUMMINGBOARD
o 11.3-BETA3 armv6 RPI-B
o 11.3-BETA3 armv6 RPI2
o 11.3-BETA3 armv6 PANDABOARD
o 11.3-BETA3 armv6 WANDBOARD
o 11.3-BETA3 aarch64 GENERIC

Note regarding arm SD card images: as a convenience for those without
console access to the system, a freebsd user with a password of
freebsd is available by default for ssh(1) access.  Additionally,
the root user password is set to root.  It is strongly recommended
to change the passwords for both users after gaining access to the
system.

Installer images and memory stick images are available here:

https://download.freebsd.org/ftp/releases/ISO-IMAGES/11.3/

The image checksums follow at the end of this e-mail.
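One way to verify a downloaded image against the published checksums is
with sha256sum(1) (sha256(1) on FreeBSD).  A minimal sketch; the filename
below is only an illustrative stand-in, since the real name depends on
which image you download:

```shell
# Stand-in for a downloaded installer image; a real name would look like
# FreeBSD-11.3-BETA3-amd64-disc1.iso (illustrative only).
printf 'hello' > image.iso

# Compute the SHA256 digest and compare it by eye (or with -c and a
# checksum file) against the value published in the announcement e-mail.
sha256sum image.iso
```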

If you notice problems you can report them through the Bugzilla PR
system or on the -stable mailing list.

If you would like to use SVN to do a source based update of an existing
system, use the "stable/11" branch.

A summary of changes since 11.3-BETA2 includes:

o Support for the IPV6_NEXTHOP option has been restored.

o Warnings for IPsec algorithms deprecated in RFC 8221 have been added.

o A fix for FC-Tape bugs.

o A fix in jail_getid(3) for jail(8) ID 0.

o Warnings for weaker geli(4) algorithms have been added.

o Various updates and fixes in libarchive(3).

o A fix in cxgbe(4) to address a connection hang when running iozone
  over an NFS-mounted share.

o A fix to the zfs(8) 'userspace' subcommand where all unresolved UIDs
  after the first were ignored.

o An apm(8) fix to correct battery life calculation.

o The default size of Vagrant images has been increased.

o Reporting on deprecated features for all major FreeBSD versions has
  been merged.

A list of changes since 11.2-RELEASE is available in the stable/11
release notes:

https://www.freebsd.org/relnotes/11-STABLE/relnotes/article.html

Please note that the release notes page is not yet complete; it will be
updated on an ongoing basis as the 11.3-RELEASE cycle progresses.

=== Virtual Machine Disk Images ===

VM disk images are available for the amd64, i386, and aarch64
architectures.  Disk images may be downloaded from the following URL
(or any of the FreeBSD FTP mirrors):

https://download.freebsd.org/ftp/releases/VM-IMAGES/11.3-BETA3/

The partition layout is:

~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)

The disk images are available in QCOW2, VHD, VMDK, and raw disk image
formats.  The compressed image download size is approximately 135 MB for
amd64 and 165 MB for i386, decompressing to a 21 GB sparse image.
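The published disk images are xz(1)-compressed, and decompressing is a
single command.  A minimal sketch, using a small stand-in file because
the real image names are release-specific (the commented filename is
illustrative only):

```shell
# A real invocation would look like (illustrative filename):
#   xz -d FreeBSD-11.3-BETA3-amd64.raw.xz
# Demonstrated here on a small stand-in file:
printf 'disk image bytes' > demo.raw
xz demo.raw           # compresses to demo.raw.xz and removes demo.raw
xz -d demo.raw.xz     # decompresses back to demo.raw
cat demo.raw
```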

Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
loader file is needed for qemu-system-aarch64 to be able to boot the
virtual machine images.  See this page for more information:

https://wiki.freebsd.org/arm64/QEMU

To boot the VM image, run:

% qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt \
    -bios QEMU_EFI.fd -serial telnet::,server -nographic \
    -drive if=none,file=VMDISK,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -device virtio-net-device,netdev=net0 \
    -netdev user,id=net0

Be sure to replace "VMDISK" with the path to the virtual machine image.

=== Amazon EC2 AMI Images ===

FreeBSD/amd64 EC2 AMIs are available in the following regions:

  eu-north-1 region: ami-07fd27786377bc6fd
  ap-south-1 region: ami-01f208a9f001a22e2
  eu-west-3 region: ami-085439f21755d95a4
  eu-west-2 region: ami-0993e4ba21a62262d
  eu-west-1 region: ami-0f2f6a13b79dd804b
  ap-northeast-2 region: ami-07164fb9df8db807f
  ap-northeast-1 region: ami-0c1b2bbd0b1cced6e
  sa-east-1 region: ami-0d51b7b8c6a2f8a57
  ca-central-1 region: ami-054c4785980cbfbb4
  ap-southeast-1 region: ami-07cbfed103b47434a
  ap-southeast-2 region: ami-06e7f111242f4a03e
  eu-central-1 region: ami-05b82446f270f2c7e
  us-east-1 region: ami-0b3ea59d3140af471
  us-east-2 region: ami-0b59f21c8a159bf51
  us-west-1 region: ami-0a6d215b372bd8a86
  us-west-2 region: ami-0861887499c7e29c3

=== Vagrant Images ===

FreeBSD/amd64 images are available on the Hashicorp Atlas site, and can
be installed by running:

% vagrant init freebsd/FreeBSD-11.3-BETA3
% vagrant up

=== Upgrading ===

The freebsd-update(8) utility supports binary upgrades of amd64 and i386
systems running earlier FreeBSD releases.  Such systems can upgrade as
follows:

# freebsd-update upgrade -r 11.3-BETA3

During this process, freebsd-update(8) may ask the user to help by
merging some configuration files or by confirming that the automatically
performed merging was done correctly.

Re: ZFS...

2019-06-07 Thread Miroslav Lachman

Michelle Sullivan wrote on 2019/06/07 01:49:
>> Yes but you seem to have done this with ZFS too, just not in this
>> particularly bad case.
>
> There is no r-studio for zfs or I would have turned to it as soon as
> this issue hit.
>
> So as an update, this Company: http://www.klennet.com/ produce a ZFS
> recovery tool: https://www.klennet.com/zfs-recovery/default.aspx and
> following several code changes due to my case being an 'edge case' the
> entire volume (including the zvol - which I previously recovered as it
> wasn't suffering from the metadata corruption) and all 34 million files
> is being recovered intact with the entire directory structure.  Its only
> drawback is it's a windows only tool, so I built 'windows on a stick'
> and it's running from that.  The only thing I had to do was physically
> pull the 'spare' out as the spare already had data on it from being
> previously swapped in and it confused the hell out of the algorithm that
> detects the drive order.


It's really good to know that a tool exists which can recover files
from broken ZFS.  Thank you for sharing your very long story, and I am
glad you recovered all your data.
It would be very nice to have a similar tool running on FreeBSD... maybe
it is a good topic for a future Google Summer of Code project.


Miroslav Lachman
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS...

2019-06-07 Thread Steven Hartland
Great to hear you got your data back even after all the terrible luck you
suffered!

  Regards
  Steve

On Fri, 7 Jun 2019 at 00:49, Michelle Sullivan  wrote:

> Michelle Sullivan wrote:
> >> On 02 May 2019, at 03:39, Steven Hartland wrote:
> >>
> >>
> >>
> >>> On 01/05/2019 15:53, Michelle Sullivan wrote:
> >>> Paul Mather wrote:
> > On Apr 30, 2019, at 11:17 PM, Michelle Sullivan wrote:
> >
> > Been there done that though with ext2 rather than UFS..  still got
> > all my data back... even though it was a nightmare..
> 
>  Is that an implication that had all your data been on UFS (or ext2:)
> this time around you would have got it all back?  (I've got that impression
> through this thread from things you've written.) That sort of makes it
> sound like UFS is bulletproof to me.
> >>> Its definitely not (and far from it) bullet proof - however when the
> >>> data on disk is not corrupt I have managed to recover it - even if it has
> >>> been a nightmare - no structure - all files in lost+found etc... or even
> >>> resorting to r-studio in the even of lost raid information etc..
> >> Yes but you seem to have done this with ZFS too, just not in this
> >> particularly bad case.
> >>
> > There is no r-studio for zfs or I would have turned to it as soon as
> > this issue hit.
> >
> >
> >
> So as an update, this Company: http://www.klennet.com/ produce a ZFS
> recovery tool: https://www.klennet.com/zfs-recovery/default.aspx and
> following several code changes due to my case being an 'edge case' the
> entire volume (including the zvol - which I previously recovered as it
> wasn't suffering from the metadata corruption) and all 34 million files
> is being recovered intact with the entire directory structure.  Its only
> drawback is it's a windows only tool, so I built 'windows on a stick'
> and it's running from that.  The only thing I had to do was physically
> pull the 'spare' out as the spare already had data on it from being
> previously swapped in and it confused the hell out of the algorithm that
> detects the drive order.
>
> Regards,
>
> Michelle
>
> --
> Michelle Sullivan
> http://www.mhix.org/
>
>