Re: softraid encrypting discipline

2023-11-17 Thread Masanori Ogino
Hi,

On Friday, November 17th, 2023 at 08:26, D.A.  wrote:
> What encryption algorithm does softraid use?

According to the presentation titled "softraid(4) boot" by Stefan Sperling
at EuroBSDCon 2015, it is AES in XTS mode with 256-bit keys.

(You can check sys/dev/softraid_crypto.c in sys.tar.gz if desired.)
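For example, to see the cipher selection in the tree (assuming the source is
unpacked under /usr/src; the grep pattern is only illustrative):

  $ grep -n XTS /usr/src/sys/dev/softraid_crypto.c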

Best,
Masanori



Re: Softraid crypto metadata backup

2023-01-08 Thread Nathan Carruth
>On Sat, Jan 07, 2023 at 02:33:31PM +, Nathan Carruth wrote:
>>The way I see it, this depends on one's use case.
>>There certainly are cases where it is important to be able
>>to irrevocably destroy all data in an instant. But there are
>>also use cases where one is only interested in making sure
>>that the average person couldn’t access one’s data if one lost
>>one’s laptop/external drive.
>>
>>I still think that anyone with the second use case could benefit
>>from more documentation as I suggested, but I get the feeling
>>this opinion is in the distinct minority here.
>
>If you're part of that supposed minority: count me in. If it's true
>that the headers of encrypted disks on OpenBSD are set up in a similar
>way as on e.g. Linux, then it's actually a good idea to be able to
>have precise knowledge about how to back up that header on OBSD.

> From what I learned the preparation for that failure includes first a
>copy of the data on that encrypted disk to a second or - even better -
>a third encrypted one.
>
>But copying back a whole disk because on the original broken one it's
>just the header that went south might be an effort that is at least
>avoidable. And this is where part two of disaster preparation might be
>helpful: the so-called header backup.
>
>I don't know how often, if at all, such header corruptions happen on
>encrypted disks on OBSD, but on LUKS/cryptsetup encrypted disks on
>Linux this does not seem to be that unusual - from the LUKS FAQ:
>
>"By far the most questions on the cryptsetup mailing list are from
>  people that managed to damage the start of their LUKS partitions,
>  i.e. the LUKS header. In most cases, there is nothing that can be done
>  to help these poor souls recover their data. Make sure you understand
>  the problem and limitations imposed by the LUKS security model BEFORE
>  you face such a disaster! In particular, make sure you have a current
>  header backup before doing any potentially dangerous operations."
>
>https://gitlab.com/cryptsetup/cryptsetup/-/blob/main/FAQ.md
>
>And, just so no one gets me wrong: I don't expect anyone to
>create the documentation for me. Definitely not. But if disk header
>corruption can be a problem on OpenBSD, too, then this very
>possibility helps at least to understand the point the OP was trying
>to make when starting this thread.
>
>Regards,
>Wolfgang

I wasn't going to say anything more, but after reading this I figured
I ought to at least suggest an update to the documentation. Since I'm
definitely not qualified to document the details, all I can really do
is something like what is posted below (these should be diffs to the
current versions of src/share/man/man4/softraid.4 and
www/faq/faq14.html on CVS).

Thanks,
Nathan

For softraid.4:

@@ -270,0 +271,4 @@                                                             
                                 
+.Pp
+The CRYPTO discipline emphasizes confidentiality over integrity.
+In particular, corruption of the on-disk metadata will render all encrypted
+data permanently inaccessible.

For faq14.html:

@@ -835,0 +836,9 @@                                                             
                                 
+Note
+
+Decryption requires the use of on-disk metadata documented in the
+<a href="https://cvsweb.openbsd.org/src/sys/dev/softraidvar.h">source</a>.
+Corruption of this metadata can easily render all encrypted data
+permanently inaccessible.
+There is at present no automated way to back up this metadata.
+  



Re: Softraid crypto metadata backup

2023-01-08 Thread Wolfgang Pfeiffer

On Sat, Jan 07, 2023 at 02:33:31PM +, Nathan Carruth wrote:

The way I see it, this depends on one's use case. 
There certainly are cases where it is important to be able
to irrevocably destroy all data in an instant. But there are
also use cases where one is only interested in making sure
that the average person couldn’t access one’s data if one lost
one’s laptop/external drive.

I still think that anyone with the second use case could benefit
from more documentation as I suggested, but I get the feeling
this opinion is in the distinct minority here.


If you're part of that supposed minority: count me in. If it's true
that the headers of encrypted disks on OpenBSD are set up in a similar
way as on e.g. Linux, then it's actually a good idea to be able to
have precise knowledge about how to back up that header on OBSD.

I followed this thread but kept silent so far, simply because I
basically don't know much about how disk headers are organized on
OpenBSD.



So — thanks to everyone for the answers, I’m signing off
this question now.

Take care and stay secure,
Nathan


Nathan Carruth writes:

permanently and irrevocably destroy all data on your entire disk”.


This is a feature.


Can be a feature in some situations, yes. But that's just one part of
the story, if I understand correctly.


More so, it's the very point of an encrypted filesystem. If you
haven't planned for this failure scenario


The OP was trying to prepare exactly for this very scenario. That was
obviously the whole purpose of starting the thread.

From what I learned the preparation for that failure includes first a
copy of the data on that encrypted disk to a second or - even better -
a third encrypted one.

But copying back a whole disk because on the original broken one it's
just the header that went south might be an effort that is at least
avoidable. And this is where part two of disaster preparation might be
helpful: the so-called header backup.

I don't know how often, if at all, such header corruptions happen on
encrypted disks on OBSD, but on LUKS/cryptsetup encrypted disks on
Linux this does not seem to be that unusual - from the LUKS FAQ:

"By far the most questions on the cryptsetup mailing list are from
 people that managed to damage the start of their LUKS partitions,
 i.e. the LUKS header. In most cases, there is nothing that can be done
 to help these poor souls recover their data. Make sure you understand
 the problem and limitations imposed by the LUKS security model BEFORE
 you face such a disaster! In particular, make sure you have a current
 header backup before doing any potentially dangerous operations."

https://gitlab.com/cryptsetup/cryptsetup/-/blob/main/FAQ.md

And, just so no one gets me wrong: I don't expect anyone to
create the documentation for me. Definitely not. But if disk header
corruption can be a problem on OpenBSD, too, then this very
possibility helps at least to understand the point the OP was trying
to make when starting this thread.

Regards,
Wolfgang


then what are you doing using a device which *by design* can
irrevocably trash its contents in an instant?




Re: Softraid crypto metadata backup

2023-01-05 Thread Crystal Kolipe
Hi,

Please fix your email client to correctly attribute quotes in list mail that
you reply to.

On Thu, Jan 05, 2023 at 02:13:53PM +, Nathan Carruth wrote:
> Thank you for your response (apologies that I just saw this).
> 
> I will have a look at the file you mentioned.
> 
> I am curious what you mean by this:
> 
> "Backing up, restoring or
> otherwise messing with the softraid metadata without using the standard tools
> is an advanced subject"
> 
> as far as I know there aren't any standard tools for doing any
> of this? If there are, that is probably all I need.

The standard tools (basically bioctl) allow you to create softraid volumes,
change the passphrase and do a few other tasks.  I published a separate
program to resize crypto volumes.

This is all that most users need.

You are asking about and trying to do something that is completely outside the
scope of being 'supported'.  It's not recommended, nor considered to be
useful.



Re: Softraid crypto metadata backup

2023-01-05 Thread Crystal Kolipe
On Thu, Jan 05, 2023 at 05:13:05AM +, Nathan Carruth wrote:
> Perhaps I should have clarified my use case. I have data which
> is potentially legally privileged and which I also cannot afford
> to lose. Thus an unencrypted backup is out of the question, and
> my first thought was to use full-disk encryption for the backup.

For this particular use-case, you can just pipe tar through the openssl
command line utility and write the backup to removable media either directly
or as a file on a non-encrypted filesystem.
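A minimal sketch of that approach (the cipher choice and file names are
illustrative assumptions, not a recommendation; see openssl(1) for the enc
options your system supports):

  # create an encrypted tarball on removable media or a plain filesystem
  $ tar czf - /path/to/data | openssl enc -aes-256-cbc -salt -out backup.tgz.enc

  # decrypt and unpack it again
  $ openssl enc -d -aes-256-cbc -in backup.tgz.enc | tar xzf -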

I wrote in detail about this method in an article about doing encrypted
backups to blu-ray disc:

https://research.exoticsilicon.com/articles/crystal_does_optical



Re: Softraid crypto metadata backup

2023-01-05 Thread Crystal Kolipe
On Thu, Jan 05, 2023 at 05:13:05AM +, Nathan Carruth wrote:
> I presume that OpenBSD also writes on-disk metadata of the
> same sort somewhere. Where?

Look at /usr/src/sys/dev/softraidvar.h.

The structures that contain the softraid metadata are defined there.  There is
general softraid metadata, and crypto specific metadata.

These are stored near the beginning of the RAID partition as defined in the
disklabel.  In fact, they are SR_META_OFFSET blocks from the start, which
currently works out to 8192 bytes (16 blocks of 512 bytes).

You can also look at this on your own disk with dd and hexdump to familiarise
yourself with what the layout looks like (useful for future reference).  Or
read my article about resizing softraid volumes for some examples.
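For example, something along these lines (a sketch: sd0a is assumed to be the
disklabel RAID partition, and the 16-block skip is the 8192-byte
SR_META_OFFSET described above):

  # dump the start of the softraid metadata area for inspection
  $ dd if=/dev/rsd0a bs=512 skip=16 count=4 2>/dev/null | hexdump -C | head -n 20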

> I know I could dig this out of
> the source code

The source code is the definitive reference.  And it can change.

> As it stands, the documentation gives no hint that softraid
> crypto gives any additional risk of data loss.

Just about any additional layer on top of a storage volume increases the
complexity of the system, which some people might regard as 'additional risk'.

This is in no way specific to softraid crypto.

> If there are in
> fact e.g. salt values written in an unknown location on the
> disk

It's not unknown; it's documented quite clearly in the source code.

> whose loss renders THE ENTIRE DISK cryptographically
> inaccessible, surely this ought to be documented somewhere?

By definition, losing the salt value used with any effective crypto system
_should_ make it inaccessible!  This is even considered a feature, because you
can effectively erase the disk just by destroying the metadata.

> While I agree with you that there are
> definite security risks in backing up such metadata, surely
> the decision as to what to do ought to be left to the end user,
> rather than being enforced by lack of documentation?

The source code is the definitive documentation.  Backing up, restoring or
otherwise messing with the softraid metadata without using the standard tools
is an advanced subject, so it's quite reasonable to expect anybody wanting to
do this to read and understand the source rather than having it spelt out in a
manual page or other documentation.

If it was documented elsewhere, that documentation would have to be kept up to
date with the current source, otherwise it could end up causing more problems
than it solves.

In any case, what you are proposing to do (backing up the softraid crypto
metadata) is almost certainly a waste of time, as it is extremely unlikely
that you will ever be in a situation where such a backup would be useful.

Additionally, if you _do_ decide to go ahead with this, then in the very
unlikely event that you corrupt the metadata on the main disk and want to
restore it from a backup, please do your research _before_ trying to restore
it.  It would be very easy to corrupt the disk further by dd'ing the wrong
data to the wrong place.

There have been a lot of posts to the mailing lists in the past by people who
have tried to fix disk partitioning problems by themselves and made the
situation worse.

What you are proposing sounds to me like a foot gun.



Re: Softraid crypto metadata backup

2023-01-02 Thread Nick Holland

On 1/2/23 22:22, Nathan Carruth wrote:

Does a softraid(4) crypto volume require metadata backup? (I am
running amd64 OpenBSD 6.9 if it is relevant, will probably
upgrade in the next few months.)

I understand FreeBSD GELI (e.g.) requires such a backup to protect
against crypto-related metadata corruption rendering the encrypted
volume inaccessible.

Neither the OpenBSD disk FAQ nor the man pages for softraid(4) or
bioctl(8) have anything to say about the matter. Web searches also
turn up no relevant information.


Storage requires backup.
Encrypted storage is (by design) more fragile than unencrypted storage.
Sounds like you are trying to protect against ONE form of storage
failure and avoid the solution you really need to have: a good backup
system, to deal with *all* forms of storage failure.

I'd suggest a good backup system...to deal with ALL forms of data loss.
Yes, encrypted storage implies a certain care has to be taken with the
backups as well; you need to pick a solution that is appropriate for
your needs -- or accept that yeah, stuff will go bye-bye someday.

I don't see a benefit to trying to protect against some single failure
mode when all the other failure modes still exist.  If you have good
backups, you are good.  If you don't, dealing with a 1% problem isn't
going to change much.

Nick.



Re: softraid disk read error

2022-10-18 Thread Nick Holland

On 10/18/22 09:35, se...@0x.su wrote:

I have a RAID1 volume (one of two on this PC) with 2 disks.

# disklabel sd5
# /dev/rsd5c:
type: SCSI
disk: SCSI disk
label: SR RAID 1
duid: 7a03a84165b3d165
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 243201
total sectors: 3907028640
boundstart: 0
boundend: 3907028640
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:       3907028608                0  4.2BSD   8192 65536 52270 # /home/vmail
  c:       3907028640                0  unused


Recently I got an error in dmesg

mail# dmesg | grep retry
sd5: retrying read on block 767483392

(This happened during copying process)

and the system marked the volume as degraded

mail# bioctl sd5
Volume  Status   Size Device
softraid0 1 Degraded2000398663680 sd5 RAID1
   0 Online  2000398663680 1:0.0   noencl 
   1 Offline 2000398663680 1:1.0   noencl 

I tried to reread this sector (and a couple around it) with dd to make sure
the sector is unreadable:

mail# dd if=/dev/rsd3c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.025 secs (316536 bytes/sec)
mail# dd if=/dev/rsd5c of=/dev/null bs=512 count=16 skip=767483384
16+0 records in
16+0 records out
8192 bytes transferred in 0.050 secs (161303 bytes/sec)

but the error did not appear.
Are there any methods to check whether the sector is bad (preferably on the fly)?
If this is not a disk error (I'm going to replace the cables just in case),
should I just bring the disk back online with
bioctl -R /dev/sd3a sd5
?


You made some assumptions about the math that the disk uses vs. the math
dd uses, and I'm not sure I agree with them.  I'd suggest doing a dd read
of the entire disk (rsd3c), rather than trying to read just the one
sector.  Remember, there's an offset between the sectors of sd5 (the
softraid drive) and sd2 & sd3 where sd5 lives.  So I'd kinda expect your
sd3 check to pass because you missed the bad spot, and I'd expect your
sd5 check to pass because the bad drive is locked out of the array and
no longer a problem.

IF you are a cheap *** or the machine is in another country, you might
want to try dd'ing zeros and 0xff's over the entire disk before putting it
back in the array.  That sometimes triggers a discovery of a bad spot and
locks it out and replaces it with a spare.  I've had some success with
this process, actually, though it's a bad idea. :)
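Something like the following covers both steps (a sketch; device names are
illustrative, and the overwrite destroys everything on the disk, including
its disklabel, so only do it to a drive that is out of the array):

  # read the entire raw disk once; an unreadable sector should error out
  $ dd if=/dev/rsd3c of=/dev/null bs=1m

  # overwrite with zeros to coax the drive into remapping bad sectors
  # (recreate the disklabel/RAID partition afterwards)
  $ dd if=/dev/zero of=/dev/rsd3c bs=1m

  # then put the disk back in the array and rebuild
  $ bioctl -R /dev/sd3a sd5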

Nick.



Re: Softraid on NVMe

2022-05-06 Thread Nick Holland

On 5/6/22 9:03 AM, Proton wrote:

Hi,

I'm using softraid 1C on my remote dedicated server, built on two NVMe disks.
It works really well from a performance perspective and provides some data
protection,
but there is no way to check device health status because SMART doesn't work.
I guess bioctl will tell me only whether devices are 'online', but nothing more?


well... a softraid device isn't a physical device, so I'm not sure
what you would get that you couldn't get out of bioctl.  I have:
  bioctl softraid0
in my /etc/daily.local, and I also have a backup system that checks softraid
status on all systems (hey, as long as I'm in the neighborhood and doing
stuff as root...)
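Something along these lines in /etc/daily.local gets a warning into the daily
mail (a sketch; the exact status strings are assumptions on my part, check
what bioctl prints on your system):

  # flag any softraid volume or chunk that is not Online
  if bioctl softraid0 | egrep 'Degraded|Offline|Rebuild|Failed'; then
          echo "softraid0: check RAID status"
  fi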

You can look at the SMART status of the underlying physical devices in
the softraid set exactly as you would non-softraid drives.

So, if you put a lot of faith in SMART (I don't), what are you missing?


Are there any "poor man’s” methods for checking state of devices you would 
suggest
to perform periodically - like ‚cat /dev/rsd0c > /dev/null’ + ‚cat /dev/rsd1c > 
/dev/null’?
Will potential I/O errors or timeouts be reported to stderr or to some system 
log file?


doing read tests like that over the entire underlying drives seems like
a good idea to me. Haven't implemented it so I can't say how it would
respond to real problems, but I can think of only one good way to find
out.  (From experience: how things act when a drive fails is hard to
predict and really hard to test.  So even a dozen "this is how it behaved"
results don't tell you what happens for the NEXT failure.)

I would definitely want to put some rate limiting on it so you don't
kill performance overall.
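A rough sketch of such a rate-limited read test (the device name, burst size
and pause are illustrative assumptions; hard read errors should also show up
in dmesg):

  #!/bin/sh
  DEV=/dev/rsd0c
  CHUNK=128                 # 1MB blocks per burst
  i=0
  while :; do
          # read one burst; dd exits non-zero on a hard read error
          out=$(dd if="$DEV" of=/dev/null bs=1m count="$CHUNK" \
              skip=$((i * CHUNK)) 2>&1) || exit 1
          # dd reports "0+0 records in" once past the end of the device
          case "$out" in "0+0 records in"*) break ;; esac
          i=$((i + 1))
          sleep 1           # crude rate limiting between bursts
  done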


As a last resort I can reboot to a Linux rescue system from time to time, but 
this would not be very convenient.

Should I forget about NVMe and use another option - LSI MegaRAID HW with SSD 
disks attached?


what would you gain there?  Now you could only access what the
controller thinks of the drive's state through bioctl (which
you seemed to think was inadequate for softraid).

In the HW vs. SW RAID argument, I'm firmly in the "either way" camp,
but if I understand your query, you are LOSING info here.

(I've also heard stories about SSDs and HW RAID not playing well
together, but I'm not prepared to defend or refute that statement.
On the other hand, I've seen SSDs work differently enough from what
HW and SW expect that ... nothing would surprise me).

Nick.



Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread Sven F.
On Mon, Aug 3, 2020 at 2:09 PM Brian Brombacher 
wrote:

>
>
> > On Aug 3, 2020, at 12:22 PM, sven falempin 
> wrote:
> >
> > On Mon, Aug 3, 2020 at 12:00 PM Brian Brombacher 
> > wrote:
> >
> >>
> >>
> >> On Aug 3, 2020, at 11:51 AM, sven falempin 
> >> wrote:
> >>
> >> 
> >>
> >>
> >>> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher  >
> >>> wrote:
> >>>
> >>>
> >>>
>  On Aug 3, 2020, at 9:54 AM, sven falempin 
> >>> wrote:
> 
>  Hello
> 
 I saw a similar issue in the mailing list around December 2019,
 following an electrical problem softraid doesn't bring devices up
> 
> 
>  # ls /dev/sd??
>  /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
>  /dev/sd2k
>  /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
>  /dev/sd2l
>  /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
>  /dev/sd2m
>  /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
>  /dev/sd2n
>  /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
>  /dev/sd2o
>  /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
>  /dev/sd2p
>  # dmesg | grep 6.7
>  OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
>  # dmesg | grep sd
>    dera...@amd64.openbsd.org:
> /usr/src/sys/arch/amd64/compile/RAMDISK_CD
>  wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
>  sd0 at scsibus1 targ 0 lun 0: 
>  t10.ATA_QEMU_HARDDISK_Q
>  M5_
>  sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>  sd1 at scsibus1 targ 1 lun 0: 
>  t10.ATA_QEMU_HARDDISK_Q
>  M7_
>  sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>  wskbd0 at pckbd0: console keyboard, using wsdisplay1
>  softraid0: trying to bring up sd2 degraded
>  softraid0: sd2 was not shutdown properly
>  softraid0: sd2 is offline, will not be brought online
>  # bioctl -d sd2
>  bioctl: Can't locate sd2 device via /dev/bio
>  #
> 
 I suspect missing devices in /dev (but it seems I have the required
> >>> one)
>  and MAKEDEV all of course did a `uid 0 on /: out of inodes`
> 
 I have backups but I'd like to fix the issue!
> >>>
> >>> Hi Sven,
> >>>
> >>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.
> This
> >>> can happen if softraid fails to find all component disks or the
> metadata on
> >>> one or more components does not match expectations (newer metadata
> seen on
> >>> other disks).  Make sure all of the component disks are working.  If
> that
> >>> is not the issue, you may need to re-run the command that you used to
> >>> create the array and include -C force.  Be very careful doing this, I
> >>> suggest running the command once without -C force to ensure it found
> all
> >>> the components and fails to bring the array up due to the same error
> >>> message you got (attempt to bring up degraded).
> >>>
> >>> If you’re not careful, you can blow out the whole array.
> >>>
> >>> -Brian
> >>>
> >>>
> >> The disk looks fine, the disklabel is ok, the array is just sd0a and
> >> sd1a, both got the disklabel RAID part,
> >> shall i do further checks ?
> >>
> >> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> >> softraid0: trying to bring up sd2 degraded
> >> softraid0: sd2 was not shutdown properly
> >> softraid0: sd2 is offline, will not be brought online
> >> softraid0: trying to bring up sd2 degraded
> >> softraid0: sd2 was not shutdown properly
> >> softraid0: sd2 is offline, will not be brought online
> >>
> >> I wouldn't like to blow the whole array! sd0a should be in perfect
> >> condition but I'm unsure about sd1a; I probably need to bioctl -R sd1
> >>
> >>
> >> Traditionally at this point, I would run the command again with -C force
> >> and my RAID 1 array is fine.  I might be doing dangerous things and not
> >> know, so other voices please chime in.
> >>
> >> [Moved to misc@]
> >>
> >>
> >>
> >>
> > # bioctl -C force -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> > sd2 at scsibus2 targ 1 lun 0: 
> > sd2: 1907726MB, 512 bytes/sector, 3907023473 sectors
> > softraid0: RAID 1 volume attached as sd2
> >
> > both volumes are online, partitions are visible
> > but fsck is not happy at all :-(
> >
> > Can I do something before fsck -y? (I have backups)
>
> Make sure your backups are good.
>
> Run fsck -n and see how wicked the issues are.  It may just be cleaning
> itself up after the electrical outage.
>

>
I'm glad I have multiple partitions and serious backups; waiting for a disk
change, number two is dead

Thanks for the help!

--
Knowing is not enough; we must apply. Willing is not enough; we must do


Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread Brian Brombacher



> On Aug 3, 2020, at 12:22 PM, sven falempin  wrote:
> 
> On Mon, Aug 3, 2020 at 12:00 PM Brian Brombacher 
> wrote:
> 
>> 
>> 
>> On Aug 3, 2020, at 11:51 AM, sven falempin 
>> wrote:
>> 
>> 
>> 
>> 
>>> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher 
>>> wrote:
>>> 
>>> 
>>> 
 On Aug 3, 2020, at 9:54 AM, sven falempin 
>>> wrote:
 
 Hello
 
 I saw a similar issue in the mailing list around December 2019,
 following an electrical problem softraid doesn't bring devices up
 
 
 # ls /dev/sd??
 /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
 /dev/sd2k
 /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
 /dev/sd2l
 /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
 /dev/sd2m
 /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
 /dev/sd2n
 /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
 /dev/sd2o
 /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
 /dev/sd2p
 # dmesg | grep 6.7
 OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
 # dmesg | grep sd
   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
 wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
 sd0 at scsibus1 targ 0 lun 0: 
 t10.ATA_QEMU_HARDDISK_Q
 M5_
 sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
 sd1 at scsibus1 targ 1 lun 0: 
 t10.ATA_QEMU_HARDDISK_Q
 M7_
 sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
 wskbd0 at pckbd0: console keyboard, using wsdisplay1
 softraid0: trying to bring up sd2 degraded
 softraid0: sd2 was not shutdown properly
 softraid0: sd2 is offline, will not be brought online
 # bioctl -d sd2
 bioctl: Can't locate sd2 device via /dev/bio
 #
 
 I suspect missing devices in /dev (but it seems I have the required
>>> one)
 and MAKEDEV all of course did a `uid 0 on /: out of inodes`
 
 I have backups but I'd like to fix the issue!
>>> 
>>> Hi Sven,
>>> 
>>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.  This
>>> can happen if softraid fails to find all component disks or the metadata on
>>> one or more components does not match expectations (newer metadata seen on
>>> other disks).  Make sure all of the component disks are working.  If that
>>> is not the issue, you may need to re-run the command that you used to
>>> create the array and include -C force.  Be very careful doing this, I
>>> suggest running the command once without -C force to ensure it found all
>>> the components and fails to bring the array up due to the same error
>>> message you got (attempt to bring up degraded).
>>> 
>>> If you’re not careful, you can blow out the whole array.
>>> 
>>> -Brian
>>> 
>>> 
>> The disk looks fine, the disklabel is ok, the array is just sd0a and sd1a,
>> both got the disklabel RAID part,
>> shall i do further checks ?
>> 
>> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
>> softraid0: trying to bring up sd2 degraded
>> softraid0: sd2 was not shutdown properly
>> softraid0: sd2 is offline, will not be brought online
>> softraid0: trying to bring up sd2 degraded
>> softraid0: sd2 was not shutdown properly
>> softraid0: sd2 is offline, will not be brought online
>> 
>> I wouldn't like to blow the whole array! sd0a should be in perfect
>> condition but I'm unsure about sd1a; I probably need to bioctl -R sd1
>> 
>> 
>> Traditionally at this point, I would run the command again with -C force
>> and my RAID 1 array is fine.  I might be doing dangerous things and not
>> know, so other voices please chime in.
>> 
>> [Moved to misc@]
>> 
>> 
>> 
>> 
> # bioctl -C force -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> sd2 at scsibus2 targ 1 lun 0: 
> sd2: 1907726MB, 512 bytes/sector, 3907023473 sectors
> softraid0: RAID 1 volume attached as sd2
> 
> both volumes are online, partitions are visible
> but fsck is not happy at all :-(
>
> Can I do something before fsck -y? (I have backups)

Make sure your backups are good.

Run fsck -n and see how wicked the issues are.  It may just be cleaning itself 
up after the electrical outage.





Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread sven falempin
On Mon, Aug 3, 2020 at 12:00 PM Brian Brombacher 
wrote:

>
>
> On Aug 3, 2020, at 11:51 AM, sven falempin 
> wrote:
>
> 
>
>
> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher 
> wrote:
>
>>
>>
>> > On Aug 3, 2020, at 9:54 AM, sven falempin 
>> wrote:
>> >
>> > Hello
>> >
>> > I saw a similar issue in the mailing list around December 2019,
>> > following an electrical problem softraid doesn't bring devices up
>> >
>> >
>> > # ls /dev/sd??
>> > /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
>> > /dev/sd2k
>> > /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
>> > /dev/sd2l
>> > /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
>> > /dev/sd2m
>> > /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
>> > /dev/sd2n
>> > /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
>> > /dev/sd2o
>> > /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
>> > /dev/sd2p
>> > # dmesg | grep 6.7
>> > OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
>> > # dmesg | grep sd
>> >dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
>> > wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
>> > sd0 at scsibus1 targ 0 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M5_
>> > sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > sd1 at scsibus1 targ 1 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M7_
>> > sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > wskbd0 at pckbd0: console keyboard, using wsdisplay1
>> > softraid0: trying to bring up sd2 degraded
>> > softraid0: sd2 was not shutdown properly
>> > softraid0: sd2 is offline, will not be brought online
>> > # bioctl -d sd2
>> > bioctl: Can't locate sd2 device via /dev/bio
>> > #
>> >
>> > I suspect missing devices in /dev (but it seems I have the required
>> one)
>> > and MAKEDEV all of course did a `uid 0 on /: out of inodes`
>> >
>> > I have backups but I'd like to fix the issue!
>>
>> Hi Sven,
>>
>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.  This
>> can happen if softraid fails to find all component disks or the metadata on
>> one or more components does not match expectations (newer metadata seen on
>> other disks).  Make sure all of the component disks are working.  If that
>> is not the issue, you may need to re-run the command that you used to
>> create the array and include -C force.  Be very careful doing this, I
>> suggest running the command once without -C force to ensure it found all
>> the components and fails to bring the array up due to the same error
>> message you got (attempt to bring up degraded).
>>
>> If you’re not careful, you can blow out the whole array.
>>
>> -Brian
>>
>>
> The disk looks fine, the disklabel is ok, the array is just sd0a and sd1a,
> both got the disklabel RAID part,
> shall i do further checks ?
>
> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
>
> I wouldn't like to blow the whole array! sd0a should be in perfect
> condition but I'm unsure about sd1a; I probably need to bioctl -R sd1
>
>
> Traditionally at this point, I would run the command again with -C force
> and my RAID 1 array is fine.  I might be doing dangerous things and not
> know, so other voices please chime in.
>
> [Moved to misc@]
>
>
>
>
# bioctl -C force -c 1 -l /dev/sd0a,/dev/sd1a softraid0
sd2 at scsibus2 targ 1 lun 0: 
sd2: 1907726MB, 512 bytes/sector, 3907023473 sectors
softraid0: RAID 1 volume attached as sd2

both volumes are online, partitions are visible
but fsck is not happy at all :-(

Can I do something before fsck -y? (I have backups)

--
Knowing is not enough; we must apply. Willing is not enough; we must do


Re: softraid/bioctl cant find device /dev/bio

2020-08-03 Thread Brian Brombacher



> On Aug 3, 2020, at 11:51 AM, sven falempin  wrote:
> 
> 
> 
> 
>> On Mon, Aug 3, 2020 at 11:38 AM Brian Brombacher  
>> wrote:
>> 
>> 
>> > On Aug 3, 2020, at 9:54 AM, sven falempin  wrote:
>> > 
>> > Hello
>> > 
>> > I saw a similar issue in the mailing list around December 2019,
>> > following an electrical problem softraid doesn't bring devices up
>> > 
>> > 
>> > # ls /dev/sd??
>> > /dev/sd0a /dev/sd0g /dev/sd0m /dev/sd1c /dev/sd1i /dev/sd1o /dev/sd2e
>> > /dev/sd2k
>> > /dev/sd0b /dev/sd0h /dev/sd0n /dev/sd1d /dev/sd1j /dev/sd1p /dev/sd2f
>> > /dev/sd2l
>> > /dev/sd0c /dev/sd0i /dev/sd0o /dev/sd1e /dev/sd1k /dev/sd2a /dev/sd2g
>> > /dev/sd2m
>> > /dev/sd0d /dev/sd0j /dev/sd0p /dev/sd1f /dev/sd1l /dev/sd2b /dev/sd2h
>> > /dev/sd2n
>> > /dev/sd0e /dev/sd0k /dev/sd1a /dev/sd1g /dev/sd1m /dev/sd2c /dev/sd2i
>> > /dev/sd2o
>> > /dev/sd0f /dev/sd0l /dev/sd1b /dev/sd1h /dev/sd1n /dev/sd2d /dev/sd2j
>> > /dev/sd2p
>> > # dmesg | grep 6.7
>> > OpenBSD 6.7 (RAMDISK_CD) #177: Thu May  7 11:19:02 MDT 2020
>> > # dmesg | grep sd
>> >dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
>> > wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
>> > sd0 at scsibus1 targ 0 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M5_
>> > sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > sd1 at scsibus1 targ 1 lun 0: 
>> > t10.ATA_QEMU_HARDDISK_Q
>> > M7_
>> > sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
>> > wskbd0 at pckbd0: console keyboard, using wsdisplay1
>> > softraid0: trying to bring up sd2 degraded
>> > softraid0: sd2 was not shutdown properly
>> > softraid0: sd2 is offline, will not be brought online
>> > # bioctl -d sd2
>> > bioctl: Can't locate sd2 device via /dev/bio
>> > #
>> > 
>> > I suspect missing devices in /dev (but it seems I have the required
>> > one)
>> > and MAKEDEV all of course did a `uid 0 on /: out of inodes`
>> > 
>> > I have backups but I'd like to fix the issue!
>> 
>> Hi Sven,
>> 
>> The device sd2 wasn’t attached by softraid, your /dev/bio is fine.  This can 
>> happen if softraid fails to find all component disks or the metadata on one 
>> or more components does not match expectations (newer metadata seen on other 
>> disks).  Make sure all of the component disks are working.  If that is not 
>> the issue, you may need to re-run the command that you used to create the 
>> array and include -C force.  Be very careful doing this, I suggest running 
>> the command once without -C force to ensure it found all the components and 
>> fails to bring the array up due to the same error message you got (attempt 
>> to bring up degraded).
>> 
>> If you’re not careful, you can blow out the whole array.
>> 
>> -Brian
>> 
>> 
> The disk looks fine, the disklabel is ok, the array is just sd0a and sd1a,
> both got the disklabel RAID part,
> shall i do further checks ?
>  
> # bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
> softraid0: trying to bring up sd2 degraded
> softraid0: sd2 was not shutdown properly
> softraid0: sd2 is offline, will not be brought online
> 
> I wouldn't like to blow the whole array! sd0a should be in perfect condition
> but I'm unsure about sd1a; I probably need to bioctl -R sd1

Traditionally at this point, I would run the command again with -C force and my 
RAID 1 array is fine.  I might be doing dangerous things and not know, so other 
voices please chime in.

[Moved to misc@]





Re: softraid i/o errors, crypto blocks

2020-02-22 Thread freda_bundchen
>> plugged in and just run /sbin/bioctl -c C -l softraid0 
>> DUIDHERE.a on.  
> The last two arguments in that command are reversed. Fixing
> that should solve at least part of your problem.  

Thank you very much. I apologize, I did reverse the arguments in my
email. However, I was using them correctly in a script when I ran the
commands. To summarize the problem, after I mount one encrypted
(following the OpenBSD FAQ instructions) USB drive, it works fine,
until I mount a second encrypted USB drive. At that point I get errors
like

Feb 18 09:04:14 freda /bsd: softraid0: chunk sd4a already in use 
Feb 18 09:04:22 freda /bsd: softraid0: sd5: i/o error 0 @ CRYPTO
block 27xxx

It doesn't happen every time. I switched to new drives but the same
thing happened. I will just chalk it up to bad drives or cables for
now, and I'll send more complete error records if it happens again.



Re: softraid i/o errors, crypto blocks

2020-02-22 Thread Tim van der Molen
freda_bundc...@nym.hush.com (2020-02-18 10:13 -0600):
> I've had Postgresql data on an encrypted external USB drive 
> (encrypted via the OpenBSD FAQ instructions) for about a year
> and it's worked great. 
> 
> Recently, I started getting dmesg messages
> saying softraid i/o error and it listed various crypto blocks:
> 
> Feb 18 09:04:14 freda /bsd: softraid0: chunk sd4a already in use
> Feb 18 09:04:22 freda /bsd: softraid0: sd5: i/o error 0 @ CRYPTO block 27xxx
> Feb 18 09:04:22 freda /bsd: softraid0: sd5: i/o error 0 @ CRYPTO block 6xx
> Feb 18 09:04:31 freda /bsd: softraid0: sd5: i/o error 0 @ CRYPTO block 
> 1624932xxx
> Feb 18 09:04:31 freda /bsd: softraid0: sd5: i/o error 0 @ CRYPTO block 
> 1624811xxx
> 
> In this case, it happened when I tried to mount a second external encrypted 
> drive.
> (I don't recall if this is what always triggers the problem.) 
> 
> My  drive with Postgresql running was sd5i. I always mount the drives with 
> the DUID
> after running bioctl. The sd4a above refers to RAID on the second encrypted 
> drive I had 
> plugged in and just run /sbin/bioctl -c C -l softraid0 DUIDHERE.a on.

The last two arguments in that command are reversed. Fixing that should
solve at least part of your problem.
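That is, the softraid device comes last (a sketch; DUIDHERE stands for the
drive's actual DUID, as in the quoted command):

  # /sbin/bioctl -c C -l DUIDHERE.a softraid0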

> I'm running
> OpenBSD 6.6-current (GENERIC.MP) #648: Sun Feb 16 13:54:33 MST 2020
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
> Currently, I have Postgresql 12.1p1 but it happened when the previous external
> drive had 11.6 data also.
> 
> At this point of course I can no longer access my data. If I reboot then / 
> also fails
> to unmount. Rebooting is successful  though after filesystem checks. Next 
> time it happens
> I will take a picture of the messages.
> 
> I thought my external drive was bad so I switched to a new one, but the same 
> thing
> happened today.
> 
> So I am just wondering if anyone else has recently started experiencing this 
> sort
> of problem. I haven't lost any data since I backup early and often, and in 
> any case,
> fsck has fixed things so far. 



Re: softraid(4) RAID1 tools or experimental patches for consistency checking

2020-01-13 Thread Karel Gardas



Few missing notes to this email:

- my performance testing, results and conclusions were done only on 
mechanical drives (Hitachi 7K500 and WD RE 500) and only with a metadata-
intensive workload: basically tar -xf src.tar; unmount; then rm -rf src; 
unmount, where src.tar was the src.tar of -stable at that time.


- at the same time WAPBL was submitted to tech@ and IIRC it increased 
perf a lot, since I had also been using limited caching of checksum blocks 
(not in the patch, not submitted yet) and since the WAPBL log is in a 
constant place, RAID1c/s was happier.


- at the time I did not have any SSD/NVMe for testing. The situation may be 
different with those, especially once someone considers what is tolerable 
and what is not anymore w.r.t. speed.


On 1/12/20 9:58 PM, Karel Gardas wrote:


Tried something like that in the past: 
https://marc.info/?l=openbsd-tech&m=144217941801350&w=2


It worked kind of OK except for the performance. The problem is that the data 
layout turns a read op. into 2x read ops., and a write op. into a read op. + 
2x write ops., which is not the speed winner. Caching of checksum blocks 
helped a lot in some cases, but was not submitted since you would also 
ideally need readahead and this was not done at all. The other perf 
issue is that putting this slow virtual drive impl. under the already slow 
ffs is a recipe for disappointment from the perf. point of view. 
Certainly no speed daemon and certainly a completely different league 
than the checksum-capable filesystems from the open-source world (ZFS, btrfs, 
bcachefs. No, HAMMER2 is not there since it checksums only metadata and 
not user data and can't self-heal).


Yes, you are right that ideally the drive would be fs-aware to optimize 
rebuild, but this may be worked around by a more clever layout that also 
marks the used blocks. Anyway, those (and the above) are IMHO the reasons why 
development is done on checksumming filesystems instead of checksumming 
software raids. I read a paper somewhere about Linux's mdadm hacked to do 
checksums and the result was pretty much the same (IIRC!), i.e. perf. 
disappointment. If you are curious, google for it.


So, work on it if you can tolerate the speed...





Re: softraid(4) RAID1 tools or experimental patches for consistency checking

2020-01-12 Thread Karel Gardas



Tried something like that in the past: 
https://marc.info/?l=openbsd-tech&m=144217941801350&w=2


It worked kind of OK except for the performance. The problem is that the data 
layout turns a read op. into 2x read ops., and a write op. into a read op. + 
2x write ops., which is not the speed winner. Caching of checksum blocks 
helped a lot in some cases, but was not submitted since you would also 
ideally need readahead and this was not done at all. The other perf 
issue is that putting this slow virtual drive impl. under the already slow 
ffs is a recipe for disappointment from the perf. point of view. 
Certainly no speed daemon and certainly a completely different league than 
the checksum-capable filesystems from the open-source world (ZFS, btrfs, 
bcachefs. No, HAMMER2 is not there since it checksums only metadata and not 
user data and can't self-heal).


Yes, you are right that ideally the drive would be fs-aware to optimize 
rebuild, but this may be worked around by a more clever layout that also 
marks the used blocks. Anyway, those (and the above) are IMHO the reasons why 
development is done on checksumming filesystems instead of checksumming 
software raids. I read a paper somewhere about Linux's mdadm hacked to do 
checksums and the result was pretty much the same (IIRC!), i.e. perf. 
disappointment. If you are curious, google for it.


So, work on it if you can tolerate the speed...

On 1/12/20 6:46 AM, Constantine A. Murenin wrote:

Dear misc@,

I'm curious if anyone has any sort of tools / patches to verify the consistency 
of softraid(4) RAID1 volumes?


If one adds a new disc (i.e. chunk) to a volume with the RAID1 discipline, the 
resilvering process of softraid(4) will read data from one of the existing 
discs, and write it back to all the discs, ridding you of the artefacts that 
could potentially be used to reconstruct the flipped bits correctly.

Additionally, this resilvering process is also really slow.  Per my notes from 
a few years ago, softraid has a fixed block size of 64KB (MAXPHYS); if we're 
talking about spindle-based HDDs, they only support like 80 random IOPS at 7.2k 
RPM, half of which we gotta use for reads, half for writes; this means it'll 
take (1TB/64KB/(80/s/2)) = 4.5 days to resilver each 1TB of an average 7.2k RPM 
HDD; compare this with sequential resilvering, which will take (1TB/120MB/s) = 
2.3 hours; the reality may vary from these imprecise calculations, but these 
numbers do seem representative of the experience.
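Spelling that arithmetic out (same assumptions as above):

  random resilver:     1TB / 64KB = ~15.3M blocks; at 80/2 = 40 blocks/s
                       that's ~381,000 s, i.e. the ~4.5 days per TB above
  sequential resilver: 1TB / (120MB/s) = ~8,300 s, i.e. ~2.3 hours per TB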

The above behaviour is defined here:

http://bxr.su/o/sys/dev/softraid_raid1.c#sr_raid1_rw

		} else {
			/* writes go on all working disks */
			chunk = i;
			scp = sd->sd_vol.sv_chunks[chunk];
			switch (scp->src_meta.scm_status) {
			case BIOC_SDONLINE:
			case BIOC_SDSCRUB:
			case BIOC_SDREBUILD:
				break;

			case BIOC_SDHOTSPARE: /* should never happen */
			case BIOC_SDOFFLINE:
				continue;

			default:
				goto bad;
			}
		}


What we could do is something like the following, to pretend that any online 
volume is not available for writes when the wu (Work Unit) we're handling is 
part of the rebuild process from http://bxr.su/o/sys/dev/softraid.c#sr_rebuild, 
mimicking the BIOC_SDOFFLINE behaviour for BIOC_SDONLINE chunks (discs) when 
the SR_WUF_REBUILD flag is set for the workunit:

		switch (scp->src_meta.scm_status) {
		case BIOC_SDONLINE:
+			if (wu->swu_flags & SR_WUF_REBUILD)
+				continue;	/* must be same as BIOC_SDOFFLINE case */
+			/* FALLTHROUGH */
		case BIOC_SDSCRUB:
		case BIOC_SDREBUILD:


Obviously, there's both pros and cons to such an approach; I've tested a 
variation of the above in production (not a fan of weeks-long random-read/write 
rebuilds); but use this at your own risk, obviously.

...

But back to the original problem, this consistency check would have to be 
file-system-specific, because we gotta know which blocks of softraid have and 
have not been used by the filesystem, as softraid itself is 
filesystem-agnostic.  I'd imagine it'll be somewhat similar in concept to the 
fstrim(8) utility on GNU/Linux -- 
http://man7.org/linux/man-pages/man8/fstrim.8.html -- and would also open the 
door for the cron-based TRIM support as well (it would also have to know the 
softraid format itself, too).  Any pointers or hints where to get started, or 
whether anyone has worked on this in the past?


Cheers,
Constantine.
http://cm.su/





Re: Softraid data recovery

2019-10-18 Thread Steven Surdock
> -Original Message-
> From: Aaron Mason 
> Sent: Monday, October 14, 2019 7:13 PM
> To: Steven Surdock 
> Cc: misc@openbsd.org
> Subject: Re: Softraid data recovery
> 
> On Tue, Oct 15, 2019 at 7:34 AM Steven Surdock wrote:
> >
...
> >
> > How can I recover as much data as possible off the failed RAID array.
> > If I recreate the array, "bioctl -c 1 -l /dev/wd0d,/dev/wd1d
> softraid0", will the existing data be preserved?
> >
...
Based on the information found here: 
https://marc.info/?l=openbsd-misc&m=136553269631163&w=2 I was able to 
successfully create a disk image off the failing drive.

$ # skip=528: the softraid chunk's data starts 528 sectors into the RAID
$ # partition, past the metadata and boot areas (cf. SR_DATA_OFFSET in
$ # sys/dev/softraidvar.h)
$ dd if=/dev/wd0d of=raid.img conv=noerror,sync skip=528
$ # attach the image as a vnode disk, then check and mount its partitions
$ vnconfig vnd0 raid.img
$ fsck /dev/vnd0a
$ fsck /dev/vnd0d
$ mount /dev/vnd0a /home/public
 



Re: Softraid data recovery

2019-10-16 Thread Steven Surdock
> -Original Message-
> From: Karel Gardas 
> Sent: Wednesday, October 16, 2019 11:26 AM
> To: Steven Surdock 
> Cc: misc@openbsd.org
> Subject: Re: Softraid data recovery
> 
> On 2019-10-15 13:44, Steven Surdock wrote:
> > Model Family: Western Digital Black
> > Device Model: WDC WD4001FAEX-00MJRA0
> > 196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always   -       0
> > 197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always   -       9
> > 198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline  -       9
> > 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always   -       0
> > 200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline  -       9
> 
> Looks like 9 bad sectors which can't be remapped for whatever reason.
> The UDMA_CRC error count is 0, which looks like your SATA cable is fine.
> The drive is kind of strange since it still claims a Raw read error rate
> of 0.
> 
> > Model Family: Western Digital Black
> > Device Model: WDC WD4003FZEX-00Z4SA0
> > Serial Number:WD-WMC5D0D50MLK
> > Vendor Specific SMART Attributes with Thresholds:
> > ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
> >   1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always   -       6
> > 196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always   -       0
> > 197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always   -       0
> > 198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline  -       4
> > 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always   -       0
> > 200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline  -       6
> 
> Looks like 4 uncorrectable sectors while 6 raw read errors happened.
> 
> You can attempt to run a long self-test (smartctl -t long) to learn more
> about your 2 drives (followed by smartctl -a after the long test), but I
> still consider both drives to be happily dying.

Considered, and working to replace them.  I'm still working on recovering as 
much data as possible.  As noted, one partition is backups, but I had some 
scripts on there I did not back up.  Thanks.



Re: Softraid data recovery

2019-10-16 Thread Karel Gardas

On 2019-10-15 13:44, Steven Surdock wrote:

Model Family: Western Digital Black
Device Model: WDC WD4001FAEX-00MJRA0
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always   -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always   -       9
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline  -       9
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always   -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline  -       9


Looks like 9 bad sectors which can't be remapped for whatever reason. 
The UDMA_CRC error count is 0, which looks like your SATA cable is fine. 
The drive is kind of strange since it still claims a Raw read error rate 
of 0.



Model Family: Western Digital Black
Device Model: WDC WD4003FZEX-00Z4SA0
Serial Number:WD-WMC5D0D50MLK
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always   -       6
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always   -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always   -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline  -       4
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always   -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline  -       6


Looks like 4 uncorrectable sectors while 6 raw read errors happened.

You can attempt to run a long self-test (smartctl -t long) to learn more 
about your 2 drives (followed by smartctl -a after the long test), but I 
still consider both drives to be happily dying.




Re: Softraid data recovery

2019-10-15 Thread Steven Surdock
> -Original Message-
> From: Karel Gardas 
> Sent: Tuesday, October 15, 2019 5:31 AM
> To: Steven Surdock 
> Cc: misc@openbsd.org
> Subject: Re: Softraid data recovery
> 
> 
> 
> On 2019-10-15 04:26, Steven Surdock wrote:
> > I believe the disks are mostly healthy.
> 
> I seriously doubt that. What's the output from smartctl -a for both
> drives? I can't imagine why would you get failures on heave reads on one
> drive and then later failures on another one and yet it would not show
> in SMART info as some kind of error(s). Another possibility maybe your
> SATA cables just too old and fragile, but smartctl will tell that too.

root@host# smartctl -a /dev/wd0c
smartctl 7.0 2018-12-30 r4883 [i386-unknown-openbsd6.5] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Black
Device Model: WDC WD4001FAEX-00MJRA0
Serial Number:WD-WCC131134311
LU WWN Device Id: 5 0014ee 2090b4beb
Firmware Version: 01.01L01
User Capacity:4,000,787,030,016 bytes [4.00 TB]
Sector Size:  512 bytes logical/physical
Device is:In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:Tue Oct 15 07:40:39 2019 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (   0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection:(46080) seconds.
Offline data collection
capabilities:(0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:( 497) minutes.
Conveyance self-test routine
recommended polling time:(   5) minutes.
SCT capabilities:  (0x70b5) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always   -       0
  3 Spin_Up_Time            0x0027   151   151   021    Pre-fail  Always   -       11425
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always   -       24
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always   -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always   -       0
  9 Power_On_Hours          0x0032   030   030   000    Old_age   Always   -       51197
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always   -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always   -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always   -       24
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always   -       12
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always   -       13
194 Temperature_Celsius     0x0022   104   100   000    Old_age   Always   -       48
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always   -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always   -       9
198 Offline_Uncorrectable   0x0030 

Re: Softraid data recovery

2019-10-15 Thread Karel Gardas




On 2019-10-15 04:26, Steven Surdock wrote:

I believe the disks are mostly healthy.


I seriously doubt that. What's the output from smartctl -a for both 
drives? I can't imagine why you would get failures on heavy reads on one 
drive and then later failures on another one and yet it would not show 
in the SMART info as some kind of error(s). Another possibility: maybe your 
SATA cables are just too old and fragile, but smartctl will tell that too.




Re: Softraid data recovery

2019-10-14 Thread Patrick Dohman


> On Oct 14, 2019, at 3:04 PM, Steven Surdock wrote:
> 
> root@host# more /var/backups/disklabel.sd1.backup
> # /dev/rsd1c:
> type: SCSI
> disk: SCSI disk
> label: SR RAID 1
> duid: 8ec2330eabf7cd26
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 486401
> total sectors: 7814036576
> boundstart: 64
> boundend: 7814036576
> drivedata: 0
> 
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>  a:   2147488704   64  4.2BSD   8192 65536 1 # 
> /home/public/
>  c:   78140365760  unused
>  d:   5666547712   2147488768  4.2BSD   8192 65536 1 # 
> /home/Backups/
> 


A combination of revised partition lettering & a custom fstab may allow for 
mounting of the partitions without a softraid device.

For example:

$cat /etc/fstab
/dev/wd0a  /home ffs rw,nodev,nosuid 1 2
/dev/wd0d  /home/Backups/ ffs rw,nodev,nosuid 1 2

The device naming may take some massaging to work...
man fstab & disklabel for more info.

Regards
Patrick



Re: Softraid data recovery

2019-10-14 Thread Steven Surdock
> -Original Message-
> From: Aaron Mason 
> Sent: Monday, October 14, 2019 7:13 PM
> To: Steven Surdock 
> Cc: misc@openbsd.org
> Subject: Re: Softraid data recovery
> 
> On Tue, Oct 15, 2019 at 7:34 AM Steven Surdock wrote:
> >
> > I have a simple RAID1 configuration on wd0, wd1.  I was in the process
> of performing a rebuild on wd1, as it failed during some heavy reads.
> During the rebuild wd0 went into a failure state.  After some
> troubleshooting I decided to reboot and now my RAID disk, sd1, is
> unavailable.  Disks wd0 and wd1 don't show any errors, but I have a
> replacement disk.  I have backups for the critical data and I'd like to
> try and recover as much recent data as possible.  My thought was to
> create a disk image of the "/home/public" data and mount it using
> vnconfig, but I seem to be having issues with the appropriate 'dd'
> command to do that.
> >
> > How can I recover as much data as possible off the failed RAID array.
> > If I recreate the array, "bioctl -c 1 -l /dev/wd0d,/dev/wd1d
> softraid0", will the existing data be preserved?
> >
> > root@host# disklabel wd0
> > # /dev/rwd0c:
> > type: ESDI
> > disk: ESDI/IDE disk
> > label: WDC WD4001FAEX-0
> > duid: acce36f25df51c8c
> > flags:
> > bytes/sector: 512
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 486401
> > total sectors: 7814037168
> > boundstart: 64
> > boundend: 4294961685
> > drivedata: 0
> >
> > 16 partitions:
> > #                size           offset  fstype [fsize bsize   cpg]
> >   c:       7814037168                0  unused
> >   d:       7814037104               64  RAID
> >
> > root@host# more /var/backups/disklabel.sd1.backup # /dev/rsd1c:
> > type: SCSI
> > disk: SCSI disk
> > label: SR RAID 1
> > duid: 8ec2330eabf7cd26
> > flags:
> > bytes/sector: 512
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 486401
> > total sectors: 7814036576
> > boundstart: 64
> > boundend: 7814036576
> > drivedata: 0
> >
> > 16 partitions:
> > #size   offset  fstype [fsize bsize   cpg]
> >   a:   2147488704   64  4.2BSD   8192 65536 1 #
> /home/public/
> >   c:   78140365760  unused
> >   d:   5666547712   2147488768  4.2BSD   8192 65536 1 #
> /home/Backups/
> >
> 
> I think at this point you're far better off restoring from backup.
> You do have a backup, right?
> 
> As for the disks, ddrescue would be a better option than dd - it'll keep
> trying if it encounters another URE whereas dd will up and quit.
> Expect it to take several days on disks that big - it's designed to be
> gentle to dying disks.

I believe the disks are mostly healthy.  In fact I've tried several attempts at 
dd'ing the data from wd0 with no read issues.  It takes about 12 hours to read 
1TB.  I suspect I'm not aligning sectors properly and the filesystem is not 
readable.  I've tried making an image of /home/public (which is _mostly_ backed 
up), but fsck doesn't see a reasonable filesystem after I vnconfig the image.  
So, if anyone has some insight on 'dd if=/dev/wd0d of=public.img bs=512 
count=5666547712 skip=xx', it would be great.
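
[Sketch, not from the thread: the skip can be derived from the two
disklabels quoted above. The wd0d RAID chunk is 7814037104 sectors but
the assembled sd1 was 7814036576 sectors; the 528-sector difference
matches the softraid metadata area at the front of the chunk. Partition
'a' then starts at offset 64 inside sd1, i.e. at sector 528 + 64 = 592
of wd0d. Untested, and slow at bs=512:]

# dd if=/dev/rwd0d of=public.img bs=512 skip=592 count=2147488704
# vnconfig vnd0 public.img
# fsck -n /dev/rvnd0c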



Re: Softraid data recovery

2019-10-14 Thread Aaron Mason
On Tue, Oct 15, 2019 at 7:34 AM Steven Surdock
 wrote:
>
> I have a simple RAID1 configuration on wd0, wd1.  I was in the process of 
> performing a rebuild on wd1, as it failed during some heavy reads.  During 
> the rebuild wd0 went into a failure state.  After some troubleshooting I 
> decided to reboot and now my RAID disk, sd1, is unavailable.  Disks wd0 and 
> wd1 don't show any errors, but I have a replacement disk.  I have backups for 
> the critical data and I'd like to try and recover as much recent data as 
> possible.  My thought was to create a disk image of the "/home/public" data 
> and mount it using vnconfig, but I seem to be having issues with the 
> appropriate 'dd' command to do that.
>
> How can I recover as much data as possible off the failed RAID array.
> If I recreate the array, "bioctl -c 1 -l /dev/wd0d,/dev/wd1d softraid0", will 
> the existing data be preserved?
>
> root@host# disklabel wd0
> # /dev/rwd0c:
> type: ESDI
> disk: ESDI/IDE disk
> label: WDC WD4001FAEX-0
> duid: acce36f25df51c8c
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 486401
> total sectors: 7814037168
> boundstart: 64
> boundend: 4294961685
> drivedata: 0
>
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   c:   78140371680  unused
>   d:   7814037104   64RAID
>
> root@host# more /var/backups/disklabel.sd1.backup
> # /dev/rsd1c:
> type: SCSI
> disk: SCSI disk
> label: SR RAID 1
> duid: 8ec2330eabf7cd26
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 486401
> total sectors: 7814036576
> boundstart: 64
> boundend: 7814036576
> drivedata: 0
>
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   a:   2147488704   64  4.2BSD   8192 65536 1 # 
> /home/public/
>   c:   78140365760  unused
>   d:   5666547712   2147488768  4.2BSD   8192 65536 1 # 
> /home/Backups/
>

I think at this point you're far better off restoring from backup.
You do have a backup, right?

As for the disks, ddrescue would be a better option than dd - it'll
keep trying if it encounters another URE whereas dd will up and quit.
Expect it to take several days on disks that big - it's designed to be
gentle to dying disks.

-- 
Aaron Mason - Programmer, open source addict
I've taken my software vows - for beta or for worse
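
[Sketch, not from the thread: a minimal ddrescue run with a map file so
an interrupted copy can resume; the package and file names are
assumptions.]

# pkg_add ddrescue
# ddrescue -r3 /dev/rwd0d wd0d.img wd0d.map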



Re: softraid: Derivation of a keydisk based on a passphrase

2018-08-15 Thread Jacqueline Jolicoeur
> My understanding is that a softraid passphrase is a seed from which bioctl 
> creates a key.
> 
> I currently use a passphrase to decrypt my CRYPTO volume.
> (How) can I create a corresponding keydisk that will decrypt the same volume?

I believe they are mutually exclusive: you can use either a passphrase or a 
keydisk, but not both.

https://www.openbsd.org/faq/faq14.html#softraidFDE
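
[Illustrative commands, not from the thread, assuming sd2a holds the
CRYPTO chunk and sd1a is a small RAID partition for the keydisk; per
bioctl(8) you pick one mechanism or the other:]

# bioctl -c C -l /dev/sd2a softraid0               (passphrase prompt)
# bioctl -c C -k /dev/sd1a -l /dev/sd2a softraid0  (keydisk, no passphrase)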



Re: SoftRAID disk size

2018-05-01 Thread Håkon Robbestad Gylterud

> I can imagine you have some old softraid metadata lying around. Did you
> try to wipe any existing metadata from the RAID partitions before
> attaching it?
> 


Thanks! That did the trick.

Best regards,
 —Håkon



Re: SoftRAID disk size

2018-05-01 Thread Alexander Hall
On Tue, May 01, 2018 at 02:36:49PM +0200, Håkon Robbestad Gylterud wrote:
> Hi,
> 
> I have two 5TB disks, which I want to set up as mirrored using RAID 1
> through softraid(4). But after attaching the disk using bioctl(8), the
> disk appears with 2TB, not 5TB.

I can imagine you have some old softraid metadata lying around. Did you
try to wipe any existing metadata from the RAID partitions before
attaching it?

# dd if=/dev/zero of=/dev/rwd0A bs=1m count=10
# dd if=/dev/zero of=/dev/rwd0B bs=1m count=10

should be more than enough. Spelling mistakes are intentional.

I DO NOT RECOMMEND THIS if you already have some stuff on there which
you care about.

# bioctl -C force ...

could also help.

/Alexander


> 
> How can I get the correct size for the softraid device?
> 
> The disks are wd0 and wd2, and disklabel shows:
> 
> # /dev/rwd0c:
> type: ESDI
> disk: ESDI/IDE disk
> label: TOSHIBA HDWE150
> duid: f76030f4c8b1cf43
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 608001
> total sectors: 9767541168
> boundstart: 64
> boundend: 9767541168
> drivedata: 0
> 
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   a:   9767541104   64RAID
>   c:   97675411680  unused
> 
> # /dev/rwd2c:
> type: ESDI
> disk: ESDI/IDE disk
> label: TOSHIBA HDWE150
> duid: 635ad6956b23ea1d
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 608001
> total sectors: 9767541168
> boundstart: 64
> boundend: 9767541168
> drivedata: 0
> 
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   a:   9767541104   64RAID
>   c:   97675411680  unused
> 
> 
> But when I attach using:
> 
> # bioctl -c 1 -l /dev/wd0a,/dev/wd2a softraid0
> softraid0: RAID 1 volume attached as sd0
> 
> dmesg shows:
> 
> sd0 at scsibus3 targ 1 lun 0:  SCSI2 0/direct fixed
> sd0: 2097148MB, 512 bytes/sector, 4294961093 sectors
> 
> Thanks in advance to any pointers in the right direction.
> 
> Best regards,
>  —Håkon
> 
> 
> 



Re: softraid crypto with keydisk and password

2017-10-10 Thread Stefan Sperling
On Tue, Oct 10, 2017 at 11:13:45PM +1100, tomr wrote:
> Well... there's nothing in the FAQ about using a keydisk at all, and
> there are no hints in bioctl(8) about using both a keydisk and a password
> together.

That's because using both isn't a supported use case yet.
In the current design and implementation, there's either a passphrase
or a keydisk, but never both.

> The last comment on this thread describes what I'd like to do, which is
> to somehow have a keydisk *and* a passphrase:
> https://undeadly.org/cgi?action=article&sid=20131112031806

Please understand that I don't have any interest in supporting such hacks.
If you use them and they work for you, that's fine of course.

I'd rather see a patch that makes this feature a proper part of the design
and implementation. I don't need this feature. But if you write a patch
to implement it properly, I will review your patch.



Re: softraid crypto with keydisk and password

2017-10-10 Thread tomr


On 09/28/17 17:58, Stefan Sperling wrote:
> On Thu, Sep 28, 2017 at 04:15:20AM +0200, Erling Westenvik wrote:
>> On Thu, Sep 28, 2017 at 09:11:49AM +1000, tomr wrote:
>>> I remember seeing a post, I think on undeadly.org, which went through
>>> having the bootloader on password-encrypted usb drive, that also
>>> contains a keyfile for the main disk. It said something like "I also
>>> wanted the laptop to appear broken, and the disk full of random data, if
>>> the usb drive wasn't present - rather than stopping at a password prompt"
>>
>> Here you go:
>>
>> http://www.undeadly.org/cgi?action=article&sid=20110530221728
> 
> Hi, I am the author of this undeadly article.
> It is now very old and full of outdated information.
> 
> Follow this FAQ section instead:
> http://www.openbsd.org/faq/faq14.html#softraid

Well... there's nothing in the FAQ about using a keydisk at all, and
there are no hints in bioctl(8) about using both a keydisk and a password
together.

The last comment on this thread describes what I'd like to do, which is
to somehow have a keydisk *and* a passphrase:
https://undeadly.org/cgi?action=article&sid=20131112031806



Re: softraid crypto with keydisk and password

2017-09-28 Thread Stefan Sperling
On Thu, Sep 28, 2017 at 04:15:20AM +0200, Erling Westenvik wrote:
> On Thu, Sep 28, 2017 at 09:11:49AM +1000, tomr wrote:
> > I remember seeing a post, I think on undeadly.org, which went through
> > having the bootloader on password-encrypted usb drive, that also
> > contains a keyfile for the main disk. It said something like "I also
> > wanted the laptop to appear broken, and the disk full of random data, if
> > the usb drive wasn't present - rather than stopping at a password prompt"
> 
> Here you go:
> 
> http://www.undeadly.org/cgi?action=article&sid=20110530221728

Hi, I am the author of this undeadly article.
It is now very old and full of outdated information.

Follow this FAQ section instead:
http://www.openbsd.org/faq/faq14.html#softraid



Re: softraid crypto with keydisk and password

2017-09-27 Thread Erling Westenvik
On Thu, Sep 28, 2017 at 09:11:49AM +1000, tomr wrote:
> I remember seeing a post, I think on undeadly.org, which went through
> having the bootloader on password-encrypted usb drive, that also
> contains a keyfile for the main disk. It said something like "I also
> wanted the laptop to appear broken, and the disk full of random data, if
> the usb drive wasn't present - rather than stopping at a password prompt"

Here you go:

http://www.undeadly.org/cgi?action=article&sid=20110530221728

Cheers,
Erling

>
> There's something similar in the comments here from @mcbride
> https://undeadly.org/cgi?action=article&sid=20131112031806
>
> But now an hour or so of searching fails to turn it up. Could anyone
> share some clues on how to go about this?

--
Erling Westenvik



Re: softraid crypto seem really slower than plain ffs

2017-09-18 Thread bofh
On Mon, Sep 18, 2017 at 11:30 AM, Joel Carnat  wrote:

> Hello,
>
> I was really annoyed by the numbers I got, so I did the testing again,
> using a brand new VM and being really careful about what I was doing,
> writing it down after each command run. I did the testing using 6.1 and
> 6.2-current, in case there had been changes. There weren't.
>
> First of all, there isn't a 10x difference between PLAIN and ENCRYPTED.
> I believe I mixed up numbers from my various tests. I also believe cloud
> providers don't/can't guarantee disk throughput: I noticed variations of
> 1x to 4x on the same VM between two days... whatever the OS was.
>

This is why you *NEVER* run OS benchmark tests on VMs.  Unless you are
benchmarking the VM system itself, it's basically worthless.

You spent a lot of time and effort - but the results are pretty much
useless because they cannot be duplicated.


Re: softraid crypto seem really slower than plain ffs

2017-09-18 Thread Joel Carnat

Hello,

I was really annoyed by the numbers I got, so I did the testing again,
using a brand new VM and being really careful about what I was doing,
writing it down after each command run. I did the testing using 6.1 and
6.2-current, in case there had been changes. There weren't.

First of all, there isn't a 10x difference between PLAIN and ENCRYPTED.
I believe I mixed up numbers from my various tests. I also believe cloud
providers don't/can't guarantee disk throughput: I noticed variations of
1x to 4x on the same VM between two days... whatever the OS was.

In the end, there only seems to be a factor of about 1.5 between PLAIN
and ENCRYPTED. And according to iostat, what happens is that when
writing to the encrypted partition (sd1a), I/O already happens on the
plain partition (sd0a).

# disklabel sd0
(...)
  a: 52420031   64RAID
  c: 524288000  unused
# disklabel sd1
(...)
  a: 48194944  4209056  4.2BSD   2048 16384 12958 # /
  b:  4208966   64swap# none
  c: 524195030  unused
# iostat -w 1 sd0 sd1
  tty  sd0   sd1 cpu
 tin tout  KB/t  t/s  MB/s   KB/t  t/s  MB/s  us ni sy in id
   0   61 16.00 5180 80.94  16.00 5180 80.94   1  0 91  8  0
   0  184 16.00 4594 71.78  16.00 4594 71.78   0  0 95  5  0
   0   61 16.00 5126 80.09  16.00 5126 80.09   1  0 95  4  0
   0   61 16.00 5014 78.34  16.00 5012 78.31   0  0 94  6  0
(...)

Regards.

On 18/09/2017 09:40, Stefan Sperling wrote:
> On Sun, Sep 17, 2017 at 07:32:49PM +0100, Kevin Chadwick wrote:
>> I'm not a developer, but I know 6.1 moved to a shiny new side-channel
>> resistant AES. I seem to remember Theo saying that if it is that slow
>> then even worse: people won't use encryption at all, and if they need
>> side-channel resistance then they could get a processor with AES-NI
>> etc. Not sure if it was reverted in the end or not.
>
> It was reverted.




Re: softraid crypto seem really slower than plain ffs

2017-09-18 Thread Stefan Sperling
On Sun, Sep 17, 2017 at 07:32:49PM +0100, Kevin Chadwick wrote:
> I'm not a developer, but I know 6.1 moved to a shiny new side-channel
> resistant AES. I seem to remember Theo saying that if it is that slow
> then even worse: people won't use encryption at all, and if they need
> side-channel resistance then they could get a processor with AES-NI
> etc. Not sure if it was reverted in the end or not.
 
It was reverted.



Re: softraid crypto seem really slower than plain ffs

2017-09-17 Thread Kevin Chadwick
On Fri, 15 Sep 2017 12:24:32 +0200


> I noticed that there was a huge difference between
> plain and encrypted filesystems using OpenBSD.

I'm not a developer, but I know 6.1 moved to a shiny new side-channel
resistant AES. I seem to remember Theo saying that if it is that slow
then even worse: people won't use encryption at all, and if they need
side-channel resistance then they could get a processor with AES-NI
etc. Not sure if it was reverted in the end or not. I guess it was
decided based on whether a use case involving a side-channel attack
existed that could catch a user out, but I have no idea.



Re: softraid crypto seem really slower than plain ffs

2017-09-15 Thread Hiltjo Posthuma
On Fri, Sep 15, 2017 at 12:24:32PM +0200, Joel Carnat wrote:
> Hi,
> 
> Initially comparing I/O speed between FreeBSD/ZFS/GELI and
> OpenBSD/FFS/CRYPTO, I noticed that there was a huge difference between
> plain and encrypted filesystems using OpenBSD. I ran the test on a 1
> vCore/1GB RAM Vultr VPS, running OpenBSD 6.2-beta. I had / configured in
> plain FFS and /home encrypted using bioctl(8). Then I ran a few `dd` and
> `bonnie++`
> 
> According to those tests, writing FFS/CRYPTO is about 10 times slower than
> FFS/PLAIN.
> For the record, using the same `dd` on FreeBSD, ZFS with GELI is only 2
> times slower than plain ZFS.
> Furthermore, comparing FreeBSD/ZFS/PLAIN and OpenBSD/FFS/PLAIN, the speed is
> about the same.
> Finally, it seems reading OpenBSD/FFS/PLAIN and OpenBSD/FFS/CRYPTO is done
> at the same speed.
> 
> Is this expected to have so much difference between FFS/PLAIN and FFS/CRYPTO
> when writing data?
> 
> TIA,
>   Jo
> 
> PS: here's my test data.
> 
> # sysctl kern.version hw.machine hw.model hw.ncpu hw.physmem
> kern.version=OpenBSD 6.2-beta (GENERIC) #91: Wed Sep 13 22:05:17 MDT 2017
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
> 
> hw.machine=amd64
> hw.model=Virtual CPU a7769a6388d5
> hw.ncpu=1
> hw.physmem=1056817152
> 
> # disklabel sd0
> # /dev/rsd0c:
> type: SCSI
> disk: SCSI disk
> label: Block Device
> duid: 69939b6a66c3879a
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 3263
> total sectors: 52428800
> boundstart: 64
> boundend: 52420095
> drivedata: 0
> 
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   a: 16739680 35680384  4.2BSD   2048 16384 12958 # /
>   b:  4208966   64swap# none
>   c: 524288000  unused
>   d: 31471335  4209030RAID
> 
> # disklabel sd1
> # /dev/rsd1c:
> type: SCSI
> disk: SCSI disk
> label: SR CRYPTO
> duid: 4179a9e67beb3d4e
> flags:
> bytes/sector: 512
> sectors/track: 63
> tracks/cylinder: 255
> sectors/cylinder: 16065
> cylinders: 1958
> total sectors: 31470807
> boundstart: 64
> boundend: 31455270
> drivedata: 0
> 
> 16 partitions:
> #size   offset  fstype [fsize bsize   cpg]
>   c: 314708070  unused
>   e:   273024   64  4.2BSD   2048 16384  2133 # /etc
>   h: 31182176   273088  4.2BSD   2048 16384 12958 # /home
> 
> # mount
> /dev/sd0a on / type ffs (local, wxallowed)
> /dev/sd1e on /etc type ffs (local, softdep)
> /dev/sd1h on /home type ffs (local, nodev, nosuid)
> 
> # df -h
> Filesystem SizeUsed   Avail Capacity  Mounted on
> /dev/sd0a  7.9G915M6.6G12%/
> /dev/sd1e  131M4.9M120M 4%/etc
> /dev/sd1h 14.6G2.0K   13.9G 0%/home
> 
> # sync && time dd if=/dev/zero of=/TEST bs=512 count=3000000 && sync
> 3000000+0 records in
> 3000000+0 records out
> 1536000000 bytes transferred in 8.567 secs (179278802 bytes/sec)
> 0m08.61s real 0m00.29s user 0m07.70s system
> 
> # sync && time dd if=/dev/zero of=/home/TEST bs=512 count=3000000 && sync
> 3000000+0 records in
> 3000000+0 records out
> 1536000000 bytes transferred in 20.875 secs (73580525 bytes/sec)
> 0m20.88s real 0m00.42s user 0m05.54s system
> 
> # sync && time dd if=/dev/zero of=/TEST bs=4k count=300000 && sync
> 300000+0 records in
> 300000+0 records out
> 1228800000 bytes transferred in 4.151 secs (296024071 bytes/sec)
> 0m04.19s real 0m00.04s user 0m04.01s system
> 
> # sync && time dd if=/dev/zero of=/home/TEST bs=4k count=300000 && sync
> 300000+0 records in
> 300000+0 records out
> 1228800000 bytes transferred in 22.872 secs (53723676 bytes/sec)
> 0m22.95s real 0m00.06s user 0m01.89s system
> 

NOTE: the 'k' suffix means 1024 bytes, so the two counts do not transfer
the same amount of data.

To match the bs=512 runs, the bs=4k count should be 375000: 4096/512=8,
3000000/8=375000.
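
[For illustration, not in the original mail: two runs that transfer the
same 1536000000 bytes.]

dd if=/dev/zero of=/home/TEST bs=512 count=3000000  # 3000000 * 512  = 1536000000
dd if=/dev/zero of=/home/TEST bs=4k  count=375000   # 375000  * 4096 = 1536000000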

my write numbers are:
run 1

+ dd if=/dev/zero of=/home/TEST bs=512 count=2400000
2400000+0 records in
2400000+0 records out
1228800000 bytes transferred in 8.616 secs (142611817 bytes/sec)
0m09.33s real 0m00.20s user 0m09.05s system

+ dd if=/dev/zero of=/home/TEST bs=4k count=300000
300000+0 records in
300000+0 records out
1228800000 bytes transferred in 5.591 secs (219749191 bytes/sec)
0m05.59s real 0m00.02s user 0m05.46s system

4k, 8k, 16k, 32k and 64k are comparable on my machine.


run 2

+ dd if=/dev/zero of=/home/TEST bs=512 count=2400000
2400000+0 records in
2400000+0 records out
1228800000 bytes transferred in 8.748 secs (140451506 bytes/sec)
0m09.24s real 0m00.26s user 0m08.87s system

+ dd if=/dev/zero of=/home/TEST bs=4k count=300000
300000+0 records in
300000+0 records out
1228800000 bytes transferred in 5.140 secs (239049708 bytes/sec)
0m05.87s real 0m00.03s user 0m05.74s system

Re: softraid mirror & large drives (3T)

2017-04-18 Thread Karel Gardas
On Tue, Apr 18, 2017 at 7:02 PM, Ian Watts  wrote:
> Thanks for the feedback, Karel, Allan, and Kamil.  The motivation is
> long-term data storage reliability.  For example, my wife creates
> graphical books, which involves large files, plus other work and
> personal files.
>

so kind of SOHO NAS?

> Having a mirror is not terribly important, so doing a nightly sync to
> another machine is possible.

IMHO a mirror is nice to have, and if you combine it with rsync to a
backup in case of any changes, even better.

> Since it's been mentioned, what SATA RAID controller cards are
> recommended for OpenBSD on i386?  I wonder if they would fit my budget.

I'm not in the SATA RAID business; I prefer a simple HBA plus softraid/ZFS
(the latter not on OpenBSD), and since I only use SR-RAID1, the board's
number of SATA connectors is usually good enough.

But using i386/openbsd on an AMD E2-3200 is IMHO a pure waste of the
CPU resources you have at your disposal. I'd recommend going with
amd64/openbsd on this.

> Has the "supported hardware" page been removed from the openbsd.org

I would start with man mpi/mpii/ami or so...

Karel



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Ian Watts
Thanks for the feedback, Karel, Allan, and Kamil.  The motivation is 
long-term data storage reliability.  For example, my wife creates 
graphical books, which involves large files, plus other work and 
personal files.  

Having a mirror is not terribly important, so doing a nightly sync to 
another machine is possible.

Since it's been mentioned, what SATA RAID controller cards are 
recommended for OpenBSD on i386?  I wonder if they would fit my budget.  
Has the "supported hardware" page been removed from the openbsd.org 
website?  I only found such a page here:
http://openbsd.das.ufsc.br/i386.html#hardware


Thanks,

-- Ian

P.S., Karel, many Americans confuse loose/lose.  :)


On Tue, 18 Apr 2017, Karel Gardas wrote:

> loose -> lose. Sorry not native English speaker here.
> 
> On Tue, Apr 18, 2017 at 6:09 PM, Karel Gardas  wrote:
> > How much data can you loose on this mirror? The rebuild time is long
> > and the chance of another drive dying is higher during rebuild so I
> > would consider either increasing redundancy to 3-way mirror or
> > decreasing time between backups. All depending on how much data you
> > can loose when something goes wrong.
> 
> 



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Karel Gardas
loose -> lose. Sorry not native English speaker here.

On Tue, Apr 18, 2017 at 6:09 PM, Karel Gardas  wrote:
> How much data can you loose on this mirror? The rebuild time is long
> and the chance of another drive dying is higher during rebuild so I
> would consider either increasing redundancy to 3-way mirror or
> decreasing time between backups. All depending on how much data you
> can loose when something goes wrong.



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Karel Gardas
On Tue, Apr 18, 2017 at 1:56 AM, Ian Watts  wrote:
> After 17 hours it is 24% complete, so it'll be about three
> days to complete.  The system is:

How much data can you loose on this mirror? The rebuild time is long
and the chance of another drive dying is higher during rebuild so I
would consider either increasing redundancy to 3-way mirror or
decreasing time between backups. All depending on how much data you
can loose when something goes wrong.



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Stuart Henderson
On 2017-04-18, Allan Streib  wrote:
> Ian Watts  writes:
>
>> With this much disk space, should I be looking at another way of
>> achieving data redundancy?
>
> Buy a hardware RAID controller.

I'd sooner have decent software RAID with disks spread across multiple
controllers.




Re: softraid mirror & large drives (3T)

2017-04-18 Thread trondd
On Tue, April 18, 2017 8:48 am, Kamil Cholewiński wrote:
> On Tue, 18 Apr 2017, Jiri B  wrote:
>> On Tue, Apr 18, 2017 at 08:23:56AM -0400, Allan Streib wrote:
>>> Buy a hardware RAID controller.
>>
>> I suppose you meant to write 'buy two identical hardware RAID
>> controllers' - or how would you solve the problem of a broken hw raid
>> controller circa 10 years from now? :-)
>>
>> j.
>
> Redundant machines in isolated failure zones.
>
> <3,K.
>

Woah.  Hold on.  There is a difference between backup and availability.

Copying your data to remote locations is part of backup.  RAID is for
availability (with integrity possibly included) but is not backup.

I initially read the original post as being about availability, but maybe
I am wrong.  What is the desired goal?  What is the usage?  Personal or
business?



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Kamil Cholewiński
On Tue, 18 Apr 2017, Jiri B  wrote:
> On Tue, Apr 18, 2017 at 08:23:56AM -0400, Allan Streib wrote:
>> Buy a hardware RAID controller.
>
> I suppose you meant to write 'buy two identical hardware RAID controllers' -
> or how would you solve the problem of a broken hw raid controller circa
> 10 years from now? :-)
>
> j.

Redundant machines in isolated failure zones.

<3,K.



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Jiri B
On Tue, Apr 18, 2017 at 08:23:56AM -0400, Allan Streib wrote:
> Ian Watts  writes:
> 
> > With this much disk space, should I be looking at another way of
> > achieving data redundancy?
> 
> Buy a hardware RAID controller.

I suppose you meant to write 'buy two identical hardware RAID controllers' -
or how would you solve the problem of a broken hw raid controller circa
10 years from now? :-)

j.



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Allan Streib
Ian Watts  writes:

> With this much disk space, should I be looking at another way of
> achieving data redundancy?

Buy a hardware RAID controller.

Allan



Re: softraid mirror & large drives (3T)

2017-04-18 Thread Nick Holland
On 04/17/17 19:56, Ian Watts wrote:
> Hello,
> 
> I'm planning on replacing an old fileserver that has a single 1T drive 
> with something a little newer having 3T of space.  I have two 3T drives 
> and have installed OpenBSD 6.0 to both as a softraid mirror.  Works well 
> and I simulated a drive failure by shutting it down, removing a drive, 
> and rebooting.  The drive has been re-installed and it is now rebuilding 
> the mirror.  After 17 hours it is 24% complete, so it'll be about three 
> days to complete.  The system is:
> 
> AMD E2-3200 2.40 GHz
> 4G RAM
> 2 x 3T Seagate Barracuda 7200rpm SATA 
> 
> With this much disk space, should I be looking at another way of 
> achieving data redundancy?  The goal is to increase redundancy of the 
> data and the mirror would be periodically backed up to another server in 
> a different building.  My only concern here is the suitability of the 
> softraid mirror for a large filesystem.  I've thought of using the 
> second drive as a backup and rsync'ing it nightly, but then failure of 
> the primary drive would mean more downtime before it's operational 
> again.  A long rebuild time isn't a major problem; just want to make 
> sure I'm not overlooking a more sensible option.
> 
> FWIW, I used the following info to get set up:
> 
> https://www.openbsd.org/faq/faq14.html#softraidDI
> http://openbsd-archive.7691.n7.nabble.com/Large-3TB-HDD-support-td95308.html
> 
> Thanks,
> 
> -- Ian

Keep in mind, it's easy to say and now trivial to buy "3TB disks", and
therefore, it's easy to forget that it is a SNOOTLOAD of data.  Three
days to mirror 3TB isn't out of line for some HW mirroring systems I've
worked with, and much faster than many.

Still...verify that you are running with an ahci(4) controller (sd(4)
disks), not a pciide(4) controller (wd(4) disks).  (Though at one point,
I don't think it was possible to have wd(4) disks that big; not sure if
that's still true.  And I suspect if you were running wd(4), it might be
weeks, not days.)

And yes, when you have a three TB of data and a three day rebuild
period, the possibility of a second disk failure during rebuild is
definitely not zero, so yes, I'd suggest *considering* some alternative
ways to achieve data security.
* Three disk RAID1?  (a REALLY good idea)
* Checksumming "static" files?
* rsync'ing data between stand-alone disks?  (IF you can restrict the
amount of data you have to have rsync look at, you can sync a LOT of
data very quickly)
* "Chunk" (or partition) your data as best you can, so you can mount
blocks of storage Read Only, as "full and unchanging" (note lack of
questionmark -- you want to do this if at all possible) (chunk your
data, but NOT your RAID partitions -- last thing you want to get stuck
doing is remirroring multiple RAID partitions on one disk at the same time!)
* Something else relevant to your situation?

Nick.
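
[Sketch, not from the thread: the three-disk RAID1 suggested above is the
same bioctl invocation with one more chunk; device names are assumptions.]

# bioctl -c 1 -l /dev/sd0a,/dev/sd1a,/dev/sd2a softraid0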



Re: softraid & GPT configuration.

2017-03-05 Thread Christian Weisgerber
On 2017-03-03, Eric Huiban  wrote:

> bioctl needs a bootable partition to act correctly, even on disks not
> meant to be bootable.

I find that very surprising.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: softraid & GPT configuration.

2017-03-04 Thread Stuart Henderson
On 2017-03-03, Eric Huiban  wrote:
> I just performed a remote connection... recreating the GPT with an EFI
> boot partition. The softraid is now 2.7 TiB... Grumbl! Conclusion:
> bioctl needs a bootable partition to act correctly, even on disks not
> meant to be bootable.

Too late now but you don't need GPT for this if it's OpenBSD-only,
just do 'b' '*' in disklabel.
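
[Illustrative disklabel(8) editor session, not from the thread; the
exact prompts vary by release.]

# disklabel -E sd1
Label editor (enter '?' for help at any prompt)
> b
Starting sector: [64]
Size ('*' for entire disk): *
> w
> q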



Re: softraid & GPT configuration.

2017-03-03 Thread Eric Huiban

Eric Huiban wrote:
> Stefan Sperling wrote:
>> On Fri, Mar 03, 2017 at 01:27:20PM +0100, Eric Huiban wrote:
>>> Hello,
>>>
>>> I must have missed something in the man pages about softraid and
>>> bioctl. But I want to form a RAID 1 between two 3TB hard disks
>>> (2.7 TiB) and it is acting like 2TiB MBR disks with OpenBSD 6.0.
>>>
>>> fdisk -ig sd1 is OK.
>>
>> Did you also use the -b option?
>>
>> The FAQ now lists the steps for EFI setups:
>> http://www.openbsd.org/faq/faq14.html#softraid
>> Did you see this already?
>
> I saw that but I dismissed it since I do not need an EFI boot for my
> "data container" disks. (Aside from the container disks I've got a
> system disk with traditional image & rsync backups.)
>
> Anyway, I did not find anywhere a mention that a softraid RAID 1 needs
> to be based on bootable units, and the topic of managing "huge" disks
> seemed to me not well covered. The idea of a mandatory boot partition
> looked weird to me... I just posted an open question to the list.
>
> Eric.

I just performed a remote connection... recreating the GPT with an EFI
boot partition. The softraid is now 2.7 TiB... Grumbl! Conclusion:
bioctl needs a bootable partition to act correctly, even on disks not
meant to be bootable.


Sorry for the noise.
Eric.



Re: softraid & GPT configuration.

2017-03-03 Thread Eric Huiban

Stefan Sperling wrote:
> On Fri, Mar 03, 2017 at 01:27:20PM +0100, Eric Huiban wrote:
>> Hello,
>>
>> I must have missed something in the man pages about softraid and
>> bioctl. But I want to form a RAID 1 between two 3TB hard disks
>> (2.7 TiB) and it is acting like 2TiB MBR disks with OpenBSD 6.0.
>>
>> fdisk -ig sd1 is OK.
>
> Did you also use the -b option?
>
> The FAQ now lists the steps for EFI setups:
> http://www.openbsd.org/faq/faq14.html#softraid
> Did you see this already?

I saw that but I dismissed it since I do not need an EFI boot for my
"data container" disks. (Aside from the container disks I've got a
system disk with traditional image & rsync backups.)

Anyway, I did not find anywhere a mention that a softraid RAID 1 needs
to be based on bootable units, and the topic of managing "huge" disks
seemed to me not well covered. The idea of a mandatory boot partition
looked weird to me... I just posted an open question to the list.

Eric.



Re: softraid & GPT configuration.

2017-03-03 Thread Stefan Sperling
On Fri, Mar 03, 2017 at 01:27:20PM +0100, Eric Huiban wrote:
> Hello,
> 
> I must have missed something in the man pages about softraid and bioctl. But
> I want to form a RAID 1 between two 3TB hard disks (2.7 TiB) and it is acting
> like 2TiB MBR disks with OpenBSD 6.0.
> 
> fdisk -ig sd1 is OK.

Did you also use the -b option?

The FAQ now lists the steps for EFI setups:
http://www.openbsd.org/faq/faq14.html#softraid
Did you see this already?
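
[Sketch, not from the thread: the FAQ recipe of the era boiled down to
creating the GPT with a small boot partition via fdisk's -b flag before
labeling; the 960-sector size and device name are assumptions.]

# fdisk -iy -g -b 960 sd1
# disklabel -E sd1        (then add the RAID 'a' partition as usual)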



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread lists
> I'm taking the plunge now.

You're done with the swings.



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Nick Holland
On 11/16/16 11:52, Ax0n wrote:
> I'm taking the plunge now. Mostly, I was concerned about SSD longevity and
> if TRIM would be a problem due to the different way data is going to be
> accessed. It was the cheapest drive I could find locally anyway, and I keep
> good backups (dump to a much larger external drive that's also using
> softraid crypto) so I suppose if it burns up in a year it's not really that
> big of a problem.

Make good backups, and if it burns up in a year (which it may or may not
do regardless of what SSD-specific bullsh*t you do with it), say
"thanks!" and go buy yourself one twice as big, twice as fast and half
as expensive (and possibly more reliable).

If it doesn't fail in a year or two, I suggest removing the SSD with the
help of a wool carpet and rubber shoes, or better yet, just look panicked,
tell your significant other it failed and hope they don't look too
closely, and rush out to buy the upgrade.  The panicked look is
important, though.

Nick.



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread lists
Wed, 16 Nov 2016 19:10:08 +0100 ludovic coues 
> Trim and SSD longevity and whatnot may have been an issue when SSDs were a
> novelty.
> These days, one should last just as long as a hard drive. So make backups
> of what matters and don't worry about your disk.

Hi Ludovic,

You have to face it: the issue is both the SSD medium and the controller.

I give it a decade of backed-up cached usage, in lieu of actual storage.

It is an industry shame that solid state tech is beaten by mechanicals.

Kind regards,
Anton



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread ludovic coues
Trim and SSD longevity and whatnot may have been an issue when SSDs were a
novelty.
These days, one should last just as long as a hard drive. So make backups
of what matters and don't worry about your disk.

On 16 Nov 2016 5:54 p.m., "Ax0n"  wrote:

> I'm taking the plunge now. Mostly, I was concerned about SSD longevity and
> if TRIM would be a problem due to the different way data is going to be
> accessed. It was the cheapest drive I could find locally anyway, and I keep
> good backups (dump to a much larger external drive that's also using
> softraid crypto) so I suppose if it burns up in a year it's not really that
> big of a problem.
>
> On Wed, Nov 16, 2016 at 10:33 AM, Marc Peters  wrote:
>
> > On 11/16/16 at 17:07, Ax0n wrote:
> > > I'm less concerned about swap, and more concerned about how a fully
> > > encrypted softraid Solid State Disk is going to act. I can't find a lot
> > > about FDE on SSD.
> > >
> >
> > It acts as a normal hard disk would, just faster :). I had one in the
> > work laptop I used before for about two years, and I have one in my
> > current work laptop. No problems.



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Ax0n
I'm taking the plunge now. Mostly, I was concerned about SSD longevity and
if TRIM would be a problem due to the different way data is going to be
accessed. It was the cheapest drive I could find locally anyway, and I keep
good backups (dump to a much larger external drive that's also using
softraid crypto) so I suppose if it burns up in a year it's not really that
big of a problem.

On Wed, Nov 16, 2016 at 10:33 AM, Marc Peters  wrote:

> On 11/16/16 at 17:07, Ax0n wrote:
> > I'm less concerned about swap, and more concerned about how a fully
> > encrypted softraid Solid State Disk is going to act. I can't find a lot
> > about FDE on SSD.
> >
>
> It acts as a normal hard disk would, just faster :). I had one in the
> work laptop I used before for about two years, and I have one in my
> current work laptop. No problems.



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Marc Peters
On 11/16/16 at 17:07, Ax0n wrote:
> I'm less concerned about swap, and more concerned about how a fully
> encrypted softraid Solid State Disk is going to act. I can't find a lot
> about FDE on SSD.
> 

It acts as a normal hard disk would, just faster :). I had one in the
work laptop I used before for about two years, and I have one in my
current work laptop. No problems.



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Ax0n
I'm less concerned about swap, and more concerned about how a fully
encrypted softraid Solid State Disk is going to act. I can't find a lot
about FDE on SSD.

On Wed, Nov 16, 2016 at 9:41 AM, trondd  wrote:

> On Wed, November 16, 2016 10:23 am, Jiri B wrote:
> > On Wed, Nov 16, 2016 at 09:14:51AM -0600, Ax0n wrote:
> >> I just purchased a SanDisk SSD for my daily-driver laptop which has been
> >> running -CURRENT well. I'm considering going with FDE and a fresh
> >> snapshot
> >> install, adding my packages then copying over what I need from my old
> >> spinning rust drive, mostly /home and the ssh host keys from /etc/ssh.
> >>
> >> Anything I should look out for? To be honest, this is my first
> >> experience
> >> installing anything onto an SSD so I'd be welcome to accept any pointers
> >> specific to OpenBSD. Searching misc@ for as long as I've been
> subscribed
> >> hasn't yielded any solid input on this.
> >
> > Not sure if encrypting swap still makes sense if you already have FDE.
> > What's recommended in this context?
> >
> > j.
> >
>
> It's been discussed previously.  Relevant comment from the thread:
>
> http://marc.info/?l=openbsd-misc&m=143206067713324&w=2
>
> And hint, you can search an online archive instead of being limited to
> searching "for as long as you've been subscribed" :)



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Stefan Sperling
On Wed, Nov 16, 2016 at 10:23:39AM -0500, Jiri B wrote:
> On Wed, Nov 16, 2016 at 09:14:51AM -0600, Ax0n wrote:
> > I just purchased a SanDisk SSD for my daily-driver laptop which has been
> > running -CURRENT well. I'm considering going with FDE and a fresh snapshot
> > install, adding my packages then copying over what I need from my old
> > spinning rust drive, mostly /home and the ssh host keys from /etc/ssh.
> > 
> > Anything I should look out for? To be honest, this is my first experience
> > installing anything onto an SSD so I'd be welcome to accept any pointers
> > specific to OpenBSD. Searching misc@ for as long as I've been subscribed
> > hasn't yielded any solid input on this.
> 
> Not sure if encrypting swap still makes sense if you already have FDE.
> What's recommended in this context?
> 
> j.
> 

I always leave swap crypt enabled anyway. Less hassle, and one more layer
for an attacker to poke through for finding leftover bits of data from RAM.
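
[For reference, not from the thread: swap encryption is controlled by a
sysctl and has long been enabled by default; it can be checked with:]

# sysctl vm.swapencrypt.enable
vm.swapencrypt.enable=1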



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread trondd
On Wed, November 16, 2016 10:23 am, Jiri B wrote:
> On Wed, Nov 16, 2016 at 09:14:51AM -0600, Ax0n wrote:
>> I just purchased a SanDisk SSD for my daily-driver laptop which has been
>> running -CURRENT well. I'm considering going with FDE and a fresh
>> snapshot
>> install, adding my packages then copying over what I need from my old
>> spinning rust drive, mostly /home and the ssh host keys from /etc/ssh.
>>
>> Anything I should look out for? To be honest, this is my first
>> experience
>> installing anything onto an SSD so I'd be welcome to accept any pointers
>> specific to OpenBSD. Searching misc@ for as long as I've been subscribed
>> hasn't yielded any solid input on this.
>
> Not sure if encrypting swap still makes sense if you already have FDE.
> What's recommended in this context?
>
> j.
>

It's been discussed previously.  Relevant comment from the thread:

http://marc.info/?l=openbsd-misc&m=143206067713324&w=2

And hint, you can search an online archive instead of being limited to
searching "for as long as you've been subscribed" :)



Re: softraid(4) full-disk encryption on SSD

2016-11-16 Thread Jiri B
On Wed, Nov 16, 2016 at 09:14:51AM -0600, Ax0n wrote:
> I just purchased a SanDisk SSD for my daily-driver laptop which has been
> running -CURRENT well. I'm considering going with FDE and a fresh snapshot
> install, adding my packages then copying over what I need from my old
> spinning rust drive, mostly /home and the ssh host keys from /etc/ssh.
> 
> Anything I should look out for? To be honest, this is my first experience
> installing anything onto an SSD so I'd be welcome to accept any pointers
> specific to OpenBSD. Searching misc@ for as long as I've been subscribed
> hasn't yielded any solid input on this.

Not sure if encrypting swap still makes sense if you already have FDE.
What's recommended in this context?

j.



Re: softraid crypto performance on Sun Fire T1000

2016-11-07 Thread Alexander Bochmann
Hi,

...on Sat, Oct 29, 2016 at 03:06:05PM +0200, Jonathan Schleifer wrote:

 > While a single core of the T1000 is quite slow, this just seems too slow,
 > making this setup unusable. openssl speed shows 10 MB/s for AES-128-CBC and 7
 > MB/s for AES-256-CBC on a single core. So a single core is definitely capable
 > of more than just 2 MB/s. While even 10 MB/s is still slow for today, it's

A long time ago, compiler flags made a hell of a difference 
for openssl on sparc64 (and I assume that kernel crypto might 
behave in a similar way)...

I don't know about the current defaults in OpenBSD/sparc64, 
but for a T1 cpu, you could try rebuilding the kernel with 
something like "-mcpu=v9 -mtune=niagara" in mk.conf COPTS, 
and check if you see an improvement.

You'll be on your own with any problems though - custom 
compiler optimizations for the system are generally frowned 
upon :)

Alex.
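
[Sketch of that mk.conf change, not from the thread, under the same
"unsupported" caveat; flags as suggested above.]

# /etc/mk.conf
COPTS=-mcpu=v9 -mtune=niagara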



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Jonathan Schleifer
> Uhm, but the dd command wasn't :-) (the guest's root disk is sd2, not
sd0...)
>
> Now our numbers align much better:
>
> # dd if=/dev/rsd2c of=/dev/null bs=10m count=50
> 50+0 records in
> 50+0 records out
> 524288000 bytes transferred in 131.796 secs (3978008 bytes/sec)

Ah, thanks. I was just about to destroy my RAID-1 and see if that makes a
difference :).

So, the difference is pretty much the kernel locking: The fewer cores, the
better the performance.

But this still means that the softraid crypto performance is way below what
openssl speed gives. I wonder if openssl (well, libressl) is just using a more
efficient AES implementation, possibly one with inline assembly. Time to look
at sources :).

> For reference, the guest's raw disk read speed was:
>
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=50
> 50+0 records in
> 50+0 records out
> 524288000 bytes transferred in 11.481 secs (45663843 bytes/sec)
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=500
> 500+0 records in
> 500+0 records out
> 5242880000 bytes transferred in 128.997 secs (40643390 bytes/sec)

Yup, that matches mine. Which is still way below what the HD should be able to
get. But, as said, with bsd.sp I get 80 MB/s, which seems closer to what it
should be.

Thanks for your help in debugging!

--
Jonathan



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Stefan Sperling
On Sat, Oct 29, 2016 at 07:53:06PM +0200, Stefan Sperling wrote:
> > Are you sure that LDOM was indeed using softraid crypto?
> 
> Yes.

Uhm, but the dd command wasn't :-) (the guest's root disk is sd2, not sd0...)

Now our numbers align much better:

# dd if=/dev/rsd2c of=/dev/null bs=10m count=50
50+0 records in
50+0 records out
524288000 bytes transferred in 131.796 secs (3978008 bytes/sec)

For reference, the guest's raw disk read speed was:

# dd if=/dev/rsd0c of=/dev/null bs=10m count=50
50+0 records in
50+0 records out
524288000 bytes transferred in 11.481 secs (45663843 bytes/sec)
# dd if=/dev/rsd0c of=/dev/null bs=10m count=500
500+0 records in
500+0 records out
5242880000 bytes transferred in 128.997 secs (40643390 bytes/sec)



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Stefan Sperling
On Sat, Oct 29, 2016 at 07:39:29PM +0200, Jonathan Schleifer wrote:
> > I have the 1GHz version with 4 cores (32 threads).
> 
> Ok, so same per-core speed, so single-threaded performance should be the same.
> (Btw, you have 8 cores, not 4. 8 cores @ 4 threads each.)
> 
> > Otherwise it's probably similar to yours.
> > It's running 6.0 at the moment, yes. Some guests are running -current.
> 
> Was the guest in which you ran softraid crypto -current or 6.0?

Until I get around to upgrading it, this guest is running

OpenBSD 6.0 (GENERIC.MP) #1160: Sat Jul 16 02:47:56 MDT 2016
dera...@sparc64.openbsd.org:/usr/src/sys/arch/sparc64/compile/GENERIC.MP

> Your dmesg looks similar to mine, except for a few differences:

The dmesg I posted is old. I copied it out of my mail archives.
This system was first installed with some snapshot between 5.2
and 5.3 releases, I believe.

It looks like the biggest difference is a single 3.5" disk vs hardware
RAID with two 2.5" disks (and the RAID still being built?).

When two or more guests perform heavy I/O, my system gets very slow.
I hope that SSDs will help here since they cope better with random access.
Perhaps you should investigate that option, too.

> Are you sure that LDOM was indeed using softraid crypto?

Yes.



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Jonathan Schleifer
> I have the 1GHz version with 4 cores (32 threads).

Ok, so same per-core speed, so single-threaded performance should be the
same.
(Btw, you have 8 cores, not 4. 8 cores @ 4 threads each.)

> Otherwise it's probably similar to yours.
> It's running 6.0 at the moment, yes. Some guests are running -current.

Was the guest in which you ran softraid crypto -current or 6.0?

> Here's a dmesg from a few years ago which I copied before LDOMs were
configured.
> With guests configured the host dmesg changes since it has fewer resources.
>
> real mem = 8455716864 (8064MB)
> avail mem = 8304115712 (7919MB)
> mainbus0 at root: SPARC Enterprise T1000
> cpu0 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu1 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu2 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu3 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu4 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu5 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu6 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu7 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu8 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu9 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu10 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu11 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu12 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu13 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu14 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu15 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu16 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu17 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu18 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu19 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu20 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu21 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu22 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu23 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu24 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu25 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu26 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu27 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu28 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu29 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu30 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> cpu31 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
> vbus0 at mainbus0
> "flashprom" at vbus0 not configured
> cbus0 at vbus0
> vldc0 at cbus0
> vldcp0 at vldc0 chan 0x0: ivec 0x200, 0x201 channel "hvctl"
> "ldom-primary" at vldc0 chan 0x1 not configured
> "fmactl" at vldc0 chan 0x3 not configured
> vldc1 at cbus0
> "ldmfma" at vldc1 chan 0x4 not configured
> vldc2 at cbus0
> vldcp1 at vldc2 chan 0x14: ivec 0x228, 0x229 channel "spds"
> "system-management" at vldc2 chan 0xd not configured
> vcons0 at vbus0: ivec 0x111, console
> vrtc0 at vbus0
> "fma" at vbus0 not configured
> "sunvts" at vbus0 not configured
> "sunmc" at vbus0 not configured
> "explorer" at vbus0 not configured
> "led" at vbus0 not configured
> "flashupdate" at vbus0 not configured
> "ncp" at vbus0 not configured
> vpci0 at mainbus0: bus 2 to 2, dvma map 8000-
> pci0 at vpci0
> ebus0 at mainbus0
> com0 at ebus0 addr c2c000-c2c007 ivec 0xa: st16650, 32 byte fifo
> vpci1 at mainbus0: bus 2 to 4, dvma map 8000-
> pci1 at vpci1
> ppb0 at pci1 dev 0 function 0 "ServerWorks PCIE-PCIX" rev 0xb3
> pci2 at ppb0 bus 3
> bge0 at pci2 dev 4 function 0 "Broadcom BCM5714" rev 0xa2, BCM5715 A1
(0x9001): ivec 0x7d4, address 00:14:4f:ae:b5:28
> brgphy0 at bge0 phy 1: BCM5714 10/100/1000baseT/SX PHY, rev. 0
> bge1 at pci2 dev 4 function 1 "Broadcom BCM5714" rev 0xa2, BCM5715 A1
(0x9001): ivec 0x7d5, address 00:14:4f:ae:b5:29
> brgphy1 at bge1 phy 1: BCM5714 10/100/1000baseT/SX PHY, rev. 0
> ppb1 at pci2 dev 8 function 0 "ServerWorks HT-1000 PCIX" rev 0xb3
> pci3 at ppb1 bus 4
> bge2 at pci3 dev 1 function 0 "Broadcom BCM5704C" rev 0x10, BCM5704 B0
(0x2100): ivec 0x7c2, address 00:14:4f:ae:b5:2a
> brgphy2 at bge2 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
> bge3 at pci3 dev 1 function 1 "Broadcom BCM5704C" rev 0x10, BCM5704 B0
(0x2100): ivec 0x7c1, address 00:14:4f:ae:b5:2b
> brgphy3 at bge3 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
> mpi0 at pci3 dev 2 function 0 "Symbios Logic SAS1064" rev 0x02: msi
> scsibus0 at mpi0: 63 targets
> sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct
fixed naa.5000cca20ec9a366
> sd0: 476940MB, 512 bytes/sector, 976773168 sectors
> vscsi0 at root
> scsibus1 at vscsi0: 256 targets
> softraid0 at root
> scsibus2 at softraid0: 256 targets
> bootpath: /pci@7c0,0/pci@0,0/pci@8,0/scsi@2,0/disk@0,0
> root on sd0a (c29f49f8ceac7c2e.a) swap on sd0b dump on sd0b

Your dmesg looks similar to mine, except for a few differences:

Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Stefan Sperling
On Sat, Oct 29, 2016 at 06:57:00PM +0200, Jonathan Schleifer wrote:
> Oh, wow, these are *much* better than what I get. Which CPU do you have? I 
> have 6x 1 GHz (meaning 24 threads). Are you running 6.0?
> 
> Thank you for these numbers, they make me much more hopeful about this 
> machine.

I have the 1GHz version with 4 cores (32 threads).
Otherwise it's probably similar to yours.
It's running 6.0 at the moment, yes. Some guests are running -current.

Here's a dmesg from a few years ago which I copied before LDOMs were configured.
With guests configured the host dmesg changes since it has fewer resources.

real mem = 8455716864 (8064MB)
avail mem = 8304115712 (7919MB)
mainbus0 at root: SPARC Enterprise T1000
cpu0 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu1 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu2 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu3 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu4 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu5 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu6 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu7 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu8 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu9 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu10 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu11 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu12 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu13 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu14 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu15 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu16 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu17 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu18 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu19 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu20 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu21 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu22 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu23 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu24 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu25 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu26 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu27 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu28 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu29 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu30 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
cpu31 at mainbus0: SUNW,UltraSPARC-T1 (rev 0.0) @ 1000 MHz
vbus0 at mainbus0
"flashprom" at vbus0 not configured
cbus0 at vbus0
vldc0 at cbus0
vldcp0 at vldc0 chan 0x0: ivec 0x200, 0x201 channel "hvctl"
"ldom-primary" at vldc0 chan 0x1 not configured
"fmactl" at vldc0 chan 0x3 not configured
vldc1 at cbus0
"ldmfma" at vldc1 chan 0x4 not configured
vldc2 at cbus0
vldcp1 at vldc2 chan 0x14: ivec 0x228, 0x229 channel "spds"
"system-management" at vldc2 chan 0xd not configured
vcons0 at vbus0: ivec 0x111, console
vrtc0 at vbus0
"fma" at vbus0 not configured
"sunvts" at vbus0 not configured
"sunmc" at vbus0 not configured
"explorer" at vbus0 not configured
"led" at vbus0 not configured
"flashupdate" at vbus0 not configured
"ncp" at vbus0 not configured
vpci0 at mainbus0: bus 2 to 2, dvma map 8000-
pci0 at vpci0
ebus0 at mainbus0
com0 at ebus0 addr c2c000-c2c007 ivec 0xa: st16650, 32 byte fifo
vpci1 at mainbus0: bus 2 to 4, dvma map 8000-
pci1 at vpci1
ppb0 at pci1 dev 0 function 0 "ServerWorks PCIE-PCIX" rev 0xb3
pci2 at ppb0 bus 3
bge0 at pci2 dev 4 function 0 "Broadcom BCM5714" rev 0xa2, BCM5715 A1 (0x9001): 
ivec 0x7d4, address 00:14:4f:ae:b5:28
brgphy0 at bge0 phy 1: BCM5714 10/100/1000baseT/SX PHY, rev. 0
bge1 at pci2 dev 4 function 1 "Broadcom BCM5714" rev 0xa2, BCM5715 A1 (0x9001): 
ivec 0x7d5, address 00:14:4f:ae:b5:29
brgphy1 at bge1 phy 1: BCM5714 10/100/1000baseT/SX PHY, rev. 0
ppb1 at pci2 dev 8 function 0 "ServerWorks HT-1000 PCIX" rev 0xb3
pci3 at ppb1 bus 4
bge2 at pci3 dev 1 function 0 "Broadcom BCM5704C" rev 0x10, BCM5704 B0 
(0x2100): ivec 0x7c2, address 00:14:4f:ae:b5:2a
brgphy2 at bge2 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
bge3 at pci3 dev 1 function 1 "Broadcom BCM5704C" rev 0x10, BCM5704 B0 
(0x2100): ivec 0x7c1, address 00:14:4f:ae:b5:2b
brgphy3 at bge3 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
mpi0 at pci3 dev 2 function 0 "Symbios Logic SAS1064" rev 0x02: msi
scsibus0 at mpi0: 63 targets
sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct 
fixed naa.5000cca20ec9a366
sd0: 476940MB, 512 bytes/sector, 976773168 sectors
vscsi0 at root
scsibus1 at vscsi0: 256 targets
softraid0 at root
scsibus2 at softraid0: 256 targets
bootpath: /pci@7c0,0/pci@0,0/pci@8,0/scsi@2,0/disk@0,0
root on sd0a (c29f49f8ceac7c2e.a) swap on sd0b dump on sd0b



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Jonathan Schleifer
On 29.10.2016 at 18:34, Stefan Sperling wrote:

> On Sat, Oct 29, 2016 at 06:08:37PM +0200, Jonathan Schleifer wrote:
>> Hm, my main problem seems to be that whenever I decrypt something from the
>> disk, all other 23 cores seem to get stalled.
>>
>> So, would you recommend doing the following then:
>>
>> * Have a partition for the main system on a softraid crypto
>> * Have an unencrypted partition for the LDOMs
>> * Do softraid crypto in every LDOM
>
> I don't care about encrypting the host. It has no secrets.
> Some of my guests boot from softraid crypto disks (see 'man boot_sparc64').

Yeah, that's what I'm doing on the host. My main reason for encrypting the
host was so that it cannot leak anything (and to give some authentication,
since then only the bootloader could be changed - I was actually
considering writing some FORTH to verify it before loading it).

> On the host (single 3.5" SAS disk which came with the system, no softraid):
>
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=50
> 50+0 records in
> 50+0 records out
> 524288000 bytes transferred in 8.658 secs (60551625 bytes/sec)
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=500
> 500+0 records in
> 500+0 records out
> 5242880000 bytes transferred in 83.572 secs (62734555 bytes/sec)
> # sysctl hw.ncpu
> hw.ncpu=2
>
> In a guest which uses softraid crypto as its root disk:
>
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=50
> 50+0 records in
> 50+0 records out
> 524288000 bytes transferred in 11.481 secs (45663843 bytes/sec)
> # dd if=/dev/rsd0c of=/dev/null bs=10m count=500
> 500+0 records in
> 500+0 records out
> 5242880000 bytes transferred in 128.997 secs (40643390 bytes/sec)
> # sysctl hw.ncpu
> hw.ncpu=2

Oh, wow, these are *much* better than what I get. Which CPU do you have? I
have 6x 1 GHz (meaning 24 threads). Are you running 6.0?

Thank you for these numbers, they make me much more hopeful about this
machine.

--
Jonathan



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Stefan Sperling
On Sat, Oct 29, 2016 at 06:08:37PM +0200, Jonathan Schleifer wrote:
> Hm, my main problem seems to be that whenever I decrypt something from the
> disk, all other 23 cores seem to get stalled.
> 
> So, would you recommend doing the following then:
> 
> * Have a partition for the main system on a softraid crypto
> * Have an unencrypted partition for the LDOMs
> * Do softraid crypto in every LDOM

I don't care about encrypting the host. It has no secrets.
Some of my guests boot from softraid crypto disks (see 'man boot_sparc64').

> Just out of curiosity, what read performance do you get in one of the LDOMs
> where you do use softraid crypto? 2 MB/s just seems too low, IMHO, when
> openssl speed can reach 5 times that.

On the host (single 3.5" SAS disk which came with the system, no softraid):

# dd if=/dev/rsd0c of=/dev/null bs=10m count=50 
50+0 records in
50+0 records out
524288000 bytes transferred in 8.658 secs (60551625 bytes/sec)
# dd if=/dev/rsd0c of=/dev/null bs=10m count=500 
500+0 records in
500+0 records out
5242880000 bytes transferred in 83.572 secs (62734555 bytes/sec)
# sysctl hw.ncpu  
hw.ncpu=2

In a guest which uses softraid crypto as its root disk:

# dd if=/dev/rsd0c of=/dev/null bs=10m count=50
50+0 records in
50+0 records out
524288000 bytes transferred in 11.481 secs (45663843 bytes/sec)
# dd if=/dev/rsd0c of=/dev/null bs=10m count=500
500+0 records in
500+0 records out
5242880000 bytes transferred in 128.997 secs (40643390 bytes/sec)
# sysctl hw.ncpu  
hw.ncpu=2

Guest disk image files were pre-allocated with dd from /dev/zero.



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Jonathan Schleifer
Hi,

> I run a T1000 which is segregated into a couple of LDOM guests (about 10).
> Some of the guests use softraid crypto inside. The host does not.

Yeah, I was planning on using LDOMs as well. However, since I wanted to put
this into a datacenter (at a cheap price, so it's fine that the hardware
isn't the most current), I wanted to encrypt as much as possible.

> Disk i/o is slow across the board, and when one guest upgrades (extracts
> sets) or makes builds, the other guests experience slowed down i/o as well.

Without the crypto, I get 30 - 40 MB/s. Granted, that's still slow for 10k RPM
drives. However, the RAID 1 is still rebuilding, so that might be why.

> It's fine for network-bound tasks but I would not run a database or mail
> server on it, if that's what you were planning to use this box for.

I was planning to use it for a small mail server (only my mail + my family's
mail), a web server, a git server and an XMPP server. So I guess that would
be a bad idea?

> In the near term I am planning to replace the disks with SSDs to see if
> that helps. This requires the 2.5" disk frame which isn't easy to track
> down, or some self-built disk frame which holds a 2.5" disk.

I guess I'm lucky then, as mine came with two 2.5" SAS disks.

> I've never used this system with just the host. The OpenBSD kernel is
> mostly giant-locked so having many CPUs for one kernel doesn't make sense.

Yeah, that was exactly my fear: the GKL stalling the entire system, which
seems to be what's happening here.

> If you're running your tests without any LDOM guests configured, your
> system is probably using something in the order of 32 CPUs. In which case
> reducing the number of CPUs on the host by assigning some CPUs to guests
> might help a bit. See 'man ldomctl' for details about setting up guests.

Hm, my main problem seems to be that whenever I decrypt something from the
disk, all other 23 cores seem to get stalled.

So, would you recommend doing the following then:

* Have a partition for the main system on a softraid crypto
* Have an unencrypted partition for the LDOMs
* Do softraid crypto in every LDOM

That would at least mean one system reading from disk will not stall all the
other systems.

Just out of curiosity, what read performance do you get in one of the LDOMs
where you do use softraid crypto? 2 MB/s just seems too low, IMHO, when
openssl speed can reach 5 times that.

--
Jonathan
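
(As a rough userland baseline for comparing against such dd numbers:
softraid crypto uses AES-XTS, which the stock "openssl speed" command does
not measure directly, but plain AES figures still show roughly what one
core can encrypt. A minimal sketch:)

$ openssl speed aes-256-cbc

(If the per-core throughput reported there is well below the raw disk rate,
the cipher, not the disk, is the bottleneck.)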



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Stefan Sperling
On Sat, Oct 29, 2016 at 05:12:51PM +0200, Jonathan Schleifer wrote:
> Another thing I noticed:
> 
> When running dd if=/dev/zero of=foo bs=65536, my SSH connection gets extremely
> laggy. If I open 4 more in parallel, all go down to KB/s of writes, and SSH
> becomes unusable. Now unusable as in things need forever to start. Unusable as
> in I press a key and it takes forever to print that letter.
> 
> Is this supposed to be like this, or is something here seriously wrong? Is
> OpenBSD only using a single core for the kernel, and thus once that core is
> busy, the entire system starts to become unusably slow?

I run a T1000 which is segregated into a couple of LDOM guests (about 10).
Some of the guests use softraid crypto inside. The host does not.

Disk i/o is slow across the board, and when one guest upgrades (extracts
sets) or makes builds, the other guests experience slowed down i/o as well.

It's fine for network-bound tasks but I would not run a database or mail
server on it, if that's what you were planning to use this box for.
In the near term I am planning to replace the disks with SSDs to see if
that helps. This requires the 2.5" disk frame which isn't easy to track
down, or some self-built disk frame which holds a 2.5" disk.

I've never used this system with just the host. The OpenBSD kernel is
mostly giant-locked so having many CPUs for one kernel doesn't make sense.
If you're running your tests without any LDOM guests configured, your
system is probably using something in the order of 32 CPUs. In which case
reducing the number of CPUs on the host by assigning some CPUs to guests
might help a bit. See 'man ldomctl' for details about setting up guests.
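
(For reference, a minimal sketch of the kind of ldom.conf this implies;
the guest name, vcpu counts and memory sizes below are made up, see
ldom.conf(5) and ldomctl(8) for the details:)

domain "primary" {
	vcpu 8
	memory 4G
}

domain "guest1" {
	vcpu 8
	memory 4G
	vdisk "/home/guest1/vdisk0"
	vnet
}

(Saved as e.g. /etc/ldom.conf and activated with 'ldomctl init-system
/etc/ldom.conf' plus a reboot, this would leave 8 vcpus for the host and
hand 8 to the guest.)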



Re: softraid crypto performance on Sun Fire T1000

2016-10-29 Thread Jonathan Schleifer
Another thing I noticed:

When running dd if=/dev/zero of=foo bs=65536, my SSH connection gets extremely
laggy. If I open 4 more in parallel, all go down to KB/s of writes, and SSH
becomes unusable. Now unusable as in things need forever to start. Unusable as
in I press a key and it takes forever to print that letter.

Is this supposed to be like this, or is something here seriously wrong? Is
OpenBSD only using a single core for the kernel, and thus once that core is
busy, the entire system starts to become unusably slow?

--
Jonathan



heap full Re: Softraid Keydisk reboot loop

2015-12-27 Thread Thomas Bohl
Am 26.12.2015 um 23:18 schrieb Alexander Hall:
> On Sat, Dec 26, 2015 at 10:41:34PM +0100, Thomas Bohl wrote:
>> Hello,
>>
>> I updated from 5.8-stable to current today. (First just an update, then
>> because of the problem a fresh installation.) On 5.8-stable I had a
>> working softraid boot setup with a USB stick as keydisk.
>>
>> Now, if the keydisk is plugged in, the machine resets over and over
>> again. Unfortunately there is nothing shown on screen to present here.
>> When the bootloader should show up there is just a beep sound (like when
>> the machine is powered on) and then the BIOS comes again.
> 
> I'd say it seems your system is trying to boot off the keydisk. Make sure
> fdisk shows no flagged partition, or remove the flag by
> 
> fdisk:*1> flag 3
> Partition 3 marked active.
> fdisk:*1> flag 3 0
> Partition 3 flag value set to 0x0.
> 
> By then, 'p' should show no partition with an asterisk before it.
> 
> /Alexander

Thanks. Unfortunately that didn't do the trick.

I was able to get more information by reducing the number of hard disks
and taking a video :-).

One disk:
System boots normally

Two disks:
booting sr0a:/bsd: 6823756heap full (0x9fba0+16384)
Screenshot http://s30.postimg.org/894owvh41/image.jpg
System resets

Three disks:
booting sr0a:/bsd: 6823756heap full (0x9fd98+16384)
Screenshot http://s14.postimg.org/3ty4m62lt/image.jpg
System resets

Four disks:
Black screen after BIOS
System resets



Re: Softraid Keydisk reboot loop

2015-12-26 Thread Alexander Hall
On Sat, Dec 26, 2015 at 10:41:34PM +0100, Thomas Bohl wrote:
> Hello,
> 
> I updated from 5.8-stable to current today. (First just an update, then
> because of the problem a fresh installation.) On 5.8-stable I had a
> working softraid boot setup with a USB stick as keydisk.
> 
> Now, if the keydisk is plugged in, the machine resets over and over
> again. Unfortunately there is nothing shown on screen to present here.
> When the bootloader should show up there is just a beep sound (like when
> the machine is powered on) and then the BIOS comes again.

I'd say it seems your system is trying to boot off the keydisk. Make sure
fdisk shows no flagged partition, or remove the flag by

fdisk:*1> flag 3
Partition 3 marked active.
fdisk:*1> flag 3 0
Partition 3 flag value set to 0x0.

By then, 'p' should show no partition with an asterisk before it.

/Alexander



Re: Softraid-Crypto: Installation not possible

2015-11-14 Thread Stefan Sperling
On Sun, Nov 15, 2015 at 12:22:09AM +0100, Stefan Wollny wrote:
> What is the problem? I have downloaded the 'install58.iso'-file
> (amd64-current) and burned the disk to start from. dmesg recognizes the
> three media and reports them as 'sd0' (=m.2-SSD), 'sd1' (SATA-SSD) and 'sd2'
> (USB-stick). I can start from the CD and hit 's' at the prompt.
> 
> < Transcription from here on >
> # fdisk -iy sd0
> Writing MBR at offset0
> # fdisk -iy sd1
> fdisk: sd2: No such file or directory
> # fdisk -iy sd2
> fdisk: sd2: No such file or directory
> < End of transcription >

It looks like you need to create device nodes:

 cd /dev
 sh MAKEDEV sd1
 sh MAKEDEV sd2

The install script creates them behind the scenes when
it asks questions about disks. By default only a few device
nodes exist in the ramdisk.
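
(A quick way to confirm the nodes exist before retrying - the device names
below assume the disks really probed as sd1 and sd2:)

 # ls -l /dev/sd1c /dev/rsd1c /dev/sd2c /dev/rsd2c
 # fdisk -iy sd1
 # fdisk -iy sd2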



Re: softraid(4)/bioctl(8) vs. non-512-byte sectors disks

2015-10-09 Thread Tobias Ulmer
On Thu, Oct 08, 2015 at 08:42:14AM -0400, Kenneth R Westerback wrote:
> ...

It works fine, I'm exercising 4K sr crypto with rsync every night.

Commit it pretty please :) The remaining bugs don't find themselves



Re: softraid(4)/bioctl(8) vs. non-512-byte sectors disks

2015-10-08 Thread Marcus MERIGHI
kwesterb...@gmail.com (Kenneth Westerback), 2014.03.19 (Wed) 17:09 (CET):
> Alas, softraid only supports 512 byte block devices at the moment.
>  Ken

Any news on this one? No answer, as always, means 'no'. 

I saw plus58.html:
* Use DEV_BSIZE instead of 512 where appropriate in the kernel. This
  starts laying the groundwork to allow disks with other sector sizes.


Just asking because some time has gone by and krw@ thought it was a
pity [0]: 

[0] http://dictionary.reference.com/browse/alas
used as an exclamation to express sorrow, grief, pity, concern, or
apprehension of evil.

Thanks+Bye, Marcus

> On Mar 19, 2014 11:36 AM, "Marcus MERIGHI"  wrote:
> 
> > Reference:
> > ``Softraid 3TB Problems''
> > http://marc.info/?l=openbsd-misc&m=136225193931620
> >
> > Difference:
> > My HDDs show up as 4096 bytes/sector in dmesg.
> >
> > Short:
> > Are there any options for disks that come with 4096 bytes/sector to use
> > with softraid(4)/bioctl(8)?
> >
> > Long:
> >
> > So I got these lovely large disks:
> >
> > DMESG (full one at the end):
> >
> > umass4 at uhub5 port 4 configuration 1 interface 0 "Intenso USB 3.0
> >   Device" rev 2.10/1.00 addr 9
> > umass4: using SCSI over Bulk-Only
> > scsibus5 at umass4: 2 targets, initiator 0
> > sd5 at scsibus5 targ 1 lun 0:  SCSI4
> >   0/direct fixed serial.174c55aa22DF
> > sd5: 2861588MB, 4096 bytes/sector, 732566646 sectors
> > 
> > I suppose right above is my problem?
> >
> > FDISK:
> >
> > Disk: sd5   geometry: 45600/255/63 [732566646 4096-byte Sectors]
> > Offset: 0   Signature: 0xAA55
> > Starting Ending LBA Info:
> >  #: id  C   H   S -  C   H   S [   start:size ]
> >
> >
> > -----------------------------------------------------------------------
> >  0: 00  0   0   0 -  0   0   0 [   0:   0 ]
> >   unused
> >  1: 00  0   0   0 -  0   0   0 [   0:   0 ]
> >   unused
> >  2: 00  0   0   0 -  0   0   0 [   0:   0 ]
> >   unused
> > *3: A6  0   1   2 -  45599 254  63 [  64:   732563936 ]
> >   OpenBSD
> >
> > DISKLABEL:
> >
> > # /dev/rsd5c:
> > type: SCSI
> > disk: SCSI disk
> > label: whoknows
> > duid: 470974d3647801b8
> > flags:
> > bytes/sector: 4096
> > sectors/track: 63
> > tracks/cylinder: 255
> > sectors/cylinder: 16065
> > cylinders: 45600
> > total sectors: 732566646
> > boundstart: 64
> > boundend: 732564000
> > drivedata: 0
> >
> > 16 partitions:
> > #size   offset  fstype [fsize bsize  cpg]
> >   a:732563936   64RAID
> >   c:7325666460  unused
> >
> > BIOCTL output
> >
> > $ sudo bioctl -h -v -c C -l /dev/sd3a softraid0
> > softraid0: sd3a has unsupported sector size (4096)
> > softraid0: invalid metadata format
> >
> > Thanks in advance, Marcus
> >
> > DMESG FULL:
> > This is -current with a patch from brad@ to get the NICs (re) working.
> >
> > OpenBSD 5.5-current (GENERIC.MP) #3: Tue Mar 11 14:18:33 CET 2014
> > r...@fofo.fifi.at:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > real mem = 4161052672 (3968MB)
> > avail mem = 4041580544 (3854MB)
> > mainbus0 at root
> > bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb530 (73 entries)
> > bios0: vendor American Megatrends Inc. version "1.03" date 08/09/2013
> > bios0: Shuttle Inc. DS47D
> > acpi0 at bios0: rev 2
> > acpi0: sleep states S0 S3 S4 S5
> > acpi0: tables DSDT FACP APIC FPDT MCFG SLIC HPET SSDT SSDT SSDT
> > acpi0: wakeup devices P0P1(S4) USB1(S3) USB2(S3) USB3(S3) USB4(S3)
> > USB5(S3) USB6(S3) USB7(S3) PXSX(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4)
> > RP03(S4) PXSX(S4) RP04(S4) [...]
> > acpitimer0 at acpi0: 3579545 Hz, 24 bits
> > acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> > cpu0 at mainbus0: apid 0 (boot processor)
> > cpu0: Intel(R) Celeron(R) CPU 847 @ 1.10GHz, 1097.67 MHz
> > cpu0:
> >
> >
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS
> H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX
> ,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,XSAVE
> ,NXE,LONG,LAHF,PERF,ITSC
> > cpu0: 256KB 64b/line 8-way L2 cache
> > cpu0: smt 0, core 0, package 0
> > mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
> > cpu0: apic clock running at 99MHz
> > cpu0: mwait min=64, max=64, C-substates=0.2.1.1.2, IBE
> > cpu1 at mainbus0: apid 2 (application processor)
> > cpu1: Intel(R) Celeron(R) CPU 847 @ 1.10GHz, 1097.51 MHz
> > cpu1:
> >
> >
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS
> H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX
> ,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,XSAVE
> ,NXE,LONG,LAHF,PERF,ITSC
> > cpu1: 256KB 64b/line 8-way L2 cache
> > cpu1: smt 0, core 1, package 0
> > ioapic0 at mainbus0: apid 2 pa 

Re: softraid(4)/bioctl(8) vs. non-512-byte sectors disks

2015-10-08 Thread Marcus MERIGHI
mcmer-open...@tor.at (Marcus MERIGHI), 2015.10.08 (Thu) 12:26 (CEST):
> kwesterb...@gmail.com (Kenneth Westerback), 2014.03.19 (Wed) 17:09 (CET):
> > Alas, softraid only supports 512 byte block devices at the moment.
> >  Ken
> 
> Any news on this one? No answer, as always, means 'no'. 

After reading the commit log for softraid I am pretty sure the answer
is 'no'. Therefore I have a) a patch for softraid(4) and b) another
question. 

Question: searching for large (>1TB) HDDs I found there's '512e' [1]. Is
this enough for softraid to work? The Wikipedia article reads well; I just
want to make sure.

Index: softraid.4
===
RCS file: /cvs/src/share/man/man4/softraid.4,v
retrieving revision 1.41
diff -u -p -u -r1.41 softraid.4
--- softraid.4  14 Apr 2015 19:10:13 -  1.41
+++ softraid.4  8 Oct 2015 10:53:25 -
@@ -208,3 +208,5 @@ due to component failure.
 RAID is
 .Em not
 a substitute for good backup practices.
+.Pp
+Only disks with 512 bytes per sector are supported.

[1] https://en.wikipedia.org/wiki/Advanced_Format#512e

Bye, Marcus

> I saw plus58.html:
> * Use DEV_BSIZE instead of 512 where appropriate in the kernel. This
>   starts laying the groundwork to allow disks with other sector sizes.
> 
> 
> Just asking because some time has gone by and krw@ thought it was a
> pity [0]: 
> 
> [0] http://dictionary.reference.com/browse/alas
> used as an exclamation to express sorrow, grief, pity, concern, or
> apprehension of evil.
> 
> Thanks+Bye, Marcus
> 
> > On Mar 19, 2014 11:36 AM, "Marcus MERIGHI"  wrote:
> > 
> > > Reference:
> > > ``Softraid 3TB Problems''
> > > http://marc.info/?l=openbsd-misc&m=136225193931620
> > >
> > > Difference:
> > > My HDDs show up as 4096 bytes/sector in dmesg.
> > >
> > > Short:
> > > Are there any options for disks that come with 4096 bytes/sector to use
> > > with softraid(4)/bioctl(8)?
> > >
> > > Long:
> > >
> > > So I got these lovely large disks:
> > >
> > > DMESG (full one at the end):
> > >
> > > umass4 at uhub5 port 4 configuration 1 interface 0 "Intenso USB 3.0
> > >   Device" rev 2.10/1.00 addr 9
> > > umass4: using SCSI over Bulk-Only
> > > scsibus5 at umass4: 2 targets, initiator 0
> > > sd5 at scsibus5 targ 1 lun 0:  SCSI4
> > >   0/direct fixed serial.174c55aa22DF
> > > sd5: 2861588MB, 4096 bytes/sector, 732566646 sectors
> > > 
> > > I suppose right above is my problem?
> > >
> > > FDISK:
> > >
> > > Disk: sd5   geometry: 45600/255/63 [732566646 4096-byte Sectors]
> > > Offset: 0   Signature: 0xAA55
> > > Starting Ending LBA Info:
> > >  #: id  C   H   S -  C   H   S [   start:size ]
> > >
> > >
> > > -----------------------------------------------------------------------
> > >  0: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > >  1: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > >  2: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > > *3: A6  0   1   2 -  45599 254  63 [  64:   732563936 ]
> > >   OpenBSD
> > >
> > > DISKLABEL:
> > >
> > > # /dev/rsd5c:
> > > type: SCSI
> > > disk: SCSI disk
> > > label: whoknows
> > > duid: 470974d3647801b8
> > > flags:
> > > bytes/sector: 4096
> > > sectors/track: 63
> > > tracks/cylinder: 255
> > > sectors/cylinder: 16065
> > > cylinders: 45600
> > > total sectors: 732566646
> > > boundstart: 64
> > > boundend: 732564000
> > > drivedata: 0
> > >
> > > 16 partitions:
> > > #size   offset  fstype [fsize bsize  cpg]
> > >   a:732563936   64RAID
> > >   c:7325666460  unused
> > >
> > > BIOCTL output
> > >
> > > $ sudo bioctl -h -v -c C -l /dev/sd3a softraid0
> > > softraid0: sd3a has unsupported sector size (4096)
> > > softraid0: invalid metadata format
> > >
> > > Thanks in advance, Marcus
> > >
> > > DMESG FULL:
> > > This is -current with a patch from brad@ to get the NICs (re) working.
> > >
> > > OpenBSD 5.5-current (GENERIC.MP) #3: Tue Mar 11 14:18:33 CET 2014
> > > r...@fofo.fifi.at:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > > real mem = 4161052672 (3968MB)
> > > avail mem = 4041580544 (3854MB)
> > > mainbus0 at root
> > > bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb530 (73 entries)
> > > bios0: vendor American Megatrends Inc. version "1.03" date 08/09/2013
> > > bios0: Shuttle Inc. DS47D
> > > acpi0 at bios0: rev 2
> > > acpi0: sleep states S0 S3 S4 S5
> > > acpi0: tables DSDT FACP APIC FPDT MCFG SLIC HPET SSDT SSDT SSDT
> > > acpi0: wakeup devices P0P1(S4) USB1(S3) USB2(S3) USB3(S3) USB4(S3)
> > > USB5(S3) USB6(S3) USB7(S3) PXSX(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4)
> > > RP03(S4) PXSX(S4) RP04(S4) [...]
> > > acpitimer0 at acpi0: 3579545 Hz, 24 bits
> > > acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> > > 

Re: softraid(4)/bioctl(8) vs. non-512-byte sectors disks

2015-10-08 Thread Kenneth R Westerback
On 10/08, Marcus MERIGHI wrote:
> kwesterb...@gmail.com (Kenneth Westerback), 2014.03.19 (Wed) 17:09 (CET):
> > Alas, softraid only supports 512 byte block devices at the moment.
> >  Ken
> 
> Any news on this one? No answer, as always, means 'no'. 
> 
> I saw plus58.html:
> * Use DEV_BSIZE instead of 512 where appropriate in the kernel. This
>   starts laying the groundwork to allow disks with other sector sizes.
> 
> 
> Just asking because some time has gone by and krw@ thought it was a
> pity [0]: 
> 
> [0] http://dictionary.reference.com/browse/alas
> used as an exclamation to express sorrow, grief, pity, concern, or
> apprehension of evil.
> 
> Thanks+Bye, Marcus
> 

The 4K problem was 'solved' at c2k15 but unfortunately has not been
committed yet. The most likely diff is below. More expressions of
interest in getting it committed (especially if accompanied by test
reports) would help. Of course finding bugs would be good too!

 Ken

> > On Mar 19, 2014 11:36 AM, "Marcus MERIGHI"  wrote:
> > 
> > > Reference:
> > > ``Softraid 3TB Problems''
> > > http://marc.info/?l=openbsd-misc&m=136225193931620
> > >
> > > Difference:
> > > My HDDs show up as 4096 bytes/sector in dmesg.
> > >
> > > Short:
> > > Are there any options for disks that come with 4096 bytes/sector to use
> > > with softraid(4)/bioctl(8)?
> > >
> > > Long:
> > >
> > > So I got these lovely large disks:
> > >
> > > DMESG (full one at the end):
> > >
> > > umass4 at uhub5 port 4 configuration 1 interface 0 "Intenso USB 3.0
> > >   Device" rev 2.10/1.00 addr 9
> > > umass4: using SCSI over Bulk-Only
> > > scsibus5 at umass4: 2 targets, initiator 0
> > > sd5 at scsibus5 targ 1 lun 0:  SCSI4
> > >   0/direct fixed serial.174c55aa22DF
> > > sd5: 2861588MB, 4096 bytes/sector, 732566646 sectors
> > > 
> > > I suppose right above is my problem?
> > >
> > > FDISK:
> > >
> > > Disk: sd5   geometry: 45600/255/63 [732566646 4096-byte Sectors]
> > > Offset: 0   Signature: 0xAA55
> > > Starting Ending LBA Info:
> > >  #: id  C   H   S -  C   H   S [   start:size ]
> > >
> > >
> > > -----------------------------------------------------------------------
> > >  0: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > >  1: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > >  2: 00  0   0   0 -  0   0   0 [   0:   0 ]
> > >   unused
> > > *3: A6  0   1   2 -  45599 254  63 [  64:   732563936 ]
> > >   OpenBSD
> > >
> > > DISKLABEL:
> > >
> > > # /dev/rsd5c:
> > > type: SCSI
> > > disk: SCSI disk
> > > label: whoknows
> > > duid: 470974d3647801b8
> > > flags:
> > > bytes/sector: 4096
> > > sectors/track: 63
> > > tracks/cylinder: 255
> > > sectors/cylinder: 16065
> > > cylinders: 45600
> > > total sectors: 732566646
> > > boundstart: 64
> > > boundend: 732564000
> > > drivedata: 0
> > >
> > > 16 partitions:
> > > #size   offset  fstype [fsize bsize  cpg]
> > >   a:732563936   64RAID
> > >   c:7325666460  unused
> > >
> > > BIOCTL output
> > >
> > > $ sudo bioctl -h -v -c C -l /dev/sd3a softraid0
> > > softraid0: sd3a has unsupported sector size (4096)
> > > softraid0: invalid metadata format
> > >
> > > Thanks in advance, Marcus
> > >

This is the diff that makes softraid of all kinds work on disks
with non-512-byte sectors. In fact it allows softraid volumes
to be constructed out of disks with different sector sizes. It will
present the constructed volume as having a sector size equal to the
largest sector of the devices used to construct the softraid volume.

Tests would be appreciated! As would ok's.

I managed to install a snapshot on a 4K device with an encrypted disk.
I couldn't boot it, but if anybody has a BIOS that will boot a 4K drive,
that would be an excellent test.
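
(For anyone wanting to send such a test report, a minimal recipe against a
spare 4K-sector disk; the names assume the disk attaches as sd1 and the
resulting crypto volume as sd2:)

# fdisk -iy sd1
# disklabel -E sd1        (add an 'a' partition with fstype RAID)
# bioctl -c C -l /dev/sd1a softraid0
# fdisk -iy sd2
# disklabel -E sd2        (add an 'a' partition, fstype 4.2BSD)
# newfs sd2a
# mount /dev/sd2a /mnt    (then exercise it with dd, rsync, etc.)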

Index: softraid.c
===
RCS file: /cvs/src/sys/dev/softraid.c,v
retrieving revision 1.360
diff -u -p -r1.360 softraid.c
--- softraid.c  21 Jul 2015 03:30:51 -  1.360
+++ softraid.c  21 Jul 2015 04:02:14 -
@@ -944,6 +944,7 @@ sr_meta_validate(struct sr_discipline *s
 */
if (sm->ssd_data_blkno == 0)
sm->ssd_data_blkno = SR_META_V3_DATA_OFFSET;
+   sm->ssdi.ssd_secsize = DEV_BSIZE;
 
} else if (sm->ssdi.ssd_version == 4) {
 
@@ -953,14 +954,22 @@ sr_meta_validate(struct sr_discipline *s
 */
if (sm->ssd_data_blkno == 0)
sm->ssd_data_blkno = SR_DATA_OFFSET;
+   sm->ssdi.ssd_secsize = DEV_BSIZE;
 
-   } else if (sm->ssdi.ssd_version == SR_META_VERSION) {
+   } else if (sm->ssdi.ssd_version == 5) {
 
/*
 * Version 5 - variable 

Re: softraid(4)/bioctl(8) vs. non-512-byte sectors disks

2015-10-08 Thread Kenneth Westerback
On 8 October 2015 at 07:13, Marcus MERIGHI  wrote:
> mcmer-open...@tor.at (Marcus MERIGHI), 2015.10.08 (Thu) 12:26 (CEST):
>> kwesterb...@gmail.com (Kenneth Westerback), 2014.03.19 (Wed) 17:09 (CET):
>> > Alas, softraid only supports 512 byte block devices at the moment.
>> >  Ken
>>
>> Any news on this one? No answer, as always, means 'no'.
>
> After reading the commit log for softraid I am pretty sure the answer
> is 'no'. Therefore I have a) a patch for softraid(4) and b) another
> question.
>
> Question: searching for large (>1TB) HDDs I found there's '512e' [1]. Is
> this enough for softraid to work? The Wikipedia article reads well; I just
> want to make sure.

Yes, disks that are 4K internally but present as 512-byte devices work fine.

 Ken

>
> Index: softraid.4
> ===
> RCS file: /cvs/src/share/man/man4/softraid.4,v
> retrieving revision 1.41
> diff -u -p -u -r1.41 softraid.4
> --- softraid.4  14 Apr 2015 19:10:13 -  1.41
> +++ softraid.4  8 Oct 2015 10:53:25 -
> @@ -208,3 +208,5 @@ due to component failure.
>  RAID is
>  .Em not
>  a substitute for good backup practices.
> +.Pp
> +Only disks with 512 bytes per sector are supported.
>
> [1] https://en.wikipedia.org/wiki/Advanced_Format#512e
>
> Bye, Marcus
>
>> I saw plus58.html:
>> * Use DEV_BSIZE instead of 512 where appropriate in the kernel. This
>>   starts laying the groundwork to allow disks with other sector sizes.
>>
>>
>> Just asking because some time has gone by and krw@ thought it was a
>> pity [0]:
>>
>> [0] http://dictionary.reference.com/browse/alas
>> used as an exclamation to express sorrow, grief, pity, concern, or
>> apprehension of evil.
>>
>> Thanks+Bye, Marcus
>>
>> > On Mar 19, 2014 11:36 AM, "Marcus MERIGHI"  wrote:
>> >
>> > > Reference:
>> > > ``Softraid 3TB Problems''
>> > > http://marc.info/?l=openbsd-misc&m=136225193931620
>> > >
>> > > Difference:
>> > > My HDDs show up as 4096 bytes/sector in dmesg.
>> > >
>> > > Short:
>> > > Are there any options for disks that come with 4096 bytes/sector to use
>> > > with softraid(4)/bioctl(8)?
>> > >
>> > > Long:
>> > >
>> > > So I got these lovely large disks:
>> > >
>> > > DMESG (full one at the end):
>> > >
>> > > umass4 at uhub5 port 4 configuration 1 interface 0 "Intenso USB 3.0
>> > >   Device" rev 2.10/1.00 addr 9
>> > > umass4: using SCSI over Bulk-Only
>> > > scsibus5 at umass4: 2 targets, initiator 0
>> > > sd5 at scsibus5 targ 1 lun 0:  SCSI4
>> > >   0/direct fixed serial.174c55aa22DF
>> > > sd5: 2861588MB, 4096 bytes/sector, 732566646 sectors
>> > > 
>> > > I suppose right above is my problem?
>> > >
>> > > FDISK:
>> > >
>> > > Disk: sd5   geometry: 45600/255/63 [732566646 4096-byte Sectors]
>> > > Offset: 0   Signature: 0xAA55
>> > > Starting Ending LBA Info:
>> > >  #: id  C   H   S -  C   H   S [   start:size ]
>> > >
>> > >
>> > > -----------------------------------------------------------------------
>> > >  0: 00  0   0   0 -  0   0   0 [   0:   0 ]
>> > >   unused
>> > >  1: 00  0   0   0 -  0   0   0 [   0:   0 ]
>> > >   unused
>> > >  2: 00  0   0   0 -  0   0   0 [   0:   0 ]
>> > >   unused
>> > > *3: A6  0   1   2 -  45599 254  63 [  64:   732563936 ]
>> > >   OpenBSD
>> > >
>> > > DISKLABEL:
>> > >
>> > > # /dev/rsd5c:
>> > > type: SCSI
>> > > disk: SCSI disk
>> > > label: whoknows
>> > > duid: 470974d3647801b8
>> > > flags:
>> > > bytes/sector: 4096
>> > > sectors/track: 63
>> > > tracks/cylinder: 255
>> > > sectors/cylinder: 16065
>> > > cylinders: 45600
>> > > total sectors: 732566646
>> > > boundstart: 64
>> > > boundend: 732564000
>> > > drivedata: 0
>> > >
>> > > 16 partitions:
>> > > #size   offset  fstype [fsize bsize  cpg]
>> > >   a:732563936   64RAID
>> > >   c:7325666460  unused
>> > >
>> > > BIOCTL output
>> > >
>> > > $ sudo bioctl -h -v -c C -l /dev/sd3a softraid0
>> > > softraid0: sd3a has unsupported sector size (4096)
>> > > softraid0: invalid metadata format
>> > >
>> > > Thanks in advance, Marcus
>> > >
>> > > DMESG FULL:
>> > > This is -current with a patch from brad@ to get the NICs (re) working.
>> > >
>> > > OpenBSD 5.5-current (GENERIC.MP) #3: Tue Mar 11 14:18:33 CET 2014
>> > > r...@fofo.fifi.at:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>> > > real mem = 4161052672 (3968MB)
>> > > avail mem = 4041580544 (3854MB)
>> > > mainbus0 at root
>> > > bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb530 (73 entries)
>> > > bios0: vendor American Megatrends Inc. version "1.03" date 08/09/2013
>> > > bios0: Shuttle Inc. DS47D
>> > > acpi0 at bios0: rev 2
>> > > acpi0: sleep states S0 S3 S4 S5
>> > > acpi0: tables DSDT FACP APIC FPDT MCFG SLIC HPET 

Re: Softraid crypto - howto mount partitions from multiple devices at boot?

2015-07-17 Thread Maurice McCarthy
On Thu, Jul 16, 2015 at 11:17:16PM +0200 or thereabouts, Jan Vlach wrote:
> Hello misc,
>
> I have a small netbook with two flash devices - 32G and 4G. I'm using
> softraid crypto discipline with passphrase on the 32G one. That works
> fine.
>
> I would also like to use softraid crypto on the second (4G) device and
> have it also mounted at boot.
>
> How to achieve this? What is current best practice? Can the passphrase
> be shared between the devices or should a key device be used?
>
> I've checked the manual pages for softraid and bioctl, but I don't
> understand what is supposed to be a keydisk (partition? device holding a
> binary file to be used as a key? )
>
> Thank you for clues
>
> Jan
 

Is this any help?
http://undeadly.org/cgi?action=article&sid=20131112031806
M
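
(On the keydisk question: a keydisk is simply a small RAID partition on a
second device whose random contents take the place of the passphrase. A
rough sketch, assuming the data partition is sd0a and the USB stick's RAID
partition is sd2a; see bioctl(8):)

# bioctl -c C -l /dev/sd0a -k /dev/sd2a softraid0

(The volume then attaches at boot without a passphrase prompt, as long as
the keydisk is plugged in.)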



Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Jan Stary
On Jun 11 19:47:43, nothingn...@citycable.ch wrote:
> Hi misc@
>
> I've got a couple of softraid 1 volumes on a server and the /home one was
> filling up a bit too much so I had to delete a bunch of isos and other non
> necessary items. I did this yesterday and it still hasn't cleared the disk
> space completely yet:
>
> # bioctl -ih sd2
> Volume      Status           Size Device
> softraid0 0 Online           910G sd2     RAID1
>           0 Online           910G 0:0.0   noencl sd0d
>           1 Online           910G 0:1.0   noencl sd1d
>
> # df -kh
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/sd3a     19.7G    5.6G   13.0G    30%    /
> /dev/sd2a      906G    859G    1.2G   100%    /home
>
> /home # du -sh
> 782G    .
>
> So there's a rather large disparity there!

The numbers reported by df(1) and du(1) mean different things.
As an extreme example, create a huge empty filesystem.
The du(1) size will be zero of course, but df(1) will
show you Avail noticeably smaller than Size.

How was sd2a created (newfs) and how is it mounted (mount -v)?

I don't think you have a softraid problem.
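
(Both questions can be answered from the live system; a quick sketch,
assuming the filesystem is still mounted as above:)

$ mount -v | grep sd2a
$ dumpfs /dev/rsd2a | head

(dumpfs(8) prints the parameters the filesystem was created with, so the
original newfs options don't need to be remembered.)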



Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Jan Stary
> # df -kh
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/sd2a      906G    859G    1.2G   100%    /home

$ cd
$ sync ; sync ; sync ; df -h
$ dd if=/dev/zero of=file bs=1m count=1024
$ du -h file
$ sync ; sync ; sync ; df -h

Did the Avail space drop by 1G?

$ rm -f file
$ sync ; sync ; sync ; df -h

Did the Avail space rise by 1G again?



Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Joel Sing
On Saturday 13 June 2015, Joel Sing wrote:
> On Friday 12 June 2015, Noth wrote:
>> Hi misc@
>>
>> I've got a couple of softraid 1 volumes on a server and the /home one
>> was filling up a bit too much so I had to delete a bunch of isos and
>> other non necessary items. I did this yesterday and it still hasn't
>> cleared the disk space completely yet:
>>
>> # bioctl -ih sd2
>> Volume      Status           Size Device
>> softraid0 0 Online           910G sd2     RAID1
>>           0 Online           910G 0:0.0   noencl sd0d
>>           1 Online           910G 0:1.0   noencl sd1d
>>
>> # df -kh
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/sd3a     19.7G    5.6G   13.0G    30%    /
>> /dev/sd2a      906G    859G    1.2G   100%    /home
>>
>> /home # du -sh
>> 782G    .
>>
>> So there's a rather large disparity there! I've tried issuing sync
>> commands a few times over the past 24 hours but to no avail :
>
> This has nothing to do with softraid - softraid is just a virtual HBA that
> provides a SCSI block device, which you've then put a (presumably FFS) file
> system on top of.

Okay, I'll soften this slightly - if you are actually using softdeps you may
be encountering an issue with softraid that is slowing/delaying the
background processing... I'll see if I can reproduce it here once you
confirm.

>> Is there any solution to this apart from waiting for days on end?
>
> You did not actually give the output from mount(8), however I'm going to
> guess - stop using softdeps?
-- 

Action without study is fatal. Study without action is futile.
-- Mary Ritter Beard



Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Noth

On 12/06/15 18:11, Joel Sing wrote:

> On Saturday 13 June 2015, Joel Sing wrote:
>> On Friday 12 June 2015, Noth wrote:
>>> Hi misc@
>>>
>>> I've got a couple of softraid 1 volumes on a server and the /home one
>>> was filling up a bit too much so I had to delete a bunch of isos and
>>> other non necessary items. I did this yesterday and it still hasn't
>>> cleared the disk space completely yet:
>>>
>>> # bioctl -ih sd2
>>> Volume      Status           Size Device
>>> softraid0 0 Online           910G sd2     RAID1
>>>           0 Online           910G 0:0.0   noencl sd0d
>>>           1 Online           910G 0:1.0   noencl sd1d
>>>
>>> # df -kh
>>> Filesystem     Size    Used   Avail Capacity  Mounted on
>>> /dev/sd3a     19.7G    5.6G   13.0G    30%    /
>>> /dev/sd2a      906G    859G    1.2G   100%    /home
>>>
>>> /home # du -sh
>>> 782G    .
>>>
>>> So there's a rather large disparity there! I've tried issuing sync
>>> commands a few times over the past 24 hours but to no avail :
>>
>> This has nothing to do with softraid - softraid is just a virtual HBA that
>> provides a SCSI block device, which you've then put a (presumably FFS) file
>> system on top of.
>
> Okay, I'll soften this slightly - if you are actually using softdeps you may
> be encountering an issue with softraid that is slowing/delaying the
> background processing... I'll see if I can reproduce it here once you
> confirm.
>
>>> Is there any solution to this apart from waiting for days on end?
>>
>> You did not actually give the output from mount(8), however I'm going to
>> guess - stop using softdeps?

That mail didn't make it through for some reason. Here's the mount -v:

/dev/sd2a on /home type ffs (rw, NFS exported, local, nodev, nosuid, 
softdep, ctime=Thu Jun 11 23:51:13 2015)

Also this partition is shared via NFS, SMBFS & AFP. I tried last night
turning all of those off and issuing sync multiple times to no avail.
I ended up rebooting but that's not a workable solution. Let me try
remounting without softdeps and running the 1G file test again (that did
clear... after 20 mins).
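
(A minimal sketch of making that change permanent, assuming the /home line
in /etc/fstab currently ends in ',softdep':)

# umount /home
# vi /etc/fstab           (drop 'softdep' from the /dev/sd2a /home line)
# mount /home
$ mount -v | grep /home   (should no longer list softdep)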




Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Noth

On 12/06/15 14:57, Jan Stary wrote:

> On Jun 11 19:47:43, nothingn...@citycable.ch wrote:
>> Hi misc@
>>
>> I've got a couple of softraid 1 volumes on a server and the /home one was
>> filling up a bit too much so I had to delete a bunch of isos and other non
>> necessary items. I did this yesterday and it still hasn't cleared the disk
>> space completely yet:
>>
>> # bioctl -ih sd2
>> Volume      Status           Size Device
>> softraid0 0 Online           910G sd2     RAID1
>>           0 Online           910G 0:0.0   noencl sd0d
>>           1 Online           910G 0:1.0   noencl sd1d
>>
>> # df -kh
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/sd3a     19.7G    5.6G   13.0G    30%    /
>> /dev/sd2a      906G    859G    1.2G   100%    /home
>>
>> /home # du -sh
>> 782G    .
>>
>> So there's a rather large disparity there!
>
> The numbers reported by df(1) and du(1) mean different things.
> As an extreme example, create a huge empty filesystem.
> The du(1) size will be zero of course, but df(1) will
> show you Avail noticeably smaller than Size.
>
> How was sd2a created (newfs) and how is it mounted (mount -v)?
>
> I don't think you have a softraid problem.



/dev/sd2a on /home type ffs (rw, NFS exported, local, nodev, nosuid, 
softdep, ctime=Thu Jun 11 23:51:13 2015)


newfs formatted via the installer... this was back around 5.0 or 5.1 so 
I'm not 100% sure of the options used, if any.




Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Noth

[Fri Jun 12 16:59:18] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    819G   41.6G    95%    /home
[Fri Jun 12 16:59:23] homeuser@casper: ~ $ dd if=/dev/zero of=file bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 19.375 secs (55416456 bytes/sec)
[Fri Jun 12 16:59:58] homeuser@casper: ~ $ du -h file
1.0G    file
[Fri Jun 12 17:09:53] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    820G   40.6G    95%    /home
[Fri Jun 12 17:10:07] homeuser@casper: ~ $ rm -f file
[Fri Jun 12 17:10:42] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    820G   40.6G    95%    /home


On 12/06/15 15:02, Jan Stary wrote:

> rm -f file




Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Joel Sing
On Friday 12 June 2015, Noth wrote:
 Hi misc@

I've got a couple of softraid 1 volumes on a server and the /home one
 was filling up a bit too much so I had to delete a bunch of isos and
 other non necessary items. I did this yesterday and it still hasn't
 cleared the disk space completely yet:

 # bioctl -ih sd2
 Volume  Status   Size Device
 softraid0 0 Online   910G sd2 RAID1
0 Online   910G 0:0.0   noencl sd0d
1 Online   910G 0:1.0   noencl sd1d

 # df -kh
 Filesystem SizeUsed   Avail Capacity  Mounted on
 /dev/sd3a 19.7G5.6G   13.0G30%/
 /dev/sd2a  906G859G1.2G   100%/home

 /home # du -sh
 782G.

 So there's a rather large disparity there! I've tried issuing sync
 commands a few times over the past 24 hours but to no avail :

This has nothing to do with softraid - softraid is just a virtual HBA that 
provides a SCSI block device, which you've then put a (presumably FFS) file 
system on top of.

 Is there any solution to this apart from waiting for days on end?

You did not actually give the output from mount(8), however I'm going to 
guess - stop using softdeps?
-- 

Action without study is fatal. Study without action is futile.
-- Mary Ritter Beard



Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-12 Thread Noth

On 12/06/15 18:15, Noth wrote:

> On 12/06/15 18:11, Joel Sing wrote:
>> On Saturday 13 June 2015, Joel Sing wrote:
>>> On Friday 12 June 2015, Noth wrote:
>>>> Hi misc@
>>>>
>>>> I've got a couple of softraid 1 volumes on a server and the /home one
>>>> was filling up a bit too much so I had to delete a bunch of isos and
>>>> other non necessary items. I did this yesterday and it still hasn't
>>>> cleared the disk space completely yet:
>>>>
>>>> # bioctl -ih sd2
>>>> Volume      Status           Size Device
>>>> softraid0 0 Online           910G sd2     RAID1
>>>>           0 Online           910G 0:0.0   noencl sd0d
>>>>           1 Online           910G 0:1.0   noencl sd1d
>>>>
>>>> # df -kh
>>>> Filesystem     Size    Used   Avail Capacity  Mounted on
>>>> /dev/sd3a     19.7G    5.6G   13.0G    30%    /
>>>> /dev/sd2a      906G    859G    1.2G   100%    /home
>>>>
>>>> /home # du -sh
>>>> 782G    .
>>>>
>>>> So there's a rather large disparity there! I've tried issuing sync
>>>> commands a few times over the past 24 hours but to no avail :
>>>
>>> This has nothing to do with softraid - softraid is just a virtual HBA that
>>> provides a SCSI block device, which you've then put a (presumably FFS) file
>>> system on top of.
>>
>> Okay, I'll soften this slightly - if you are actually using softdeps you may
>> be encountering an issue with softraid that is slowing/delaying the
>> background processing... I'll see if I can reproduce it here once you
>> confirm.
>>
>>>> Is there any solution to this apart from waiting for days on end?
>>>
>>> You did not actually give the output from mount(8), however I'm going to
>>> guess - stop using softdeps?
>
> That mail didn't make it through for some reason. Here's the mount -v:
>
> /dev/sd2a on /home type ffs (rw, NFS exported, local, nodev, nosuid,
> softdep, ctime=Thu Jun 11 23:51:13 2015)
>
> Also this partition is shared via NFS, SMBFS & AFP. I tried last night
> turning all of those off and issuing sync multiple times to no avail.
> I ended up rebooting but that's not a workable solution. Let me try
> remounting without softdeps and running the 1G file test again (that did
> clear... after 20 mins).




With softdeps turned off:

[Fri Jun 12 18:32:09] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    824G   36.4G    96%    /home
[Fri Jun 12 18:32:14] homeuser@casper: ~ $ dd if=/dev/zero of=file bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 19.521 secs (55003901 bytes/sec)
[Fri Jun 12 18:32:41] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    825G   35.4G    96%    /home
[Fri Jun 12 18:32:50] homeuser@casper: ~ $ rm -f file
[Fri Jun 12 18:32:53] homeuser@casper: ~ $ sync ; sync ; sync ; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    824G   36.4G    96%    /home

Works a lot better! Double tested on the other softraid partition with 
softdeps:


[Fri Jun 12 18:31:45] root@casper: ~ # sync; sync; sync; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    824G   36.4G    96%    /home
[Fri Jun 12 18:34:47] root@casper: ~ # dd if=/dev/zero of=file bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 17.325 secs (61973836 bytes/sec)
[Fri Jun 12 18:35:26] root@casper: ~ # sync; sync; sync; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    6.6G   12.1G    35%    /
/dev/sd2a      906G    824G   36.4G    96%    /home
[Fri Jun 12 18:35:36] root@casper: ~ # rm file
[Fri Jun 12 18:35:38] root@casper: ~ # sync; sync; sync; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    6.6G   12.1G    35%    /
/dev/sd2a      906G    824G   36.4G    96%    /home
[Fri Jun 12 18:37:01] root@casper: ~ # sync; sync; sync; df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd3a     19.7G    5.6G   13.1G    30%    /
/dev/sd2a      906G    824G   36.4G    96%    /home

Could the super-slow update of available space have something to do with
the partition having hit 100% full?




Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-11 Thread Alexander Hall
On June 11, 2015 7:47:43 PM GMT+02:00, Noth nothingn...@citycable.ch wrote:
Hi misc@

 I've got a couple of softraid 1 volumes on a server and the /home one 
was filling up a bit too much so I had to delete a bunch of isos and 
other non necessary items. I did this yesterday and it still hasn't 
cleared the disk space completely yet:

This doesn't sound like a softraid problem. Is some other process holding those 
files open? Does fstat -f /home give a clue?

/Alexander 


# bioctl -ih sd2
Volume  Status   Size Device
softraid0 0 Online   910G sd2 RAID1
   0 Online   910G 0:0.0   noencl sd0d
   1 Online   910G 0:1.0   noencl sd1d

# df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home

/home # du -sh
782G.

So there's a rather large disparity there! I've tried issuing sync 
commands a few times over the past 24 hours but to no avail :

[Thu Jun 11 19:32:03] root@casper: /home # df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home
[Thu Jun 11 19:32:04] root@casper: /home # sync
[Thu Jun 11 19:35:26] root@casper: /home # df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home

dmesg:
OpenBSD 5.7 (GENERIC.MP) #0: Tue May  5 20:04:33 CEST 2015
r...@openbsd64.nineinchnetworks.ch:/root/binpatchng-2.1.2/work-binpatch57-amd64/src/sys/arch/amd64/compile/GENERIC.MP
RTC BIOS diagnostic error 80clock_battery
real mem = 4260089856 (4062MB)
avail mem = 4142768128 (3950MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xceebd000 (28 entries)
bios0: vendor Intel Corp. version CCCDT10N.86A.0032.2012.0323.1510 
date 03/23/2012
bios0: Intel Corporation D2500CC
acpi0 at bios0: rev 2
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC MCFG HPET
acpi0: wakeup devices SLT1(S4) PS2M(S4) PS2K(S4) UAR1(S3) UAR2(S3) 
UAR3(S4) UAR4(S4) USB0(S3) USB1(S3) USB2(S3) USB3(S3) USB7(S3) PXSX(S4)

RP01(S4) PXSX(S4) RP02(S4) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Atom(TM) CPU D2500 @ 1.86GHz, 1867.05 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,ITSC
cpu0: 512KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 7 var ranges, 88 fixed ranges
cpu0: apic clock running at 133MHz
cpu0: mwait min=64, max=64, C-substates=0.1.0.0.0, IBE
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Atom(TM) CPU D2500 @ 1.86GHz, 1866.73 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,ITSC
cpu1: 512KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
ioapic0 at mainbus0: apid 8 pa 0xfec0, version 20, 24 pins
ioapic0: misconfigured as apic 0, remapped to apid 8
acpimcfg0 at acpi0 addr 0xe000, bus 0-63
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 3 (P0P1)
acpiprt2 at acpi0: bus 2 (RP01)
acpiprt3 at acpi0: bus 1 (RP02)
acpiprt4 at acpi0: bus -1 (RP03)
acpiprt5 at acpi0: bus -1 (RP04)
acpicpu0 at acpi0
acpicpu1 at acpi0
acpibtn0 at acpi0: PWRB
acpibtn1 at acpi0: SLPB
acpivideo0 at acpi0: GFX0
acpivout0 at acpivideo0: DD02
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 Intel Atom D2000/N2000 Host rev 0x03
vga1 at pci0 dev 2 function 0 Intel Atom D2000/N2000 Video rev 0x09
intagp at vga1 not configured
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
azalia0 at pci0 dev 27 function 0 Intel 82801GB HD Audio rev 0x02:
msi
azalia0: codecs: Realtek ALC888
audio0 at azalia0
ppb0 at pci0 dev 28 function 0 Intel 82801GB PCIE rev 0x02: msi
pci1 at ppb0 bus 2
em0 at pci1 dev 0 function 0 Intel 82574L rev 0x00: msi, address 
00:22:4d:88:5c:4f
ppb1 at pci0 dev 28 function 1 Intel 82801GB PCIE rev 0x02: msi
pci2 at ppb1 bus 1
em1 at pci2 dev 0 function 0 Intel 82574L rev 0x00: msi, address 
00:22:4d:88:5c:52
uhci0 at pci0 dev 29 function 0 Intel 82801GB USB rev 0x02: apic 8
int 23
uhci1 at pci0 dev 29 function 1 Intel 82801GB USB rev 0x02: apic 8
int 19
uhci2 at pci0 dev 29 function 2 Intel 82801GB USB rev 0x02: apic 8
int 18
uhci3 at pci0 dev 29 function 3 Intel 82801GB USB rev 0x02: apic 8
int 16
ehci0 at pci0 dev 29 function 7 Intel 82801GB USB rev 0x02: apic 8
int 23
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 

Re: Softraid 1 takes forever to declare disk space free after delete

2015-06-11 Thread Noth

No clue comes from it...

# fstat -f /home
USER     CMD              PID   FD MOUNT     INUM MODE       R/W    SZ|DV
root     fstat          30300   wd /home        2 drwxr-xr-x   r      512
homeuser imap           29865   wd /home 12595200 drwxr-xr-x   r     3584
homeuser imap           32331   wd /home 12595200 drwxr-xr-x   r     3584
homeuser imap           21323   wd /home 12595200 drwxr-xr-x   r     3584
homeuser screen         14447   wd /home 12595200 drwxr-xr-x   r     3584
homeuser ksh             9939   wd /home 12595200 drwxr-xr-x   r     3584
homeuser ksh             9939   11 /home 12595654 -rw-r--r--  rw    13169
nobody   openvpn        10544    3 /home        3 -rw-------   w      232
nobody   openvpn         5537    3 /home        3 -rw-------   w      232
botuser  eggdrop-1.6.21 24013 text /home 11232325 -rwxr-xr-x   r  2388189
botuser  eggdrop-1.6.21 24013   wd /home 11230720 drwxr-xr-x   r     1024
botuser  eggdrop-1.6.21 24013    5 /home 11230916 -rw-r--r--  rw        0
botuser  eggdrop-1.6.21 24013    7 /home 11230910 -rw-r--r--  rw        0
botuser  eggdrop-1.6.21 24013    8 /home 11232810 -rw-r--r--  rw      898
botuser  eggdrop-1.6.21 24013    9 /home 11232816 -rw-r--r--  rw     3435
homeuser irssi          11827   wd /home 12595200 drwxr-xr-x   r     3584
homeuser irssi          11827   10 /home 12596270 -rw-------   w  4251372
homeuser irssi          11827   13 /home 12595263 -rw-------   w  2936014
homeuser irssi          11827   14 /home 12595285 -rw-------   w  5120541
homeuser irssi          11827   16 /home 12595286 -rw-------   w 70090812
homeuser irssi          11827   19 /home 12595304 -rw-------   w 21572482
homeuser irssi          11827   21 /home 12595301 -rw-------   w 32415199
homeuser ksh            10774   wd /home 12595200 drwxr-xr-x   r     3584
homeuser ksh            10774   12 /home 12595654 -rw-r--r--  rw    13169
homeuser screen          4623   wd /home 12595200 drwxr-xr-x   r     3584
root     ksh            25183   wd /home        2 drwxr-xr-x   r      512

On 11/06/15 21:55, Alexander Hall wrote:

On June 11, 2015 7:47:43 PM GMT+02:00, Noth nothingn...@citycable.ch wrote:

Hi misc@

I've got a couple of softraid 1 volumes on a server and the /home one
was filling up a bit too much so I had to delete a bunch of isos and
other non necessary items. I did this yesterday and it still hasn't
cleared the disk space completely yet:

This doesn't sound like a softraid problem. Is some other process holding those 
files open? Does fstat -f /home give a clue?

/Alexander


# bioctl -ih sd2
Volume  Status   Size Device
softraid0 0 Online   910G sd2 RAID1
   0 Online   910G 0:0.0   noencl sd0d
   1 Online   910G 0:1.0   noencl sd1d

# df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home

/home # du -sh
782G.

So there's a rather large disparity there! I've tried issuing sync
commands a few times over the past 24 hours but to no avail :

[Thu Jun 11 19:32:03] root@casper: /home # df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home
[Thu Jun 11 19:32:04] root@casper: /home # sync
[Thu Jun 11 19:35:26] root@casper: /home # df -kh
Filesystem SizeUsed   Avail Capacity  Mounted on
/dev/sd3a 19.7G5.6G   13.0G30%/
/dev/sd2a  906G859G1.2G   100%/home

dmesg:
OpenBSD 5.7 (GENERIC.MP) #0: Tue May  5 20:04:33 CEST 2015
r...@openbsd64.nineinchnetworks.ch:/root/binpatchng-2.1.2/work-binpatch57-amd64/src/sys/arch/amd64/compile/GENERIC.MP
RTC BIOS diagnostic error 80clock_battery
real mem = 4260089856 (4062MB)
avail mem = 4142768128 (3950MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xceebd000 (28 entries)
bios0: vendor Intel Corp. version CCCDT10N.86A.0032.2012.0323.1510
date 03/23/2012
bios0: Intel Corporation D2500CC
acpi0 at bios0: rev 2
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC MCFG HPET
acpi0: wakeup devices SLT1(S4) PS2M(S4) PS2K(S4) UAR1(S3) UAR2(S3)
UAR3(S4) UAR4(S4) USB0(S3) USB1(S3) USB2(S3) USB3(S3) USB7(S3) PXSX(S4)

RP01(S4) PXSX(S4) RP02(S4) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Atom(TM) CPU D2500 @ 1.86GHz, 1867.05 MHz
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,ITSC
cpu0: 512KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 7 var ranges, 88 fixed ranges
cpu0: apic clock running at 133MHz
cpu0: mwait min=64, max=64, C-substates=0.1.0.0.0, IBE
cpu1 at mainbus0: apid 1 (application 

Re: Softraid question

2015-05-31 Thread Stefan Sperling
On Sat, May 30, 2015 at 08:07:07PM -0600, Duncan Patton a Campbell wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> So now I'm in recovery mode and it is doing about 1% per hour
> (it's a 2TB RAID1). Is this normal

That's normal. Just let it run until it's done. You can reboot
the machine if you like, recovery will continue where it left off.

> and can it be sped up from userland?

No.
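
(It can at least be watched from userland: re-running bioctl on the volume
shows the rebuild state in the Status column, so a loop like the one below
gives a rough sense of the ETA. Sketch only, volume name assumed:)

# while sleep 600; do bioctl sd2; done

(The Status column switches from Rebuild back to Online once the copy is
complete and the array is fully redundant again.)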



Re: Softraid question

2015-05-31 Thread Duncan Patton a Campbell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Sun, 31 May 2015 10:20:25 +0200
Stefan Sperling s...@stsp.name wrote:

> On Sat, May 30, 2015 at 08:07:07PM -0600, Duncan Patton a Campbell
> wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> So now I'm in recovery mode and it is doing about 1% per hour
>> (it's a 2TB RAID1). Is this normal
>
> That's normal. Just let it run until it's done. You can reboot
> the machine if you like, recovery will continue where it left off.
 

Good to know ;)

Thanks

>> and can it be sped up from userland?
>
> No.
 
 
 


- -- 

https://babayaga.neotext.ca/PublicKeys/Duncan_Patton_a_Campbell_pubkey.txt

Ne obliviscaris, vix ea nostra voco.
iF4EAREIAAYFAlVqzSQACgkQiY6AzzR1lzwrCQD+MlSwy2DTAGH0853JkuFVfHe1
X3oaMoqVumXIOp0tgzcA/1wsA1CHF6B3gX/Hy5y/SOCeYARH6y/jzakbzOsynqf2
=wiNr
-END PGP SIGNATURE-



Re: Softraid question

2015-05-30 Thread Duncan Patton a Campbell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


So now I'm in recovery mode and it is doing about 1% per hour
(it's a 2TB RAID1).  Is this normal and can it be sped
up from userland?

Thanks,

Dhu

On Thu, 21 May 2015 04:24:22 -0400
Jiri B ji...@devio.us wrote:

>>> bioctl sd3 ?
>>>
>>> j.
>>
>> # bioctl sd3
>> Volume      Status                  Size Device
>> softraid0 0 Degraded       2000396018176 sd3     RAID1
>>           0 Offline                    0 0:0.0   noencl
>>           1 Online         2000396018176 0:1.0   noencl sd1a
>
> So you got the answer, full dmesg would show you more...
>
> http://www.openbsd.org/faq/faq14.html#softraid
 


- -- 

https://babayaga.neotext.ca/PublicKeys/Duncan_Patton_a_Campbell_pubkey.txt

Ne obliviscaris, vix ea nostra voco.
iF4EAREIAAYFAlVqbMsACgkQiY6AzzR1lzzE1gD+Kg3FR0TP0ltEAyFLeTSKfVIS
7K+32qduAWRs0gONcDcA/2CLh0Iq5AURbE1at8RSAfy+l1TV6C4GBj3aEj2r4z9R
=rZXw
-END PGP SIGNATURE-



Re: Softraid question

2015-05-21 Thread Duncan Patton a Campbell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Thu, 21 May 2015 03:14:24 -0400
Jiri B ji...@devio.us wrote:

> On Wed, May 20, 2015 at 09:58:30PM -0600, Duncan Patton a Campbell
> wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> I appear to have a disk failure of some kind.
>>
>> softraid0 at root
>> scsibus4 at softraid0: 256 targets
>> softraid0: not all chunks were provided; attempting to bring volume
>> 0 online softraid0: trying to bring up sd3 degraded
>> sd3 at scsibus4 targ 1 lun 0: OPENBSD, SR RAID 1, 005 SCSI2
>> 0/direct fixed sd3: 1907726MB, 512 bytes/sector, 3907023473 sectors
>>
>> Which mebbe is ok, but how to determine which physical disk is
>> having problems (if it is) and what to do about it?
>>
>> Any pointers or info would be helpful... (yes, I am reading the man
>> pages, but they talk about whole disk failures, not sector/block
>> ones etc.)
>
> bioctl sd3 ?
>
> j.
 
# bioctl sd3 
Volume  Status   Size Device  
softraid0 0 Degraded2000396018176 sd3 RAID1 
  0 Offline 0 0:0.0   noencl 
  1 Online  2000396018176 0:1.0   noencl sd1a

Dhu

- -- 

https://babayaga.neotext.ca/PublicKeys/Duncan_Patton_a_Campbell_pubkey.txt

Ne obliviscaris, vix ea nostra voco.
iF4EAREIAAYFAlVdkEsACgkQiY6AzzR1lzyingD8D7VRdyQO3MCi9raz/8FdU3LO
kE2VqeiuDudUdWqL9vIBAKd7FXhvZVfDcgUxYHNjsXWQA/Xk2qVb8cTJGgoQ6aIe
=scc+
-END PGP SIGNATURE-



Re: Softraid question

2015-05-21 Thread Jiri B
>> bioctl sd3 ?
>>
>> j.
>
> # bioctl sd3
> Volume      Status                  Size Device
> softraid0 0 Degraded       2000396018176 sd3     RAID1
>           0 Offline                    0 0:0.0   noencl
>           1 Online         2000396018176 0:1.0   noencl sd1a


So you got the answer, full dmesg would show you more...

http://www.openbsd.org/faq/faq14.html#softraid

j.
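
(For the archives, the recovery path from here is to replace the dead disk,
recreate its RAID partition, and rebuild onto it with bioctl -R; roughly,
assuming the replacement probes as sd0 and gets its RAID partition at sd0a:)

# fdisk -iy sd0
# disklabel -E sd0        (recreate the 'a' partition with fstype RAID)
# bioctl -R /dev/sd0a sd3

(bioctl then shows the volume rebuilding; it returns to Online when the
copy has finished.)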



Re: Softraid question

2015-05-21 Thread Jiri B
On Wed, May 20, 2015 at 09:58:30PM -0600, Duncan Patton a Campbell wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> I appear to have a disk failure of some kind.
>
> softraid0 at root
> scsibus4 at softraid0: 256 targets
> softraid0: not all chunks were provided; attempting to bring volume 0 online
> softraid0: trying to bring up sd3 degraded
> sd3 at scsibus4 targ 1 lun 0: OPENBSD, SR RAID 1, 005 SCSI2 0/direct fixed
> sd3: 1907726MB, 512 bytes/sector, 3907023473 sectors
>
> Which mebbe is ok, but how to determine which physical disk is having problems
> (if it is) and what to do about it?
>
> Any pointers or info would be helpful... (yes, I am reading the man pages,
> but they talk about whole disk failures, not sector/block ones etc.)

bioctl sd3 ?

j.



Re: Softraid question

2015-05-21 Thread Duncan Patton a Campbell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Thu, 21 May 2015 04:24:22 -0400
Jiri B ji...@devio.us wrote:

>>> bioctl sd3 ?
>>>
>>> j.
>>
>> # bioctl sd3
>> Volume      Status                  Size Device
>> softraid0 0 Degraded       2000396018176 sd3     RAID1
>>           0 Offline                    0 0:0.0   noencl
>>           1 Online         2000396018176 0:1.0   noencl sd1a
>
> So you got the answer, full dmesg would show you more...
>
> http://www.openbsd.org/faq/faq14.html#softraid
>
> j.
 

Yes, thanks.

Dhu

 
 


- -- 

https://babayaga.neotext.ca/PublicKeys/Duncan_Patton_a_Campbell_pubkey.txt

Ne obliviscaris, vix ea nostra voco.
iF4EAREIAAYFAlVeIGkACgkQiY6AzzR1lzyinQD/ctYpvbDfGaJFycSXtad6bJAj
dnQP6WgRxVdmzbfgLCoA/3hWCxRCojh9eElXseXK/fj8Ibt81V3/vRscGf/0iuAc
=9ksL
-END PGP SIGNATURE-



Re: softraid crypto root with serial console?

2014-11-06 Thread John Merriam

On Thu, 6 Nov 2014, Joel Sing wrote:

On Thu, 6 Nov 2014, TJ wrote:

On Wed, Nov 05, 2014 at 11:33:21PM -0500, Ted Unangst wrote:

If you look in sys/arch/amd64/stand/libsa/bioscons.c you'll see two
functions, pc_probe and com_probe, which set cn->cn_pri. You'll need
to swap MIDPRI and LOWPRI and rebuild, then rerun installboot.



Alternatively, you could make a small unencrypted 'a' partition on the
disk with just an /etc/boot.conf file that contains the following:

set tty com0
boot sr0a:/bsd

Then do the crypto softraid install to another partition and it should
boot like you'd expect.

See: http://permalink.gmane.org/gmane.os.openbsd.misc/206003


Thanks for that link, I searched and couldn't find anything.



For now this is what I would recommend...


Hmmm.  Looking at the code, I'm going to guess the answer is no, but is 
it expected that this issue of changing the console with crypto root is 
going to be addressed soon (like maybe in one of the next 3 or 4 
releases)?


If it is expected that there will be a solution relatively soon I will 
change the code and recompile for now.  If not, I'll go with the small a 
partition.


Note that I am not expecting that it be fixed, just wondering if it is 
planned.  It is a bit of a conundrum.  I like not having a /boot hanging 
out there like they do in other OSes, but how to change the boot 
parameters without access to a filesystem...


Thanks!

--

John Merriam
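
(A sketch of the small-'a'-partition workaround discussed above, assuming
the boot disk is sd0, its 'a' partition was left unencrypted, and the
crypto volume's root lives at sr0a; names are illustrative only:)

# newfs sd0a
# mount /dev/sd0a /mnt
# mkdir /mnt/etc
# cat > /mnt/etc/boot.conf << EOF
set tty com0
boot sr0a:/bsd
EOF
# umount /mnt
# installboot sd0

(boot(8) reads /etc/boot.conf from the partition it was loaded from, so the
console is switched to com0 before the softraid passphrase prompt appears.)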


