OT: OpenBSD NFS Performance

2018-11-17 Thread Predrag Punosevac
Jordan Geoghegan wrote:
> On 11/17/18 10:53, Predrag Punosevac wrote:
> > On Sat, Nov 17, 2018 at 01:35:05AM +0100, Willi Rauffer wrote:
> >
> >> Hello,
> >>
> >> we want to make one logical volume out of several physical volumes, but
> >> there is no LVM (Logical Volume Manager) in OpenBSD!
> >> Will there be a LVM in OpenBSD in the future?
> >>
> >> Thanks...Willi Rauffer, UNOBank.org
> > P.S. OpenBSD's NFSv3 server and client implementation is pretty slow so
> > that begs the question how you are going to access that data pool.
> >
> I have an OpenBSD 6.3 NFS server, and it is able to achieve gigabit line
> speed no problem. I've transferred hundreds of terabytes through that
> thing and it hasn't let me down once. Most of the NFS clients
> connected to it are CentOS 7 machines, and after a bit of fiddling,
> line speed was achieved without issue.

I can believe that, as NFS read performance is primarily client-driven.

> The OpenBSD NFS client does seem to be a tad slow though, and much
> fiddling was required to get anywhere close to line speed with it.

As I already said, NFS read performance is primarily client-driven.
Setting the read-ahead (for example, mount_nfs -a 4) is the biggest
performance driver for reads. Unsurprisingly, OpenBSD defaults to -a 1.

predrag@oko$ more /etc/fstab|grep nfs
192.168.3.2:/data/nfs/hammer nfs rw,noatime,-a=4 0 0
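The same thing as a one-off mount, with the read/write transfer sizes
thrown in as additional knobs worth experimenting with (the 32k values
below are just a sketch, not measured recommendations):

  # mount_nfs -a 4 -r 32768 -w 32768 192.168.3.2:/data/nfs/hammer /mnt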

Most of what I know about these topics comes from this wonderful post
by Matt Dillon:

https://marc.info/?l=openbsd-misc&m=146130062830832&w=2

I would be very interested to learn what you have done to get the
OpenBSD NFS client close to 1 Gigabit (although at work I only use 10
Gigabit or InfiniBand gear, so even 1 Gigabit is only of interest for
my home setup).

Cheers,
Predrag

P.S. Just for the record, I would much rather see WAPBL ported and
fully functional on OpenBSD than NFS performance improvements or even
HAMMER2. WAPBL would actually make a real difference for my
firewall/embedded OpenBSD deployments. HAMMER2 would be nice to have on
my OpenBSD laptop, but I can live without it.



Error on 6.4 Errata page

2018-11-17 Thread schwack
On: 

https://www.openbsd.org/errata64.html

line 9: 

https://www.openbsd.org/errata63.html

should be:

https://www.openbsd.org/errata64.html



A small newfs puzzle.

2018-11-17 Thread R. Clayton
I'm on this

  # uname -a
  OpenBSD AngkorWat.rclayton.net 6.4 GENERIC.MP#364 amd64

  # 

and I'm trying to write some file systems on this

  # disklabel -p g sd1
  # /dev/rsd1c:
  type: SCSI
  disk: SCSI disk
  label: Rugged FW USB3  
  duid: 7e82b7f3472419e3
  flags:
  bytes/sector: 512
  sectors/track: 63
  tracks/cylinder: 255
  sectors/cylinder: 16065
  cylinders: 243201
  total sectors: 3907029168 # total bytes: 1863.0G
  boundstart: 0
  boundend: 3907029168
  drivedata: 0 

  16 partitions:
  #                size           offset  fstype [fsize bsize   cpg]
    c:          1863.0G                0  unused
    i:           600.0G              128  4.2BSD   8192 65536     1
    j:           600.0G       1258291328  4.2BSD   8192 65536     1
    k:           663.0G       2516582528  4.2BSD   8192 65536     1

  # 

so I do this

  # newfs sd1i
  /dev/rsd1i: 614399.9MB in 1258291072 sectors of 512 bytes
  189 cylinder groups of 3264.88MB, 52238 blocks, 104960 inodes each
  super-block backups (for fsck -b #) at:
   128, 6686592, 13373056, 20059520, 26745984, 33432448, 40118912, 46805376,
   53491840, 60178304, 66864768, 73551232, 80237696, 86924160, 93610624, 100297088,
   106983552, 113670016, 120356480, 127042944, 133729408, 140415872, 147102336, 153788800,
   160475264, 167161728, 173848192, 180534656, 187221120, 193907584, 200594048, 207280512,
   213966976, 220653440, 227339904, 234026368, 240712832, 247399296, 254085760, 260772224,
   267458688, 274145152, 280831616, 287518080, 294204544, 300891008, 307577472, 314263936,
   320950400, 327636864, 334323328, 341009792, 347696256, 354382720, 361069184, 367755648,
   374442112, 381128576, 387815040, 394501504, 401187968, 407874432, 414560896, 421247360,
   427933824, 434620288, 441306752, 447993216, 454679680, 461366144, 468052608, 474739072,
   481425536, 488112000, 494798464, 501484928, 508171392, 514857856, 521544320, 528230784,
   534917248, 541603712, 548290176, 554976640, 561663104, 568349568, 575036032, 581722496,
   588408960, 595095424, 601781888, 608468352, 615154816, 621841280, 628527744, 635214208,
   641900672, 648587136, 655273600, 661960064, 668646528, 675332992, 682019456, 688705920,
   695392384, 702078848, 708765312, 715451776, 722138240, 728824704, 735511168, 742197632,
   748884096, 755570560, 762257024, 768943488, 775629952, 782316416, 789002880, 795689344,
   802375808, 809062272, 815748736, 822435200, 829121664, 835808128, 842494592, 849181056,
   855867520, 862553984, 869240448, 875926912, 882613376, 889299840, 895986304, 902672768,
   909359232, 916045696, 922732160, 929418624, 936105088, 942791552, 949478016, 956164480,
   962850944, 969537408, 976223872, 982910336, 989596800, 996283264, 1002969728, 1009656192,
   1016342656, 1023029120, 1029715584, 1036402048, 1043088512, 1049774976, 1056461440, 1063147904,
   1069834368, 1076520832, 1083207296, 1089893760, 1096580224, 1103266688, 1109953152, 1116639616,
   1123326080, 1130012544, 1136699008, 1143385472, 1150071936, 1156758400, 1163444864, 1170131328,
   1176817792, 1183504256, 1190190720, 1196877184, 1203563648, 1210250112, 1216936576, 1223623040,
   1230309504, 1236995968, 1243682432, 1250368896, 1257055360,
  newfs: ioctl (WDINFO): Input/output error
  newfs: /dev/rsd1i: can't rewrite disk label

  #

and that doesn't look too cool, so I try this

  # fsck /dev/sd1i
  ** /dev/rsd1i
  ** File system is clean; not checking

  #

Huh.  But newfs is cheap, so I try it again

  # newfs sd1i
  /dev/rsd1i: 614399.9MB in 1258291072 sectors of 512 bytes
  189 cylinder groups of 3264.88MB, 52238 blocks, 104960 inodes each
  super-block backups (for fsck -b #) at:
   128, 6686592, 13373056, 20059520, 26745984, 33432448, 40118912, 46805376,
   53491840, 60178304, 66864768, 73551232, 80237696, 86924160, 93610624, 100297088,
   106983552, 113670016, 120356480, 127042944, 133729408, 140415872, 147102336, 153788800,
   160475264, 167161728, 173848192, 180534656, 187221120, 193907584, 200594048, 207280512,
   213966976, 220653440, 227339904, 234026368, 240712832, 247399296, 254085760, 260772224,
   267458688, 274145152, 280831616, 287518080, 294204544, 300891008, 307577472, 314263936,
   320950400, 327636864, 334323328, 341009792, 347696256, 354382720, 361069184, 367755648,
   374442112, 381128576, 387815040, 394501504, 401187968, 407874432, 414560896, 421247360,
   427933824, 434620288, 441306752, 447993216, 454679680, 461366144, 468052608, 474739072,
   481425536, 488112000, 494798464, 501484928, 508171392, 514857856, 521544320, 528230784,
   534917248, 541603712, 548290176, 554976640, 561663104, 568349568, 575036032, 581722496,
   588408960, 595095424, 601781888, 608468352, 615154816, 621841280, 628527744, 635214208,
   641900672,

Re: Missing LVM (Logical Volume Manager)

2018-11-17 Thread Jordan Geoghegan

On 11/17/18 10:53, Predrag Punosevac wrote:

> On Sat, Nov 17, 2018 at 01:35:05AM +0100, Willi Rauffer wrote:
>
>> Hello,
>>
>> we want to make one logical volume out of several physical volumes, but
>> there is no LVM (Logical Volume Manager) in OpenBSD!
>> Will there be a LVM in OpenBSD in the future?
>>
>> Thanks...Willi Rauffer, UNOBank.org
>
> P.S. OpenBSD's NFSv3 server and client implementation is pretty slow so
> that begs the question how you are going to access that data pool.

I have an OpenBSD 6.3 NFS server, and it is able to achieve gigabit line
speed no problem. I've transferred hundreds of terabytes through that
thing and it hasn't let me down once. Most of the NFS clients connected
to it are CentOS 7 machines, and after a bit of fiddling, line speed was
achieved without issue. The OpenBSD NFS client does seem to be a tad
slow though, and much fiddling was required to get anywhere close to
line speed with it.




Re: OpenBSD migration

2018-11-17 Thread Ken M
On Sat, Nov 17, 2018 at 10:42:57PM +0100, Ingo Schwarze wrote:
> Hi Martin,
> 
> Martin Sukany wrote on Sat, Nov 17, 2018 at 09:13:15PM +0100:
> 
> > I want to migrate OpenBSD 6.4 (stable) from VM to bare metal. I see, as 
> > usual, two options:
> > 
> > 1) install everything from scratch
> > 2) create some flashimage (I did such a thing on Solaris a few years ago)
> > and apply the image on new hw.
> 
> I'd recommend option 1), reinstall.
> 
> I have no idea whether or not option 2) will work.  It may or may not.
> If it doesn't, you end up doing a reinstall anyway, and nobody will be
> interested in the reasons why it didn't work for you.  Such a thing
> simply isn't supported.
> 
> Yours,
>   Ingo
> 
I second reinstall. If you are concerned about setting things up the
same, I tend to use a couple of tricks to make my setup portable. I
have a tgz I created of important ~/.config items, mostly related to my
openbox setup. I have git repos for my vim, mutt, fish, and
~/.local/bin items. I have a tgz of important /etc items, particularly
vm.conf virtual host info for vmm, pf, etc.

Lastly, I have a master file of my installed packages created from
pkg_info (forget the specific flags), but you can feed that file to
pkg_add and let it install everything for you.
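If memory serves, it is something along these lines (treat the exact
flags as a sketch and check the man pages):

  $ pkg_info -mz > pkglist   # manually installed packages, names without versions
  # pkg_add -l pkglist       # on the new machine, reinstall the whole list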

Ken



Re: OpenBSD migration

2018-11-17 Thread Ingo Schwarze
Hi Martin,

Martin Sukany wrote on Sat, Nov 17, 2018 at 09:13:15PM +0100:

> I want to migrate OpenBSD 6.4 (stable) from VM to bare metal. I see, as 
> usual, two options:
> 
> 1) install everything from scratch
> 2) create some flashimage (I did such a thing on Solaris a few years ago)
> and apply the image on new hw.

I'd recommend option 1), reinstall.

I have no idea whether or not option 2) will work.  It may or may not.
If it doesn't, you end up doing a reinstall anyway, and nobody will be
interested in the reasons why it didn't work for you.  Such a thing
simply isn't supported.

Yours,
  Ingo



OpenBSD migration

2018-11-17 Thread Martin Sukany

Hi,

I want to migrate OpenBSD 6.4 (stable) from VM to bare metal. I see, as 
usual, two options:


1) install everything from scratch
2) create some flashimage (I did such a thing on Solaris a few years ago)
and apply the image on new hw.


I'd be glad for any personal experience / recommendations.

NOTE: The server is not so important, so downtime is not a problem here

M>



Re: Missing LVM (Logical Volume Manager)

2018-11-17 Thread Misc User

On 11/17/2018 10:53 AM, Predrag Punosevac wrote:

> On Sat, Nov 17, 2018 at 01:35:05AM +0100, Willi Rauffer wrote:
>
>> Hello,
>>
>> we want to make one logical volume out of several physical volumes, but
>> there is no LVM (Logical Volume Manager) in OpenBSD!
>> Will there be a LVM in OpenBSD in the future?
>>
>> Thanks...Willi Rauffer, UNOBank.org
>
> There are people on this mailing list infinitely more knowledgeable and
> experienced than I am, both with Linux and the BSDs, so they will
> correct my claims if necessary.
>
> In my experience, using LVM2 (LVM is deprecated) to create software
> RAID even on Linux (I have the most experience with RHEL) is a bad idea
> unless you believe the RedHat PR BS. Most people, myself included, if
> they have to use softraid on Linux, prefer to do it with mdadm (the
> softraid discipline for Linux) and then perhaps put LVM on top of it,
> although I fail to see the purpose. In light of the lack of a modern
> file system on Linux (Btrfs is vaporware and ZFS is an external kernel
> module which lags many version numbers behind Solaris and FreeBSD),
> some PR guys from RedHat even started advertising LVM2 snapshots as
> real snapshots. That is pure BS, as they are a very expensive operation
> and for all practical purposes useless on the legacy file system XFS,
> which is really the only stable FS on Linux. If you are storing your
> data on Linux you should be using hardware RAID and XFS.
>
> Not having LVM2 on OpenBSD is a feature, not a bug!  DragonFly BSD has
> a partial, not really functional implementation of LVM that I am quite
> familiar with. IIRC NetBSD has an LVM2 implementation, but it is hard
> for me to judge its usefulness as I have never used it.
>
> As somebody mentioned, OpenBSD softraid can be used to manage logical
> volumes
>
> oko# bioctl softraid0
> Volume  Status   Size Device
> softraid0 0 Online  2000396018176 sd3 RAID1
>    0 Online  2000396018176 0:0.0   noencl
>    1 Online  2000396018176 0:1.0   noencl
>
> but it is quite crude, and it will take you more than a week to rebuild
> a simple 10 TB mirror. IMHO softraid is far more useful for drive
> encryption, on your laptop for example, than for data storage. I don't
> have any experience with hardware RAID cards on OpenBSD (Areca should
> have really good support), which I do prefer over softraid (but not
> over ZFS). However, OpenBSD lacks a modern file system (read: HAMMER or
> HAMMER2) to take advantage of such a setup.
>
> Best,
> Predrag
>
> P.S. OpenBSD's NFSv3 server and client implementation is pretty slow so
> that begs the question how you are going to access that data pool.



I concur, software raid is a bug, not a feature, especially since, if
you truly need RAID, hardware cards are fairly cheap.  But if you can't
afford such a card, a fairly reliable method is to just replicate the
/altroot scheme with all your partitions.  Even just using an external
drive that you do periodic backups to is more reliable than software
raid.  For the most part, I've actually seen more failures with
softraid than with just independent disks, even between systems where
the only difference is the serial number being slightly incremented
(softraid, no matter how well coded, still causes far more disk usage
than a normal un-raided disk).
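
The stock single-partition version of that scheme is roughly this
(a sketch; the duid below is made up, and the backup partition is
never actually mounted):

  # /etc/fstab entry pointing at the backup disk's partition
  0123456789abcdef.a /altroot ffs xx 0 0

  # /etc/daily.local
  ROOTBACKUP=1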


Although, really, if you need reliability, it is much cheaper, less
effort intensive, and more reliable to just grab a bunch of low-end
systems and cluster them together.  I have a small cluster of 5 crusty
old SunFire V120 boxes that've been running OpenBSD for nearly 10 years
as my firewalls, and I'm just running a single disk in each.  Each of
them has had at least a couple of failures over the years (disks, RAM
modules, motherboards, power supplies, etc), but collectively they've
had 100% reliability, even counting time for required reboots for
upgrades, patches, and other maintenance.


Overall, I've found that software raid systems are only good for
supporting whole-disk crypto and nothing else.  Otherwise you are just
adding an unnecessary performance penalty, killing your disks faster,
and making recovery much more of a pain in the ass.


-C




Re: Missing LVM (Logical Volume Manager)

2018-11-17 Thread Predrag Punosevac
On Sat, Nov 17, 2018 at 01:35:05AM +0100, Willi Rauffer wrote:

> Hello,
> 
> we want to make one logical volume out of several physical volumes, but
> there is no LVM (Logical Volume Manager) in OpenBSD!
> Will there be a LVM in OpenBSD in the future?
> 
> Thanks...Willi Rauffer, UNOBank.org

There are people on this mailing list infinitely more knowledgeable and
experienced than I am, both with Linux and the BSDs, so they will
correct my claims if necessary.

In my experience, using LVM2 (LVM is deprecated) to create software
RAID even on Linux (I have the most experience with RHEL) is a bad idea
unless you believe the RedHat PR BS. Most people, myself included, if
they have to use softraid on Linux, prefer to do it with mdadm (the
softraid discipline for Linux) and then perhaps put LVM on top of it,
although I fail to see the purpose. In light of the lack of a modern
file system on Linux (Btrfs is vaporware and ZFS is an external kernel
module which lags many version numbers behind Solaris and FreeBSD),
some PR guys from RedHat even started advertising LVM2 snapshots as
real snapshots. That is pure BS, as they are a very expensive operation
and for all practical purposes useless on the legacy file system XFS,
which is really the only stable FS on Linux. If you are storing your
data on Linux you should be using hardware RAID and XFS.

Not having LVM2 on OpenBSD is a feature, not a bug!  DragonFly BSD has
a partial, not really functional implementation of LVM that I am quite
familiar with. IIRC NetBSD has an LVM2 implementation, but it is hard
for me to judge its usefulness as I have never used it.

As somebody mentioned, OpenBSD softraid can be used to manage logical
volumes

oko# bioctl softraid0
Volume  Status   Size Device  
softraid0 0 Online  2000396018176 sd3 RAID1 
  0 Online  2000396018176 0:0.0   noencl 
  1 Online  2000396018176 0:1.0   noencl 

but it is quite crude, and it will take you more than a week to rebuild
a simple 10 TB mirror. IMHO softraid is far more useful for drive
encryption, on your laptop for example, than for data storage. I don't
have any experience with hardware RAID cards on OpenBSD (Areca should
have really good support), which I do prefer over softraid (but not
over ZFS). However, OpenBSD lacks a modern file system (read: HAMMER or
HAMMER2) to take advantage of such a setup.
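
For completeness, building such a mirror is a one-liner, assuming each
disk already carries a RAID-fstype partition (the device names below
are made up):

oko# bioctl -c 1 -l sd1a,sd2a softraid0

and the laptop encryption use I mentioned is the same tool with the C
discipline:

oko# bioctl -c C -l sd2a softraid0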


Best,
Predrag  

P.S. OpenBSD's NFSv3 server and client implementation is pretty slow so
that begs the question how you are going to access that data pool. 



Re: File sets on internet exposed server

2018-11-17 Thread Aham Brahmasmi
Thank you Robert and Stuart for your helpful responses.

> Skipping X and games is usually safe. The compilers might be a bad
> idea unless you're only installing software from ports.

Yes, the current plan is to install only from ports for now.
 
> If you aren't using those packages which use libraries from xbase, you
> *could* skip installing it, but make a note of it so that if you later
> run pkg_add and get weird errors about missing libraries, you know what
> you've done.

Thanks for the detailed explanation. Will make a note. Most likely, the
programs installed from ports should be fine.

So, it is: -comp* -game* -x*
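
At the installer's set selection prompt, that should look something
like:

  Set name(s)? (or 'abort' or 'done') [done] -comp* -game* -x*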

Regards,
ab
-|-|-|-|-|-|-|--



Re: Missing LVM (Logical Volume Manager)

2018-11-17 Thread Otto Moerbeek
On Sat, Nov 17, 2018 at 01:35:05AM +0100, Willi Rauffer wrote:

> Hello,
> 
> we want to make one logical volume out of several physical volumes, but there 
> is no LVM (Logical Volume Manager) in OpenBSD!
> 
> Will there be a LVM in OpenBSD in the future?
> 
> Thanks...Willi Rauffer, UNOBank.org

Probably not, but we have something that can do some of what LVM
does: softraid.
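
For the concatenation case asked about, something like this should
work (an untested sketch; the device names are examples, and each disk
needs a RAID-fstype partition first):

  # bioctl -c c -l sd1a,sd2a softraid0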

BTW, bugs@ is not the proper mailing list for this question.
Redirected to misc@

-Otto