Re: radeondrm failure on amd64 but not on i386?

2018-11-18 Thread Andy Bradford
Thus said Jonathan Gray on Sat, 17 Nov 2018 14:08:53 +1100:

> There are many ways of getting an atom bios; it would be helpful to
> know which method is having trouble.

Thanks for the suggestion. Here's the additional output provided by your
patch:

radeon_atrm_get_bios false
radeon_acpi_vfct_bios false
igp_read_bios_from_vram false
radeon_read_bios false
radeon_read_disabled_bios true
drm:pid0:r600_init *ERROR* Expecting atombios for R600 GPU
drm:pid0:radeondrm_attachhook *ERROR* Fatal error during GPU init
[TTM] Memory type 2 has not been initialized
drm0 detached
radeondrm0 detached

Thanks,

Andy
-- 
TAI64 timestamp: 40005bf24e83




BGP looking glass in one rdomain, bgpd in another rdomain

2018-11-18 Thread Tom Smyth
Hello,

I have a looking glass that I want to run on a management interface
that is in a separate rdomain from the BGP router ...

Is there a way we can have the bgpd process in one rdomain (the main
rdomain) and the BGP looking glass in another rdomain?

Currently I have httpd in rdomain 240, and slowcgi is running in
rdomain 0.

ping works, but not the bgp commands ...

I tried setting slowcgi flags but they just didn't take.

Do I need to run slowcgi with route -T240 exec slowcgi?
(which would put the entire bgplg and the BGP collector in the same rdomain)
Any suggestions are welcome ... thanks
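One thing worth trying (a sketch only; rdomain 240 is taken from the setup
described above, and whether bgplg then reaches bgpd still depends on where
bgpd's control socket is reachable) is pinning slowcgi to httpd's routing
table using the rc.d rtable support:

```shell
# Run slowcgi in routing table 240 so it matches httpd's rdomain;
# the commands bgplg spawns then inherit that routing table.
rcctl set slowcgi rtable 240
rcctl restart slowcgi

# One-off equivalent, without persisting anything in rc.conf.local:
# route -T240 exec slowcgi
```

As the message notes, this moves the whole bgplg stack into the same
rdomain as httpd, so it is a trade-off rather than true cross-rdomain
operation.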



Re: Missing LVM (Logical Volume Manager)

2018-11-18 Thread Misc User

On 11/18/2018 2:54 AM, Stuart Henderson wrote:

On 2018-11-17, Misc User  wrote:

I concur, software raid is a bug, not a feature, especially since, if
you truly need RAID, hardware cards are fairly cheap.


Never had a RAID controller die?



I've had plenty die, but the number of HW RAID chips that have died on
me is much, much lower than the number of times I've had software RAID
fail.  Plus HW RAID chips allow for full-disk encryption, which is far
more important to me than worrying about a system going down due to a
failed disk (I keep backups anyway).

But then, for the most part, I don't bother with RAID in any form and
just opt for redundant systems instead.  carp + rsync on cheap boxes has
provided a much more stable platform than trying to do component-level
redundancy.




Intel Celeron SoC support

2018-11-18 Thread Heppler, J. Scott

I'm running amd64-current on an ASRock J3355M and recall a similar issue
installing from a USB thumb drive.  My suspicion was that the BIOS
treated the drive as an unknown input device, like a keyboard or mouse.

I was able to install from a DVD/CD drive.  If you do not have one, you
may be able to do a PXE install, or disable the legacy USB keyboard/mouse
settings in the BIOS.

The other issue I had was frequent lockups due to buggy C-state power
savings.  It works fine with the BIOS setting C-state=1.

On 2018-11-14, Andrew Lemin wrote:



Hi,

I am running an ASRock J4105B-ITX board and wanting to run OpenBSD on this.
https://www.asrock.com/MB/Intel/J4105B-ITX/index.asp#BIOS

It boots up, and at the 'boot>' prompt I can use the keyboard fine.

However, after it boots, the keyboard stops working and no disks are
found by the installer (I used auto_install to send test commands).
From what I can work out, it appears there is no chipset support for
the Intel Celeron J4105 CPU.

To confirm the hardware works fine and it is just OpenBSD that is not
working, I installed Linux and have included the dmesg below (from Linux).
I cannot run a dmesg from the OpenBSD installer as I cannot use the
keyboard etc.

Will support come for this SoC architecture? Or am I better off selling
this board?

I think it's a Gemini Lake SoC chipset.


--
J. Scott Heppler



Re: amd64: installboot on RAID 1 CRYPTO

2018-11-18 Thread Stefan Sperling
On Sun, Nov 18, 2018 at 04:38:06PM +0100, Martin Sukany wrote:
> Hi,
> 
> probably I'm overlooking something ...
> 
> I have the following disk layout:
> sd0, sd1 - physical drives
> sd2 - RAID 1 array with only an "a" partition, on which the CRYPTO device is created,
> sd3 - used as "connection point" for the crypted device.
> 
> So, finally the system is installed on sd3X partitions.
> 
> The problem comes when I want to boot.  I tried
> installboot sd2 /usr/mdec/biosboot /usr/mdec/boot
> 
> After reboot I see the bootloader prompt but am not able to boot

softraid does not support nested disciplines.
What you're seeing is a side-effect of this.

To make your use case work, somebody would need to write a
RAID1C (raid1 + crypto in one) discipline, or make nested
disciplines work somehow.
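For reference, a supported single-layer layout looks roughly like this
(a sketch only; the device names sd0/sd2 are assumptions, and this gives
encryption without the RAID1 mirroring the original poster wanted):

```shell
# Build one CRYPTO discipline directly on a RAID-type partition,
# with no softraid volume nested underneath it.
bioctl -c C -l sd0a softraid0   # prompts for a passphrase and
                                # attaches the volume (e.g. as sd2)

# Boot blocks go on the *physical* disk; the bootloader then prompts
# for the passphrase and unlocks the volume at boot time.
installboot sd0 /usr/mdec/biosboot /usr/mdec/boot
```

The failure in the original post is the nesting itself: the bootloader
can unlock one softraid layer, but not a CRYPTO volume stacked on top
of a RAID1 volume.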




Re: OpenBSD migration

2018-11-18 Thread Martin Sukany

Thanks Guys,

I decided to go through a fresh installation ...

M>

On 11/18/18 3:23 PM, Mitchell Riedstra wrote:

Hi Martin,

On Sat, Nov 17, 2018 at 3:18 PM Martin Sukany  wrote:

I want to migrate OpenBSD 6.4 (stable) from VM to bare metal. I see, as
usual, two options:

1) install everything from scratch
2) create some flashimage (I did such thing on Solaris few years ago)
and apply the image on new hw.

OpenBSD is in many ways just files on a disk, and it's possible to
migrate from a VM to bare metal with a dump, tarball, rsync or similar.
This will also require a somewhat in-depth understanding of the boot
process, setting up the fstab properly, and perhaps other OpenBSD
specific things I do not recall at this time.

It's certainly possible to get this understanding by reading the
installer scripts.  I've had to do this on FreeBSD and Linux to migrate
between hosting providers w/o downtime.  It's painful and filled with
some trial and error.  For simple setups it often takes longer than a
re-install.

If downtime isn't a major concern just back-up the important things
and re-install.

As others have mentioned, getting the list of installed packages is
doable, and even mentioned in the FAQ:

https://www.openbsd.org/faq/faq15.html#PkgDup

I hope this helps!

--
Mitch


--
Martin Sukany
UNIX Engineer - Solaris / Linux / OpenBSD L3 specialist
www.sukany.cz



amd64: installboot on RAID 1 CRYPTO

2018-11-18 Thread Martin Sukany

Hi,

probably I'm overlooking something ...

I have the following disk layout:
sd0, sd1 - physical drives
sd2 - RAID 1 array with only an "a" partition, on which the CRYPTO
device is created,

sd3 - used as "connection point" for the crypted device.

So, finally the system is installed on sd3X partitions.

The problem comes when I want to boot.  I tried
installboot sd2 /usr/mdec/biosboot /usr/mdec/boot

After reboot I see the bootloader prompt but am not able to boot; it
screams:

can't find sr0a:/bsd kernel

If I understand it correctly, I'm booting from the RAID now but I'm not
able to decrypt the CRYPTO device.


NOTE: Using a passphrase, not a key, to encrypt the CRYPTO device.

I also tried installing the bootloader to sd2a, but without success ...


Any ideas?

Thanks

M>



Re: OpenBSD migration

2018-11-18 Thread Mitchell Riedstra
Hi Martin,

On Sat, Nov 17, 2018 at 3:18 PM Martin Sukany  wrote:
> I want to migrate OpenBSD 6.4 (stable) from VM to bare metal. I see, as
> usual, two options:
>
> 1) install everything from scratch
> 2) create some flashimage (I did such thing on Solaris few years ago)
> and apply the image on new hw.

OpenBSD is in many ways just files on a disk, and it's possible to
migrate from a VM to bare metal with a dump, tarball, rsync or similar.
This will also require a somewhat in-depth understanding of the boot
process, setting up the fstab properly, and perhaps other OpenBSD
specific things I do not recall at this time.

It's certainly possible to get this understanding by reading the
installer scripts.  I've had to do this on FreeBSD and Linux to migrate
between hosting providers w/o downtime.  It's painful and filled with
some trial and error.  For simple setups it often takes longer than a
re-install.

If downtime isn't a major concern just back-up the important things
and re-install.

As others have mentioned, getting the list of installed packages is
doable, and even mentioned in the FAQ:

https://www.openbsd.org/faq/faq15.html#PkgDup
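The FAQ procedure amounts to the following (a sketch; `instlist` is an
arbitrary filename):

```shell
# On the old system: record manually installed packages, names only.
pkg_info -mz > instlist

# Copy instlist to the new install, then reinstall that set;
# dependencies are pulled in automatically.
pkg_add -l instlist
```

Per-package configuration under /etc and /var is not covered by this
and still needs to be carried over by hand.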

I hope this helps!

--
Mitch



Re: Missing LVM (Logical Volume Manager)

2018-11-18 Thread Stuart Henderson
On 2018-11-17, Misc User  wrote:
> I concur, software raid is a bug, not a feature, especially since, if
> you truly need RAID, hardware cards are fairly cheap.

Never had a RAID controller die?




Re: A small newfs puzzle.

2018-11-18 Thread Otto Moerbeek
On Sun, Nov 18, 2018 at 09:37:53AM +0100, Otto Moerbeek wrote:

> On Sat, Nov 17, 2018 at 07:55:33PM -0500, R. Clayton wrote:
> 
> > I'm on this
> > 
> >   # uname -a
> >   OpenBSD AngkorWat.rclayton.net 6.4 GENERIC.MP#364 amd64
> > 
> >   # 
> > 
> > and I'm trying to write some file systems on this
> > 
> >   # disklabel -p g sd1
> >   # /dev/rsd1c:
> >   type: SCSI
> >   disk: SCSI disk
> >   label: Rugged FW USB3  
> >   duid: 7e82b7f3472419e3
> >   flags:
> >   bytes/sector: 512
> >   sectors/track: 63
> >   tracks/cylinder: 255
> >   sectors/cylinder: 16065
> >   cylinders: 243201
> >   total sectors: 3907029168 # total bytes: 1863.0G
> >   boundstart: 0
> ^
> You didn't run fdisk -i on the disk, right?
> 
>   -Otto

Some more explanation:

After a successful run, newfs writes some meta-info to the disklabel.
This information is used, in case of damage to the superblock, to find
the alternate superblocks.  If you do not run fdisk -i before labelling
on an arch that requires it, updating the label fails and you will be
missing some info needed by fsck in case of disk trouble.
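In other words, the intended order is roughly the following (a sketch;
sd1 is taken from the thread, and the partition letters are examples):

```shell
fdisk -iy sd1      # initialize the MBR so the disklabel has a home
disklabel -E sd1   # lay out the i/j/k partitions interactively
newfs sd1i         # newfs can now record the backup-superblock info
                   # in the label instead of failing the WDINFO ioctl
```

Skipping the fdisk step is what produced the "can't rewrite disk label"
error shown later in the thread.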

-Otto



Re: A small newfs puzzle.

2018-11-18 Thread Otto Moerbeek
On Sat, Nov 17, 2018 at 07:55:33PM -0500, R. Clayton wrote:

> I'm on this
> 
>   # uname -a
>   OpenBSD AngkorWat.rclayton.net 6.4 GENERIC.MP#364 amd64
> 
>   # 
> 
> and I'm trying to write some file systems on this
> 
>   # disklabel -p g sd1
>   # /dev/rsd1c:
>   type: SCSI
>   disk: SCSI disk
>   label: Rugged FW USB3  
>   duid: 7e82b7f3472419e3
>   flags:
>   bytes/sector: 512
>   sectors/track: 63
>   tracks/cylinder: 255
>   sectors/cylinder: 16065
>   cylinders: 243201
>   total sectors: 3907029168 # total bytes: 1863.0G
>   boundstart: 0
^
You didn't run fdisk -i on the disk, right?

-Otto


>   boundend: 3907029168
>   drivedata: 0 
>   16 partitions:
>   #size   offset  fstype [fsize bsize   cpg]
> c:  1863.0G0  unused
> i:   600.0G  128  4.2BSD   8192 65536 1
> j:   600.0G   1258291328  4.2BSD   8192 65536 1
> k:   663.0G   2516582528  4.2BSD   8192 65536 1
> 
>   # 
> 
> so I do this
> 
>   # newfs sd1i
>  
>   /dev/rsd1i: 614399.9MB in 1258291072 sectors of 512 bytes   
>   
>   189 cylinder groups of 3264.88MB, 52238 blocks, 104960 inodes each  
>   
>   super-block backups (for fsck -b #) at: 
>   
>128, 6686592, 13373056, 20059520, 26745984, 33432448, 40118912, 46805376, 
> 53491840,  
>60178304, 66864768, 73551232, 80237696, 86924160, 93610624, 100297088, 
> 106983552,
>113670016, 120356480, 127042944, 133729408, 140415872, 147102336, 
> 153788800, 160475264,  
>167161728, 173848192, 180534656, 187221120, 193907584, 200594048, 
> 207280512, 213966976,  
>220653440, 227339904, 234026368, 240712832, 247399296, 254085760, 
> 260772224, 267458688,  
>274145152, 280831616, 287518080, 294204544, 300891008, 307577472, 
> 314263936, 320950400,  
>327636864, 334323328, 341009792, 347696256, 354382720, 361069184, 
> 367755648, 374442112,  
>381128576, 387815040, 394501504, 401187968, 407874432, 414560896, 
> 421247360, 427933824,  
>434620288, 441306752, 447993216, 454679680, 461366144, 468052608, 
> 474739072, 481425536,  
>488112000, 494798464, 501484928, 508171392, 514857856, 521544320, 
> 528230784, 534917248,
>541603712, 548290176, 554976640, 561663104, 568349568, 575036032, 
> 581722496, 588408960,
>595095424, 601781888, 608468352, 615154816, 621841280, 628527744, 
> 635214208, 641900672,
>648587136, 655273600, 661960064, 668646528, 675332992, 682019456, 
> 688705920, 695392384,
>702078848, 708765312, 715451776, 722138240, 728824704, 735511168, 
> 742197632, 748884096,
>755570560, 762257024, 768943488, 775629952, 782316416, 789002880, 
> 795689344, 802375808,
>809062272, 815748736, 822435200, 829121664, 835808128, 842494592, 
> 849181056, 855867520,
>862553984, 869240448, 875926912, 882613376, 889299840, 895986304, 
> 902672768, 909359232,
>916045696, 922732160, 929418624, 936105088, 942791552, 949478016, 
> 956164480, 962850944,
>969537408, 976223872, 982910336, 989596800, 996283264, 1002969728, 
> 1009656192,
>1016342656, 1023029120, 1029715584, 1036402048, 1043088512, 1049774976, 
> 1056461440,
>1063147904, 1069834368, 1076520832, 1083207296, 1089893760, 1096580224, 
> 1103266688,
>1109953152, 1116639616, 1123326080, 1130012544, 1136699008, 1143385472, 
> 1150071936,
>1156758400, 1163444864, 1170131328, 1176817792, 1183504256, 1190190720, 
> 1196877184,
>1203563648, 1210250112, 1216936576, 1223623040, 1230309504, 1236995968, 
> 1243682432,
>1250368896, 1257055360,
>   newfs: ioctl (WDINFO): Input/output error
>   newfs: /dev/rsd1i: can't rewrite disk label
> 
>   #
> 
> and that doesn't look too cool, so I try this
> 
>   # fsck /dev/sd1i
>   ** /dev/rsd1i
>   ** File system is clean; not checking
> 
>   #
> 
> Huh.  But newfs is cheap, so I try it again
> 
>   # newfs sd1i
>   /dev/rsd1i: 614399.9MB in 1258291072 sectors of 512 bytes
>   189 cylinder groups of 3264.88MB, 52238 blocks, 104960 inodes each
>   super-block backups (for fsck -b #) at:
>128, 6686592, 13373056, 20059520, 26745984, 33432448, 40118912, 46805376, 
> 53491840,
>60178304, 66864768, 73551232, 80237696, 86924160, 93610624, 100297088, 
> 106983552,
>113670016, 120356480, 127042944, 133729408, 140415872, 147102336, 
> 153788800, 160475264,
>167161728, 173848192, 180534656, 187221120, 193907584, 200594048, 
> 207280512, 213966976,
>220653440, 227339904, 234026368, 240712832, 247399296, 254085760, 
> 260772224, 267458688,
>274145152, 280831616, 287518080, 294204544, 300891008, 307577472, 
> 314263936, 320950400,
>327636864, 334323328, 341009792, 347696256, 354382720, 361069184, 
> 367755648, 374442112,
>381128576, 387815040, 394501504, 401187968, 407874432, 414560896,