vlan autoconf fails to configure at boot

2022-04-29 Thread George Morgan
I created a hostname.vlan10 file which has a single line:

inet autoconf parent vge0 vnetid 10 lladdr ...

At boot the interface fails to configure, but after boot I can log in to the 
console and run "doas sh /etc/netstart" and the interface will configure.

What am I doing wrong?  Do I need to add something to rc.conf.local to force 
the parent to configure first?  The parent (vge0) has a static IPv4 address.
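If boot-time ordering is the issue, one workaround sketch (untested; it assumes the problem really is that vge0 is not yet configured when the vlan is brought up, and relies on /etc/rc.local running after netstart in the standard rc sequence) is to re-run netstart for just the vlan interface late in boot:

```shell
# /etc/rc.local -- hypothetical workaround, not a confirmed fix:
# re-configure only the vlan once the parent vge0 already has its
# static address. netstart(8) accepts interface names as arguments.
sh /etc/netstart vlan10
```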

-- 
  George Morgan
  gmor...@fastmail.fm



Re: disk geometry issues when trying to set up encrypted partition

2010-06-17 Thread George Morgan

Quoting Otto Moerbeek o...@drijf.net:


On Wed, Jun 16, 2010 at 05:45:40PM -0400, George Morgan wrote:


Quoting Harry Palmer tumblew...@fast-mail.org:

Hi there.

I'm fairly new to OpenBSD and I'm hoping someone with a better
understanding than me of how its disk handling works can help.

Beginning my effort to encrypt a 300GB drive in a 64bit Ultrasparc,
I followed these initial steps:

1. used disklabel to create a single slice a on the drive

2. made a file system with newfs (is it necessary to have so many
   backup superblocks?)

3. mounted sd2a on /home/cy and touched it with an empty file
 /home/cy/cryptfile

4. zeroed out the file (and effectively the drive) with
 dd if=/dev/zero of=/home/cy/cryptfile bs=512


Here's the (eventual!) output of (4):

 /home/cy: write failed, file system is full
 dd: /home/cy/cryptfile: No space left on device
 576520353+0 records in
 576520352+0 records out
 295178420224 bytes transferred in 19810.722 secs (14899932 bytes/sec)



Now I have:

 # disklabel sd2a
 # /dev/rsd2a:
 type: SCSI
 disk: SCSI disk
 label: MAW3300NC
 flags: vendor
 bytes/sector: 512
 sectors/track: 930
 tracks/cylinder: 8
 sectors/cylinder: 7440
 cylinders: 13217
 total sectors: 585937500
 rpm: 10025
 interleave: 1
 boundstart: 0
 boundend: 585937500
 drivedata: 0

 16 partitions:
 #                size   offset  fstype [fsize bsize  cpg]
   a:        585937200        0  4.2BSD   2048 16384    1
   c:        585937500        0  unused


and:

 # ls -l /home/cy
 total 576661216
 -rw-r--r--  1 root  wheel  295178420224 Jun 16 03:39 cryptfile


and:

 # df -h
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/sd0a     1007M   44.8M    912M     5%    /
 /dev/sd0k      247G    2.0K    235G     0%    /home
 /dev/sd0d      3.9G    6.0K    3.7G     0%    /tmp
 /dev/sd0f      2.0G    559M    1.3G    29%    /usr
 /dev/sd0g     1007M    162M    795M    17%    /usr/X11R6
 /dev/sd0h      5.9G    212K    5.6G     0%    /usr/local
 /dev/sd0j      2.0G    2.0K    1.9G     0%    /usr/obj
 /dev/sd0i      2.0G    2.0K    1.9G     0%    /usr/src
 /dev/sd0e      7.9G    7.7M    7.5G     0%    /var
 /dev/sd2a      275G    275G  -13.7G   105%    /home/cy



I have no understanding of this. I've never seen a df output
that tells me I'm using 13GB more space than the drive is
capable of holding.

I ask here because there's obviously potential for me to lose
data somewhere down the line. I'll be grateful if anyone can
explain where I've gone wrong.

I've seen the greater than 100% full on a UFS? filesystem before when
you exceed the size of the filesystem.  There is space in the
filesystem for lost+found and all those superblocks? you were
complaining about that can get overwritten if you write too much to a
partition.


Space for superblocks and other metadata is subtracted from the available
blocks. lost+found is an ordinary directory.



So setting up your dd to actually stop before you overfill the
filesystem is what you need to do (using bs=# count=#; you can get
the block count and block size from df, run without -k or -h, before
you start initializing your file).

I'm sure the fine people on these lists will correct me if I'm wrong
in my assumptions...  :-)


You are wrong, there's no such thing as overfilling a filesystem. It's
just the 5% reserved for root. An ordinary user runs out earlier. It's
in the FAQ.
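A quick back-of-the-envelope calculation shows where the 105% comes from (assumption: BSD df computes Capacity as used / (used + avail), where avail already has the minfree reserve, 5% by default, subtracted; the block count below is illustrative, not taken from the disklabel in the thread):

```shell
#!/bin/sh
# Illustrative numbers: once root writes past the 5% minfree
# reserve, avail goes negative and Capacity exceeds 100%.
total=285984                        # data blocks in the filesystem
reserved=$((total * 5 / 100))       # minfree, 5% by default
used=$total                         # root filled it to ENOSPC
avail=$((total - reserved - used))  # negative once past the reserve
capacity=$((100 * used / (used + avail)))
echo "avail=$avail capacity=${capacity}%"   # avail=-14299 capacity=105%
```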



Sorry for the misinformation.  Thanks for the education.

George Morgan



Re: disk geometry issues when trying to set up encrypted partition

2010-06-16 Thread George Morgan

Quoting Harry Palmer tumblew...@fast-mail.org:


Hi there.

I'm fairly new to OpenBSD and I'm hoping someone with a better
understanding than me of how its disk handling works can help.

Beginning my effort to encrypt a 300GB drive in a 64bit Ultrasparc,
I followed these initial steps:

1. used disklabel to create a single slice a on the drive

2. made a file system with newfs (is it necessary to have so many
   backup superblocks?)

3. mounted sd2a on /home/cy and touched it with an empty file
 /home/cy/cryptfile

4. zeroed out the file (and effectively the drive) with
 dd if=/dev/zero of=/home/cy/cryptfile bs=512


Here's the (eventual!) output of (4):

 /home/cy: write failed, file system is full
 dd: /home/cy/cryptfile: No space left on device
 576520353+0 records in
 576520352+0 records out
 295178420224 bytes transferred in 19810.722 secs (14899932 bytes/sec)



Now I have:

 # disklabel sd2a
 # /dev/rsd2a:
 type: SCSI
 disk: SCSI disk
 label: MAW3300NC
 flags: vendor
 bytes/sector: 512
 sectors/track: 930
 tracks/cylinder: 8
 sectors/cylinder: 7440
 cylinders: 13217
 total sectors: 585937500
 rpm: 10025
 interleave: 1
 boundstart: 0
 boundend: 585937500
 drivedata: 0

 16 partitions:
 #                size   offset  fstype [fsize bsize  cpg]
   a:        585937200        0  4.2BSD   2048 16384    1
   c:        585937500        0  unused


and:

 # ls -l /home/cy
 total 576661216
 -rw-r--r--  1 root  wheel  295178420224 Jun 16 03:39 cryptfile


and:

 # df -h
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/sd0a     1007M   44.8M    912M     5%    /
 /dev/sd0k      247G    2.0K    235G     0%    /home
 /dev/sd0d      3.9G    6.0K    3.7G     0%    /tmp
 /dev/sd0f      2.0G    559M    1.3G    29%    /usr
 /dev/sd0g     1007M    162M    795M    17%    /usr/X11R6
 /dev/sd0h      5.9G    212K    5.6G     0%    /usr/local
 /dev/sd0j      2.0G    2.0K    1.9G     0%    /usr/obj
 /dev/sd0i      2.0G    2.0K    1.9G     0%    /usr/src
 /dev/sd0e      7.9G    7.7M    7.5G     0%    /var
 /dev/sd2a      275G    275G  -13.7G   105%    /home/cy



I have no understanding of this. I've never seen a df output
that tells me I'm using 13GB more space than the drive is
capable of holding.

I ask here because there's obviously potential for me to lose
data somewhere down the line. I'll be grateful if anyone can
explain where I've gone wrong.


I've seen the greater than 100% full on a UFS? filesystem before when
you exceed the size of the filesystem.  There is space in the
filesystem for lost+found and all those superblocks? you were
complaining about that can get overwritten if you write too much to a
partition.

So setting up your dd to actually stop before you overfill the
filesystem is what you need to do (using bs=# count=#; you can get
the block count and block size from df, run without -k or -h, before
you start initializing your file).
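The suggestion above can be sketched as a small script (assumptions: df without -h or -k reports 512-byte blocks, the OpenBSD default; the mount point defaults to /tmp so the sketch runs anywhere; and it only prints the dd command rather than running it, since actually writing the file is destructive):

```shell
#!/bin/sh
# Print a dd command that stops short of filling the filesystem.
# Assumption: df's Avail column here is in 512-byte blocks.
mnt=${1:-/tmp}
avail=$(df "$mnt" | awk 'NR==2 {print $4}')   # Avail, in blocks
count=$((avail - 1024))                       # leave ~512 KB headroom
echo "dd if=/dev/zero of=$mnt/cryptfile bs=512 count=$count"
```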

I'm sure the fine people on these lists will correct me if I'm wrong
in my assumptions...  :-)

George Morgan