pf icmp redirect question

2014-05-30 Thread Marko Cupać
Hi,

let's say for example I have web server on internal network, and I have
redirected tcp port 80 from firewall to it:

pass in on $ext_if inet proto tcp from any to $pub_web port 80 \
   rdr-to $priv_web

Assuming that $pub_web ip address is used exclusively for web server
access, and no other ports are redirected to other internal addresses,
should I also redirect icmp:

pass in on $ext_if inet proto icmp from any to $pub_web rdr-to $priv_web

Thank you in advance,

-- 
Marko Cupać



Re: 5.5 pf priority

2014-05-30 Thread Henning Brauer
* Paco Esteban p...@onna.be [2014-05-29 12:11]:
 On Thu, 29 May 2014, Marko Cupać wrote:
  On Wed, 28 May 2014 21:40:58 +0200
  Henning Brauer lists-open...@bsws.de wrote:
   I'm pretty damn sure I added the "reset prio if queueing is on" thing.

    yes, in IF_ENQUEUE -> hfsc_enqueue
    m->m_pkthdr.pf.prio = IFQ_MAXPRIO;
  I would like to give priority to certain traffic, for example:
  prio 7: tcp acks
  prio 6: domain
  prio 5: ssh-mgmt, vnc, rdp
  prio 4: web
  prio 3: smtp, imap, pop
  prio 2: ftp, ssh-payload
  prio 1: default/other
  prio 0: p2p
  But I would also like to guarantee minimum bandwidth to low-priority
  traffic (in upper example I would like to avoid ftp coming to a
  grinding halt in moments when higher priority traffic eats up all the
  bandwidth).
  I thought I knew how to achieve this, but now I am not so sure. Is it
  possible with current pf? Any suggestions?
 I'm also interested in this. I thought I was doing it with the example I
 sent but, after Henning's comments ...

let's think it through.
prio really only has a non-negligible effect when you are bandwidth
constrained.
with bandwidth shaping (hfsc underneath), you don't want to overcommit.
thus, you are prioritizing by picking what traffic goes to what queue
and what bandwidth setting those have.
mixing in another prioritization would have zero (or close to zero)
effect.

so giving you an extra prio button there would probably make you feel
better (like in other implementations), but (also like the others)
have no or close to no effect.

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/
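
The guaranteed-minimum part of Marko's question is handled by the queueing
system rather than by prio: in the 5.5 queue syntax, the min keyword
reserves a bandwidth floor for a queue. A hedged sketch (the interface,
queue names and bandwidth figures are invented for illustration):

```pf.conf
# sketch only -- queue names and bandwidth figures are made up
queue main on $ext_if bandwidth 10M
queue  web  parent main bandwidth 5M
queue  mail parent main bandwidth 2M
queue  ftp  parent main bandwidth 2M min 1M   # ftp never starves below 1M
queue  std  parent main bandwidth 1M default

match out on $ext_if proto tcp to port { 80 443 }     set queue web
match out on $ext_if proto tcp to port { 25 110 143 } set queue mail
match out on $ext_if proto tcp to port { 20 21 }      set queue ftp
```

This is consistent with Henning's point above: the prioritization comes
from the queue assignments and their bandwidth settings, not from an
extra prio knob.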



Re: pf icmp redirect question

2014-05-30 Thread Sebastian Benoit
Marko Cupać (marko.cu...@mimar.rs) on 2014.05.30 11:32:14 +0200:
 Hi,
 
 let's say for example I have web server on internal network, and I have
 redirected tcp port 80 from firewall to it:
 
 pass in on $ext_if inet proto tcp from any to $pub_web port 80 \
rdr-to $priv_web

From the wording of your subject, i suspect you somehow think that rdr-to
has something to do with icmp redirects, icmp messages with type 5.

This is not so.

 Assuming that $pub_web ip address is used exclusively for web server
 access, and no other ports are redirected to other internal addresses,
 should I also redirect icmp:
 
 pass in on $ext_if inet proto icmp from any to $pub_web rdr-to $priv_web

No.



encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Ted Unangst
If you are using encrypted vnd (vnconfig -k or -K) you will want to
begin planning your migration strategy.


-- Forwarded message --
From: Ted Unangst t...@cvs.openbsd.org
Date: Fri 2014/05/30 10:14 -06:00
Subject: CVS: cvs.openbsd.org: src
To: source-chan...@cvs.openbsd.org

CVSROOT:	/cvs
Module name:	src
Changes by:	t...@cvs.openbsd.org	2014/05/30 10:14:19

Modified files:
sbin/mount_vnd : mount_vnd.c 

Log message:
WARNING: Encrypted vnd is insecure.
Migrate your data to softraid before 5.7.



Re: encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Robert
On Fri, 30 May 2014 12:19:35 -0400
Ted Unangst t...@tedunangst.com wrote:
 WARNING: Encrypted vnd is insecure.
 Migrate your data to softraid before 5.7.

Will 5.6 softraid support block sizes other than 512 byte?

marc.info/?l=openbsd-misc&m=139524543706370

kind regards,
Robert



Re: 5.5 pf priority

2014-05-30 Thread Giancarlo Razzolini
Em 30-05-2014 08:43, Henning Brauer escreveu:
 * Paco Esteban p...@onna.be [2014-05-29 12:11]:
 On Thu, 29 May 2014, Marko Cupać wrote:
 On Wed, 28 May 2014 21:40:58 +0200
 Henning Brauer lists-open...@bsws.de wrote:
 I'm pretty damn sure I added the "reset prio if queueing is on" thing.

 yes, in IF_ENQUEUE -> hfsc_enqueue
 m->m_pkthdr.pf.prio = IFQ_MAXPRIO;
 I would like to give priority to certain traffic, for example:
 prio 7: tcp acks
 prio 6: domain
 prio 5: ssh-mgmt, vnc, rdp
 prio 4: web
 prio 3: smtp, imap, pop
 prio 2: ftp, ssh-payload
 prio 1: default/other
 prio 0: p2p
 But I would also like to guarantee minimum bandwidth to low-priority
 traffic (in upper example I would like to avoid ftp coming to a
 grinding halt in moments when higher priority traffic eats up all the
 bandwidth).
 I thought I knew how to achieve this, but now I am not so sure. Is it
 possible with current pf? Any suggestions?
 I'm also interested in this. I thought I was doing it with the example I
 sent but, after Henning's comments ...
 let's think it through.
 prio really only has a non-negligible effect when you are bandwidth
 constrained.
 with bandwidth shaping (hfsc underneath), you don't want to overcommit.
 thus, you are prioritizing by picking what traffic goes to what queue
 and what bandwidth setting those have.
 mixing in another prioritization would have zero (or close to zero)
 effect.

 so giving you an extra prio button there would probably make you feel
 better (like in other implementations), but (also like the others)
 have no or close to no effect.

From my experience, if you have an asymmetric link, where your download
rate is bigger than your upload rate, you can see benefits in putting
hfsc in front of it. And, the most benefit seems to be on the upload
side. There are some factors that weigh in, such as router buffers and
network congestion outside of your own network. Speaking of which, I
recently read the CoDel spec: https://en.wikipedia.org/wiki/CoDel. I don't
know if it really helps the bufferbloat problem, but that is another
matter entirely; perhaps Henning could explain better whether or not it
should be put into pf.

Now, when you have a symmetric link with enough bandwidth (10+ MB/s),
which, by the way, depending on the technology used, has little or no
buffer at all, then prio will generally do the job, even with p2p
applications. Just don't forget there is always nat involved, so you
need to prio packets all the way, just as you should with hfsc. I find
that using tags is the most effective way to do so.

Cheers,

-- 
Giancarlo Razzolini
GPG: 4096R/77B981BC
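
A sketch of the tag-based approach Giancarlo describes: classify the
packet once and act on the tag wherever it leaves, so the decision
survives nat and rdr rewriting (interface macros, the port and the VOIP
tag are invented for illustration):

```pf.conf
# classify on the way in: tag VoIP signalling as it enters the LAN side
match in on $int_if proto udp to port 5060 tag VOIP

# act on the tag on the way out, regardless of nat/rdr rewriting
pass out on $ext_if tagged VOIP set prio 6
```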



Re: pf icmp redirect question

2014-05-30 Thread System Administrator
On 30 May 2014 at 13:56, Sebastian Benoit wrote:

 Marko Cupać (marko.cu...@mimar.rs) on 2014.05.30 11:32:14 +0200:
  Hi,
  
  let's say for example I have web server on internal network, and I
  have redirected tcp port 80 from firewall to it:
  
  pass in on $ext_if inet proto tcp from any to $pub_web port 80 \
 rdr-to $priv_web
 
 From the wording of your subject, i suspect you somehow think that rdr-to
 has something to do with icmp redirects, icmp messages with type 5.
 
 This is not so.

This is correct.

  Assuming that $pub_web ip address is used exclusively for web server
  access, and no other ports are redirected to other internal addresses,
  should I also redirect icmp:
  
  pass in on $ext_if inet proto icmp from any to $pub_web rdr-to
  $priv_web
 
 No.

This is not entirely correct -- you *may* want to have the above 
redirect *if* you want external users to be able to ping the real web 
server to ascertain that it is up, in which case you probably want to 
limit icmp types to echo-request/echo-reply (you certainly do NOT want 
to pass through the icmp redirect or the many other routing controls).
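
The narrower rule suggested above could look like this; echoreq is pf's
keyword for ICMP echo requests, and the echo replies ride back on the
state entry (macro names follow the thread; an untested sketch):

```pf.conf
# allow external pings of the public address, redirected to the real server
pass in on $ext_if inet proto icmp from any to $pub_web \
    icmp-type echoreq rdr-to $priv_web
```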



Re: encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Chris Cappuccio
Robert [info...@die-optimisten.net] wrote:
 On Fri, 30 May 2014 12:19:35 -0400
 Ted Unangst t...@tedunangst.com wrote:
  WARNING: Encrypted vnd is insecure.
  Migrate your data to softraid before 5.7.
 
 Will 5.6 softraid support block sizes other than 512 byte?
 
 marc.info/?l=openbsd-miscm=139524543706370

There are no plans for it right now.



Re: 5.5 pf priority

2014-05-30 Thread Adam Thompson
On 2014-05-30 12:41, Giancarlo Razzolini wrote:

 From my experience, if you have an asymmetric link, where your download
 rate is bigger than your upload rate, you can see benefits in putting
 hfsc in front of it. And, the most benefit seems to be on the upload
 side. There are some factors that weigh in, such as router buffers and
 network congestion outside of your own network. Speaking of which, I
 recently read the CoDel spec: https://en.wikipedia.org/wiki/CoDel. I
 don't know if it really helps the bufferbloat problem, but that is
 another matter entirely; perhaps Henning could explain better whether
 or not it should be put into pf.

 Now, when you have a symmetric link with enough bandwidth (10+ MB/s),
 which, by the way, depending on the technology used, has little or no
 buffer at all, then prio will generally do the job, even with p2p
 applications. Just don't forget there is always nat involved, so you
 need to prio packets all the way, just as you should with hfsc. I find
 that using tags is the most effective way to do so.

 Cheers,

Provably, it's not just the most benefit from limiting uploads, it's the
only benefit. Limiting inbound traffic is pointless.

By the time an inbound packet arrives at the ethernet interface of your
pfSense box, it's far too late to bother policing it.

The only time QoS actually does anything is when there is resource
contention. By definition, resource contention does not occur on the
receiving end - either you have the horsepower to receive and process
all the packets or you don't; adding extra CPU steps on every received
packet will not magically allow you to receive more data if your system
cannot handle the IRQ load or the bandwidth, or doesn't have enough
mbufs, or is otherwise underpowered.

Where QoS does its magic is when there is too little bandwidth (or too
few timeslots) to egress a packet *immediately*. If the interface is
idle, a higher-priority packet will be sent just as fast as the
lower-priority packet.

There are two ways to influence the behaviour of a downstream device:
tagging (whether DSCP or 802.1p), and rate-limiting.

If you know the next device in stream (a DSL modem, say) can only upload
at 768Kbit/sec, and you very carefully only ever send it 750Kbit/sec of
traffic, you remain in control of what packets get sent out first. As
soon as you start filling its buffer (say, by allowing bursts of
10Mbit/sec traffic), the modem is now in control of what packets to send
first, and you typically have no idea if it's obeying your 802.1p or
DSCP markings.

HFSC does a good job of rate-limiting (the 2nd case) so that the dumber
device never has to make any decisions of its own.

In the meantime, please stop applying rate-limiting on inbound packets -
it's pointless. If you have a resource-constrained LAN or DMZ interface
(e.g. 1Mbps WiFi or maybe Bluetooth PAN, or maybe you have a 100Mbit
internet connection but only a 10Mbit LAN?) then the way to solve that
is to apply QoS policies on the outbound packets as they leave the
router and enter the slower network.

Generally, QoS classification (i.e. tagging) should happen on ingress,
and policing (i.e. rate-limiting) should happen on egress.

If you don't agree with this, please 1) demonstrate that it does make a
difference, and then 2) let's figure out why setting QoS on ingress
makes a difference, because that violates... well... everything. The
theoretical basis for this today is pretty solid; I'm prepared to
believe there are implementation-specific exceptions, but they should
get rooted out and eliminated.

The only general exception I'm aware of currently is where an
intermediate traffic plane cannot handle all the ingress traffic flowing
over it, in which case QoS more or less consists of selectively dropping
on ingress, not rate-limiting. This is bad architecture. Even cheap
switches are non-blocking inside the switch fabric nowadays. However,
this is why Cisco still documents QoS rate-limiting *on ingress* for
many of their large L2/L3 switching platforms... (RED/WRED can be an
example of this in some architectures.) This exception does not apply to
any pf implementations that I know of.

The best explanation of this I've seen is in O'Reilly's Juniper MX
Series book, which spends a ridiculous amount of time (4 chapters, IIRC)
explaining how Juniper MX routers implement queuing theory in hardware.

I am aware that this message contains a very shallow treatment of QoS
theory; there are numerous edge cases where complex policies on ingress
are warranted. But if you're just building a pf policy, setting inbound
VoIP traffic to a high priority does NOT magically make your upstream
provider send you VoIP packets with high priority - you don't control
their behaviour from your local pf.conf!

-Adam Thompson
 athom...@athompso.net 
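
Adam's DSL-modem scenario, staying just under the 768Kbit/s uplink so
the modem's buffer never fills, translates roughly into the 5.5 queue
syntax as follows (interface macro, queue names and figures follow his
example; this is an untested sketch):

```pf.conf
# cap total egress slightly below the modem's 768Kbit/s uplink
queue uplink on $ext_if bandwidth 750K max 750K
queue  prio_q parent uplink bandwidth 250K
queue  bulk   parent uplink bandwidth 500K default

# example classification: keep interactive ssh out of the bulk queue
match out on $ext_if proto tcp to port 22 set queue prio_q
```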



Re: 5.5 pf priority

2014-05-30 Thread Adam Thompson
My apologies, I have no idea why roundcube decided to format the 
plain-text version of my last message that way.

-Adam



Re: pf icmp redirect question

2014-05-30 Thread André Lucas
On 30 May 2014 19:13, System Administrator ad...@bitwise.net wrote:

 On 30 May 2014 at 13:56, Sebastian Benoit wrote:

  Marko Cupać (marko.cu...@mimar.rs) on 2014.05.30 11:32:14 +0200: 
 Assuming that $pub_web ip address is used exclusively for web server
   access, and no other ports are redirected to other internal addresses,
   should I also redirect icmp:
  
   pass in on $ext_if inet proto icmp from any to $pub_web rdr-to
   $priv_web
 
  No.

 This is not entirely correct -- you *may* want to have the above
 redirect *if* you want external users to be able to ping the real web
 server to ascertain that it is up, in which case you probably want to
 limit icmp types to echo-request/echo-reply (you certainly do NOT want
 to pass through the icmp redirect or the many other routing controls).


Or if you're concerned about the ICMP messages related to PMTUd,
they're automatically forwarded as part of the connection state
tracking, IIRC.

-André



Re: encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Robert
On Fri, 30 May 2014 11:14:40 -0700
Chris Cappuccio ch...@nmedia.net wrote:

 Robert [info...@die-optimisten.net] wrote:
  On Fri, 30 May 2014 12:19:35 -0400
  Ted Unangst t...@tedunangst.com wrote:
   WARNING: Encrypted vnd is insecure.
   Migrate your data to softraid before 5.7.
  
  Will 5.6 softraid support block sizes other than 512 byte?
  
  marc.info/?l=openbsd-misc&m=139524543706370
 
 There are no plans for it right now.

The way I read the original message (and please correct me if this is wrong!)
is that something will happen in 5.7 that will disable encrypted vnd.

Which means that people with recent internal/external HDs, that use 4k blocks, 
will have a problem.

(Some disks allow you to use jumper settings for 512b, but not all external 
ones)



Re: encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Theo de Raadt
  Robert [info...@die-optimisten.net] wrote:
   On Fri, 30 May 2014 12:19:35 -0400
   Ted Unangst t...@tedunangst.com wrote:
WARNING: Encrypted vnd is insecure.
Migrate your data to softraid before 5.7.
   
   Will 5.6 softraid support block sizes other than 512 byte?
   
   marc.info/?l=openbsd-misc&m=139524543706370
  
  There are no plans for it right now.
 
 The way I read the original message (and please correct me if this is 
 wrong!), is that something will happen in 5.7 that will disable encrypted vnd.
 
 Which means that people with recent internal/external HDs, that use 4k 
 blocks, will have a problem.
 
 (Some disks allow you to use jumper settings for 512b, but not all external 
 ones)


Wow, don't know where you got that from.  Sometimes it is just a simple
explanation.



Linux Foundation to fund OpenSSL

2014-05-30 Thread AHLSENGIRARD, EDWARD F CTR USAF AFSOC AFSOC A6/A6OK
This just in:

http://www.theinquirer.net/inquirer/news/2347534/linux-foundation-throws-money-at-openssl-staffing-post-heartbleed



--
Ed Ahlsen-Girard, Contractor (Application Management Services) AFSOC/A6OK
email: edward.ahlsen-girard@hurlburt.af.mil
850-884-2414
DSN: 312-579-2414



Re: 5.5 pf priority

2014-05-30 Thread sven falempin
On Fri, May 30, 2014 at 2:15 PM, Adam Thompson athom...@athompso.net wrote:
 On 2014-05-30 12:41, Giancarlo Razzolini wrote:
[[...]]
 Generally, QoS classification (i.e. tagging) should happen on
 ingress, and policing (i.e. rate-limiting) should happen on egress.
[[...]]


Just curious. Because TCP has flow and congestion control, it should be
possible to reduce the input bandwidth of a tcp connection even without
controlling the previous hop?

Re: 5.5 pf priority

2014-05-30 Thread Adam Thompson

On 14-05-30 05:07 PM, sven falempin wrote:
Just curious. Because TCP has flow and congestion control, it should be 
possible to reduce the input bandwidth of a tcp connection even without 
controlling the previous hop? 
Yes, but consider a router with 3 interfaces: WAN, LAN1 and LAN2. Let us 
assume WAN is a 100Mbps circuit, LAN1 is a gigabit ethernet connection, 
and LAN2 is only 10Mbps - perhaps it's an 802.11B WiFi card in AP mode, 
or perhaps it's a circuit to a branch office; it doesn't matter except 
that it's noticeably slower.


I will ignore NAT for simplicity; AFAIK all the concepts remain valid 
regardless.


Now, say you want to reserve some portion of bandwidth for SSH (tcp port 
22, to make things easy).  Perhaps you've decided you want to allow up 
to 80Mbps for SSH traffic on the WAN.  (This is a bad policy, and I'll 
now explain why.)

We can easily control packets outbound to WAN; this is the common use case.
Let's say we did the same thing to packets arriving on the WAN 
interface, and that's where we cap SSH at 80Mbps.
Note that this does not prevent the entire 100Mbps pipe filling up with 
SSH packets - although, as you point out, since that is a TCP protocol, 
dropping 20% of the packets will fairly quickly cause that TCP session 
to stop saturating the link... but it can still happen briefly.
What's worse, though, is that although the WAN is slower than LAN1, 
implying that we can (generally) always egress packets to LAN1 as they 
arrive on the WAN, what do we do with LAN2?  Force-feed 80Mbps onto a 
10Mbps medium somehow?  That's impossible.  What happens there is that 
even without any policing (rate-limiting), we'll be dropping packets.  
Or at least we will if we're pushing more than 10Mbps...


If we instead said the policy was 80% of the connection may be used for 
SSH traffic, we would apply rules that apply to packets outbound to 
each interface, and each rule would limit traffic to 80% of that 
interface's bandwidth.  The actual traffic flows seen by the client now 
match our (more flexible) policy.


It is true that in a perfectly symmetric situation, assuming 100% 
utilization, it doesn't matter where you drop packets or where you 
rate-limit flows; nearly the same effect will occur no matter what.


My point, which I realize I've now addressed from three different angles 
without a unifying overview, is that there's no point in limiting on 
ingress: the packets are already there whether you choose to forward 
them or not.
In the case of TCP, dropping packets on ingress will work, but is like 
using a sledgehammer to kill a fly - there are much more subtle ways to 
do it that don't break everything nearby.
In the case of UDP, dropping packets may be completely pointless, 
depending on the protocol, or it might have a similar effect as TCP.


In either case, applying classification on ingress *for every interface* 
and policing on egress *for every interface* will (generally) give you 
the flexibility you need without painting yourself into a corner.


I'm trying to figure out how to formulate my old garden-hose analogy, 
but apparently I've forgotten how to make it sound meaningful - stay tuned.


--
-Adam Thompson
 athom...@athompso.net
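
Adam's "80% of each link" policy, expressed as egress rules per
interface rather than a single ingress cap, might be sketched like this
in the 5.5 queue syntax (interface macros and figures follow his
example; untested):

```pf.conf
# WAN: 100Mbps circuit, ssh may use up to 80% on egress
queue wan_all on $wan_if bandwidth 100M
queue  wan_ssh parent wan_all bandwidth 80M max 80M
queue  wan_def parent wan_all bandwidth 20M default

# LAN2: the 10Mbps link gets its own 80% ceiling
queue lan2_all on $lan2_if bandwidth 10M
queue  lan2_ssh parent lan2_all bandwidth 8M max 8M
queue  lan2_def parent lan2_all bandwidth 2M default

match out on $wan_if  proto tcp to port 22 set queue wan_ssh
match out on $lan2_if proto tcp to port 22 set queue lan2_ssh
```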



Re: hibernate fails to restore on i386

2014-05-30 Thread Mike Larkin
On Thu, May 29, 2014 at 11:51:07PM -0400, Josh Grosse wrote:
 I use ZZZ rarely, so I have no clue when the regression -- if it is a 
 regression -- began.  Clue sticks welcome, as well as guidance for producing
 more useful diagnostics.
 
 Symptom: ZZZ apparently saves and shuts down.  On reboot, it appears to
 unpack the image, then abandon the image and reboot.

I've got a couple of diffs pending for corruption during unpack during resume.
I hope to get those in in the next few days.

 
 
 The DEBUG kernel used here adds -g and HIB_DEBUG to GENERIC.MP:
 
 ---
 
 --- GENERIC.MP	Fri Dec 26 12:10:45 2008
 +++ DEBUG Thu May 29 23:12:18 2014
 @@ -6,5 +6,7 @@
  include arch/i386/conf/GENERIC
  
  option   MULTIPROCESSOR  # Multiple processor support
 +makeoptions  DEBUG=-g 
 +option   HIB_DEBUG
  
  cpu* at mainbus?
 
 ---
 
 I added -Wno-error to brute force past formatting errors for a DPRINT
 in kern/subr_hibernate.c at line 1011 where %p fails to manage paddr_t.

I fixed the format specifier. Thanks.

 
 This kernel is sufficiently up-to-date to include today's r 1.91 of 
 kern/subr_hibernate.
 
 ---
 
 OpenBSD 5.5-current (DEBUG) #5: Thu May 29 23:30:50 EDT 2014
 j...@netbook.jggimi.homeip.net:/usr/src/sys/arch/i386/compile/DEBUG
 cpu0: Intel(R) Atom(TM) CPU N270 @ 1.60GHz (GenuineIntel 686-class) 1.60 GHz
 cpu0: 
 FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,NXE,SSE3,DTES64,MWAIT,DS-CPL,EST,TM2,SSSE3,xTPR,PDCM,MOVBE,LAHF,PERF
 real mem  = 1064464384 (1015MB)
 avail mem = 1034600448 (986MB)
 mpath0 at root
 scsibus0 at mpath0: 256 targets
 mainbus0 at root
 bios0 at mainbus0: AT/286+ BIOS, date 04/18/11, BIOS32 rev. 0 @ 0xf0010, 
 SMBIOS rev. 2.5 @ 0xf0720 (30 entries)
 bios0: vendor American Megatrends Inc. version 1601 date 04/18/2011
 bios0: ASUSTeK Computer INC. 1005HA
 acpi0 at bios0: rev 0
 acpi0: sleep states S0 S3 S4 S5
 acpi0: tables DSDT FACP APIC MCFG OEMB HPET SSDT
 acpi0: wakeup devices P0P2(S4) P0P1(S4) HDAC(S4) P0P4(S4) P0P8(S4) P0P5(S4) 
 P0P7(S4) P0P9(S4) P0P6(S4)
 acpitimer0 at acpi0: 3579545 Hz, 24 bits
 acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
 cpu0 at mainbus0: apid 0 (boot processor)
 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
 cpu0: apic clock running at 133MHz
 cpu0: mwait min=64, max=64, C-substates=0.2.2.0.2, IBE
 cpu1 at mainbus0: apid 1 (application processor)
 cpu1: Intel(R) Atom(TM) CPU N270 @ 1.60GHz (GenuineIntel 686-class) 1.60 GHz
 cpu1: 
 FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,NXE,SSE3,DTES64,MWAIT,DS-CPL,EST,TM2,SSSE3,xTPR,PDCM,MOVBE,LAHF,PERF
 ioapic0 at mainbus0: apid 2 pa 0xfec0, version 20, 24 pins
 ioapic0: misconfigured as apic 1, remapped to apid 2
 acpimcfg0 at acpi0 addr 0xe000, bus 0-63
 acpihpet0 at acpi0: 14318179 Hz
 acpiprt0 at acpi0: bus 0 (PCI0)
 acpiprt1 at acpi0: bus 2 (P0P5)
 acpiprt2 at acpi0: bus 1 (P0P7)
 acpiprt3 at acpi0: bus -1 (P0P6)
 acpiec0 at acpi0
 acpicpu0 at acpi0: C2, C1, PSS
 acpicpu1 at acpi0: C2, C1, PSS
 acpitz0 at acpi0: critical temperature is 88 degC
 acpibat0 at acpi0: BAT0 model 1005HA serial   type LION oem ASUS
 acpiac0 at acpi0: AC unit online
 acpiasus0 at acpi0
 acpibtn0 at acpi0: LID_
 acpibtn1 at acpi0: SLPB
 acpibtn2 at acpi0: PWRB
 bios0: ROM list: 0xc/0xec00!
 cpu0: Enhanced SpeedStep 1600 MHz: speeds: 1600, 1333, 1067, 800 MHz
 pci0 at mainbus0 bus 0: configuration mode 1 (bios)
 pchb0 at pci0 dev 0 function 0 Intel 82945GME Host rev 0x03
 vga1 at pci0 dev 2 function 0 Intel 82945GME Video rev 0x03
 intagp0 at vga1
 agp0 at intagp0: aperture at 0xd000, size 0x1000
 inteldrm0 at vga1
 drm0 at inteldrm0
 inteldrm0: 1024x600
 wsdisplay0 at vga1 mux 1: console (std, vt100 emulation)
 wsdisplay0: screen 1-5 added (std, vt100 emulation)
 Intel 82945GM Video rev 0x03 at pci0 dev 2 function 1 not configured
 azalia0 at pci0 dev 27 function 0 Intel 82801GB HD Audio rev 0x02: msi
 azalia0: codecs: Realtek ALC269
 audio0 at azalia0
 ppb0 at pci0 dev 28 function 0 Intel 82801GB PCIE rev 0x02: apic 2 int 16
 pci1 at ppb0 bus 4
 ppb1 at pci0 dev 28 function 1 Intel 82801GB PCIE rev 0x02: apic 2 int 17
 pci2 at ppb1 bus 2
 athn0 at pci2 dev 0 function 0 Atheros AR9285 rev 0x01: apic 2 int 17
 athn0: AR9285 rev 2 (1T1R), ROM rev 13, address 00:25:d3:8a:f6:b4
 ppb2 at pci0 dev 28 function 3 Intel 82801GB PCIE rev 0x02: apic 2 int 19
 pci3 at ppb2 bus 1
 alc0 at pci3 dev 0 function 0 Attansic Technology L2C rev 0xc0: msi, 
 address 90:e6:ba:37:cf:5e
 atphy0 at alc0 phy 0: F1 10/100/1000 PHY, rev. 11
 uhci0 at pci0 dev 29 function 0 Intel 82801GB USB rev 0x02: apic 2 int 23
 uhci1 at pci0 dev 29 function 1 Intel 82801GB USB rev 0x02: apic 2 int 19
 uhci2 at pci0 dev 29 function 2 Intel 82801GB USB rev 0x02: apic 2 int 18
 uhci3 at pci0 dev 29 function 3 Intel 82801GB USB rev 0x02: 

Re: encrypted vnd Fwd: CVS: cvs.openbsd.org: src

2014-05-30 Thread Jonathan Thornburg
In message  http://marc.info/?l=openbsd-misc&m=140146687910205&w=1,
Ted Unangst wrote:
 If you are using encrypted vnd (vnconfig -k or -K) you will want to
 begin planning your migration strategy.
[[...]]
 WARNING: Encrypted vnd is insecure.
 Migrate your data to softraid before 5.7.

Once this transition happens, what will be the right way to achieve
nested crypto volumes?

That is, with present-day OpenBSD I can have the following:

/home is a softraid-crypto filesystem
managed with 'bioctl -c C' via passphrase #1

/home/me/very-secret is a vnd-crypto filesystem
backed by the files  /home/me/very-secret-storage.{salt,data}
managed with 'vnconfig -c -K' via passphrase #2

/home/me/other-secret is a vnd-crypto filesystem
backed by the files  /home/me/other-secret-storage.{salt,data}
managed with 'vnconfig -c -K' via passphrase #3

What will be the right way to achieve such a nested-encryption setup
once encrypted vnd goes away?  Is/will it be safe (i.e., free from
data corruption, deadlock, or other kernel badness) to nest softraid
crypto volumes?
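
One possible replacement for the inner vnd-crypto layers in the setup
above would be a softraid crypto volume on top of a plain vnd device;
whether nesting softraid like this is safe is exactly the open question,
and the device names and file name below are illustrative only:

```sh
# attach the container file as a plain (unencrypted) vnd device
vnconfig vnd0 /home/me/very-secret-storage.img

# build a softraid crypto volume on its partition;
# bioctl prompts for the inner passphrase
bioctl -c C -l /dev/vnd0a softraid0

# the resulting sd device can then be newfs'ed and mounted as usual
```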

ciao,

-- 
-- Jonathan Thornburg [remove -animal to reply] 
jth...@astro.indiana-zebra.edu
   Dept of Astronomy & IUCSS, Indiana University, Bloomington, Indiana, USA
   There was of course no way of knowing whether you were being watched
at any given moment.  How often, or on what system, the Thought Police
plugged in on any individual wire was guesswork.  It was even conceivable
that they watched everybody all the time.  -- George Orwell, 1984