Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Mikael
2015-10-06 19:25 GMT+08:00 Jiri B :

> On Tue, Oct 06, 2015 at 07:17:19PM +0800, Mikael wrote:
> > You
> >
> > 1) Fill your keydisk with zeroes and
> >
> > 2) Apply "bioctl -k" on it.
>
> ^^^ this is not exact cmd arg, is it?
>
> j.
>

No, the exact command is "bioctl -C force -c C -l thedrive -k keydrive
softraid0", but as I understood it that's -k's single intended use anyhow,
so it was kind of implicit?
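For reference, the whole sequence I have in mind looks roughly like this
(sd0a as the data chunk and sd1 as the keydisk are made-up example names,
and the 'a' partition on sd1 has to be of fstype RAID):

dd if=/dev/zero of=/dev/rsd1c bs=1m count=1
disklabel -E sd1      # add an 'a' partition with fstype RAID
bioctl -C force -c C -l sd0a -k sd1a softraid0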

2015-10-06 19:27 GMT+08:00 Stefan Sperling :

> Perhaps this will answer your questions:
> http://www.openbsd.org/papers/eurobsdcon2015-softraid-boot.pdf
>

That one mentions nothing of what the keydisk is supposed to contain.

Perhaps that was omitted for brevity?



Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Stefan Sperling
On Tue, Oct 06, 2015 at 07:17:19PM +0800, Mikael wrote:
> You
> 
> 1) Fill your keydisk with zeroes and
> 
> 2) Apply "bioctl -k" on it.
> 
> Does this mean your key is now zeroes, meaning completely unsafe, or did
> bioctl make a key for you?
> 
> 
> The keydisk gets some "OPENBSDSR KEYDISK005" header, but it says nowhere
> whether it actually made a key for you.
>
> If it generates one, there is no mention in the man page of how to use
> one keydisk for multiple volumes. Perhaps that means it doesn't generate
> it after all?
>
> Also it says nowhere how big the keydisk needs to be, and whether there
> is any benefit if it's bigger than needed.

Perhaps this will answer your questions:
http://www.openbsd.org/papers/eurobsdcon2015-softraid-boot.pdf



dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Mikael
You

1) Fill your keydisk with zeroes and

2) Apply "bioctl -k" on it.

Does this mean your key is now zeroes, meaning completely unsafe, or did
bioctl make a key for you?


The keydisk gets some "OPENBSDSR KEYDISK005" header, but it says nowhere
whether it actually made a key for you.

If it generates one, there is no mention in the man page of how to use
one keydisk for multiple volumes. Perhaps that means it doesn't generate
it after all?

Also it says nowhere how big the keydisk needs to be, and whether there
is any benefit if it's bigger than needed.



Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Jiri B
On Tue, Oct 06, 2015 at 07:17:19PM +0800, Mikael wrote:
> You
> 
> 1) Fill your keydisk with zeroes and
> 
> 2) Apply "bioctl -k" on it.

^^^ this is not exact cmd arg, is it?

j.



Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Mikael
2015-10-06 19:54 GMT+08:00 Stefan Sperling :

> On Tue, Oct 06, 2015 at 07:32:45PM +0800, Mikael wrote:
> > 2015-10-06 19:27 GMT+08:00 Stefan Sperling :
> > > Perhaps this will answer your questions:
> > > http://www.openbsd.org/papers/eurobsdcon2015-softraid-boot.pdf
> > >
> >
> > That one mentions nothing of what the keydisk is supposed to contain.
> >
> > Perhaps that was omitted for brevity?
>
> Indeed, it's not explained in detail.
> The mask key is contained in an optional softraid meta data item.
>
> If no softraid meta data exists (as is the case when you zero the disk),
> fresh meta data with a fresh mask key is written to the key disk slice.
>
> Does that answer your question?
>

Aha. So at "-k" time, if there's no key on the keydisk structure already,
it'll make one. So this is how you can use one and the same keydisk for
multiple volumes.


I guess by "mask key" you mean "stored encryption key" i.e. the whole point
with the keydisk.

Is that one generated by bioctl, or does it just take the bytes that happen
to be at those positions already i.e. zeroes??



Also how big should a keydrive be? No docs say.



Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Stefan Sperling
On Tue, Oct 06, 2015 at 07:32:45PM +0800, Mikael wrote:
> 2015-10-06 19:27 GMT+08:00 Stefan Sperling :
> > Perhaps this will answer your questions:
> > http://www.openbsd.org/papers/eurobsdcon2015-softraid-boot.pdf
> >
> 
> That one mentions nothing of what the keydisk is supposed to contain.
> 
> Perhaps that was omitted for brevity?

Indeed, it's not explained in detail.
The mask key is contained in an optional softraid meta data item.

If no softraid meta data exists (as is the case when you zero the disk),
fresh meta data with a fresh mask key is written to the key disk slice.

Does that answer your question?



Re: dd if=/dev/zero of=/dev/mykeydisk; bioctl -k /dev/mykeydisk ... = will use 0x00 as key, or will generate a secure key?

2015-10-06 Thread Stefan Sperling
On Tue, Oct 06, 2015 at 08:04:01PM +0800, Mikael wrote:
> Aha. So at "-k" time, if there's no key on the keydisk structure already,
> it'll make one. So this is how you can use one and the same keydisk for
> multiple volumes.

Yes. Per volume you need one disklabel partition of type RAID
which you pass to the -k option to configure it as key disk.
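So, sketching with made-up device names, a single key disk (say sd2) with
two RAID-type partitions could serve two volumes:

bioctl -C force -c C -l sd0a -k sd2a softraid0   # first volume
bioctl -C force -c C -l sd1a -k sd2d softraid0   # second volume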

> I guess by "mask key" you mean "stored encryption key" i.e. the whole point
> with the keydisk.

The mask key on the key disk decrypts the actual data encryption key
which is stored (encrypted with the mask key) in the softraid volume.
 
> Is that one generated by bioctl, or does it just take the bytes that happen
> to be at those positions already i.e. zeroes??

Of course the key is generated from entropy.
Do you really expect us to consider the contents of left-over disk blocks
cryptographically secure?

> Also how big should a keydrive be? No docs say.

That was definitely in my slides, look again ;-)
But I admit that slides don't count as docs.



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-22 Thread Raimundo Santos
On 21 July 2014 18:17, Giancarlo Razzolini grazzol...@gmail.com wrote:

 I've noticed
 similar performance and, in some cases, better than vio(4) when using
 the host's PCI passthrough and assigning real hardware to the VM. But

Hello Giancarlo,

thank you for your time.

I am in a very bleeding-edge (or awkward) project of putting almost all
the machines of a little WISP into a virtualized system.

My concern is mainly packet and bit flows; storage is not one of them.
XenServer has very nice facilities, but it is a pain to tailor in the
network area (well, in almost all areas: lots of long commands which are
hard to remember, tricks that could vanish with updates, ...). The amount
of work to tune it is equal to or more than using libvirt, so I am
dropping it.

Ubuntu Server 14.04 came out with qemu-kvm 2.0.0, with newer host VirtIO
implementations in many areas. I am on my way to test it. I dislike Ubuntu
as a server, but I am not in that project to take much pain managing the
hosts, compiling those sadly GNU-crafted things and so on, so if Ubuntu
gives me good performance, I will take it.

Can you tell me where you are using qemu-kvm 2.0.0 and how you manage it
(upgrades, etc.)?

 you shouldn't expect great performance between VMs hosted on the
 same host, unless you're using Linux's macvtap with a switch that
 supports VEPA. Using a bridge is slow. I suggest you create a virtual
 network and assign an interface to each of your VMs that need to
 communicate, and also use vio(4) on the guest OS.

As you stated before, I expect a lot more performance from PCI passthrough,
and things like client bandwidth enforcement will depend on it. I will try
as much as possible to keep the main traffic outside the host's internal
networks.

Have you played with Open vSwitch as a bridging facility?

My client (the WISP) is very excited about turning off those old machines,
but, while I am enjoying the challenge, I am three feet behind the line of
excitement when the subject is the reliability and scalability of the
solution. Nonetheless, it is an experiment.

And someone could ask: why OpenBSD? Well, have you ever tried setting up
RIPv2 on other OSes? The more general answer: it Just Works for almost
everything I need to set up. The only thing I cannot figure out how to do
is enforcing the WISP clients' contracted bandwidth.

Cheers,
Raimundo Santos



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-22 Thread Giancarlo Razzolini
On 22-07-2014 19:20, Raimundo Santos wrote:
 XenServer has very nice facilities, but it is a pain to tailor in the
 network area (well, in almost all areas: lots of long commands which
 are hard to remember, tricks that could vanish with updates, ...). The
 amount of work to tune it is equal to or more than using libvirt, so I
 am dropping it.
Libvirt has support for Xen. I just found it easier to use KVM because
it is not as invasive to the guest OS as Xen is. The virtio drivers, the
memory balloon and the others only increase performance if the guest OS
has them, but that does not impede its use. Xen is very well suited for
running many machines of the same kind with the same I/O and throughput
needs. I find KVM better suited for heterogeneous environments.
 Ubuntu Server 14.04 came out with qemu-kvm 2.0.0, with newer host
 VirtIO implementations in many areas. I am on my way to test it. I
 dislike Ubuntu as a server, but I am not in that project to take much
 pain managing the hosts, compiling those sadly GNU-crafted things and
 so on, so if Ubuntu gives me good performance, I will take it.
I have used Ubuntu as a server for many years now. Of course you can get
bitten every now and then (especially when upgrading from release to
release), but if you use some kind of version control for your
configuration files, the impact is minimal. It has almost the perfect
balance between stability and bleeding edge.
 Can you tell me where you are using qemu-kvm 2.0.0 and how you manage
 it (upgrades, etc.)?
Yes, I'm using 2.0.0. I manage it using virt-manager. Since I don't
use more advanced features such as host migration, failover, etc., I find
it the perfect tool. Every now and then I need to edit the QEMU XML
machine files by hand, but it's well documented. I upgrade the host
system as often as I can (I don't trust unattended upgrades, even well
configured). And for the guests, in the case of OpenBSD, I use the mtier
stable packages and kernel updates.
 Have you played with Open vSwitch as a bridging facility?
I haven't had the chance yet.
 And someone could ask: why OpenBSD? Well, have you ever tried
 setting up RIPv2 on other OSes?
Yes. I've played with it on other OSes. On Linux, Quagga does the job
reasonably. But please try OSPF, if possible, on your network. RIP
should really R.I.P.
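On OpenBSD the switch is cheap; a minimal /etc/ospfd.conf looks roughly
like this (router-id and interface name are just example values):

router-id 10.0.0.1

area 0.0.0.0 {
        interface em0
}

and then enable ospfd instead of ripd in /etc/rc.conf.local.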
 The more general answer: it Just Works for almost everything I need to
 set up. The only thing I cannot figure out how to do is enforcing the
 WISP clients' contracted bandwidth.
Well, pf has a very good bandwidth scheduler, and some changes in 5.5
improved and simplified the system a lot. And you can use pflow(4) with
nfsen to provide some kind of accounting, if you don't want to implement
a RADIUS system.
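For the contracted-bandwidth part, a rough sketch with the new 5.5
queueing syntax (interface, rates and address are only placeholders):

queue uplink on em0 bandwidth 90M max 90M
queue deflt parent uplink bandwidth 50M default
queue client_a parent uplink bandwidth 5M max 5M
match out on em0 from 192.0.2.10 set queue client_a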

Cheers,

--
Giancarlo Razzolini
GPG: 4096R/77B981BC




Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-21 Thread Giancarlo Razzolini
On 20-07-2014 19:44, Adam Thompson wrote:
 FWIW, you're almost certainly going to be CPU-bound.  I can't get more
 than ~200Mbps on an emulated em(4) interface under ProxmoxVE (KVM
 1.7.1) between two VMs running on the same host.  Granted, the CPUs
 are slowish (2.2GHz Xeon L5520).  I get better throughput using vio(4)
 but then I have to reboot the VMs once every 2 or 3 days to prevent
 them from locking up hard.
Adam,

I've been using vio(4) for quite some time now, with long uptimes on
my VMs, and never experienced lock-ups. I've been using it since
5.4. Now I'm running qemu-kvm 2.0.0. As for the OP's question, I've been
using a mix of tcpbench and iperf, and also statistical data
from libvirt, to measure the performance of my VMs. I've noticed
similar performance and, in some cases, better than vio(4) when using
the host's PCI passthrough and assigning real hardware to the VM. But
you shouldn't expect great performance between VMs hosted on the
same host, unless you're using Linux's macvtap with a switch that
supports VEPA. Using a bridge is slow. I suggest you create a virtual
network and assign an interface to each of your VMs that need to
communicate, and also use vio(4) on the guest OS.

Cheers,

--
Giancarlo Razzolini
GPG: 4096R/77B981BC




Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-20 Thread Raimundo Santos
On 19 July 2014 21:22, Sean Kamath kam...@moltingpenguin.com wrote:

 Are you counting all those zeros to make sure they all came through?

 'cause TCP is guaranteed delivery, in order.  UDP guarantees nothing.

Hello Sean!

Why counting?

My guess, and therefore the start of my reasoning and later questioning
here, is that all those zeroes inside UDP could flood the virtual
network structure.

Maybe you are confusing nc(1) with wc(1).



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-20 Thread Raimundo Santos
On 19 July 2014 21:28, Philip Guenther guent...@gmail.com wrote:

  tcpbench(1) - TCP/UDP benchmarking and measurement tool

Oh, just beneath my eyes, in the base install. Thank you, Philip.

Is it worth spending time comparing tcpbench(1) with iperf?



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-20 Thread Adam Thompson

On 14-07-20 04:57 PM, Raimundo Santos wrote:

On 19 July 2014 21:22, Sean Kamath kam...@moltingpenguin.com wrote:

Are you counting all those zeros to make sure they all came through?

'cause TCP is guaranteed delivery, in order.  UDP guarantees nothing.

Hello Sean!

Why counting?

My guess, and therefore the start of my reasoning and later questioning
here, is that all those zeroes inside UDP could flood the virtual
network structure.

Maybe you are confusing nc(1) with wc(1).


No, what he meant was that using nc -u can produce false results.
The sender can send as many packets as its CPU can possibly send, even 
if 99.9% of those packets are getting dropped by the receiver; the 
sender still thinks it successfully sent a bazillion bytes per second 
even though it's a meaningless number.

I didn't know tcpbench(1) was in base, either... I always install and 
use iperf.
I would expect both tcpbench and iperf to return very similar results.  
(Note that the results probably won't be perfectly identical, that's 
normal.)


Using the -u flag to tcpbench(1) over a ~20Mbps radio link, the client 
reports throughput of 181Mbps, which is impossible.  The server reports, 
simultaneously, 26Mbps.  Both of these cannot be simultaneously true, 
right?  Except they can - the client really is sending 181Mbps of 
traffic, the server really is receiving 26Mbps of traffic.  What 
happened to the other 155Mbps of traffic?  Dropped on the floor, 
probably by the radio.
That's why you should run TCP benchmarks, or else be very careful with 
UDP benchmarks...  Remember, too, that any out-of-order packets will 
kill real-world performance, and UDP has no guarantees about those, either.
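To see it yourself, run something like this (address made up) and compare
what the two ends print:

server$ tcpbench -s -u
client$ tcpbench -u 192.0.2.1

The receiver's numbers are the ones that mean anything.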


FWIW, you're almost certainly going to be CPU-bound.  I can't get more 
than ~200Mbps on an emulated em(4) interface under ProxmoxVE (KVM 1.7.1) 
between two VMs running on the same host.  Granted, the CPUs are slowish 
(2.2GHz Xeon L5520).  I get better throughput using vio(4) but then I 
have to reboot the VMs once every 2 or 3 days to prevent them from 
locking up hard.


Previous testing with VMware produced similar results circa OpenBSD 
v5.0.  Some other guests were able to get ~2Gbps on the same VM stack, 
at the time.
It is - almost by definition - impossible to flood the virtual network 
infrastructure without running out of CPU cycles in the guests first.  
It might be possible if the vSwitch is single-threaded, and you're 
running on a many-core CPU with each VM pegging its core(s) doing 
network I/O... but even then, remember that the vSwitch doesn't have to 
do any layer-3 processing, so it has much less work to do than the guest 
network stack.


Where you do have to worry about running out of bandwidth is the switch 
handling traffic between your hypervisors, or more realistically, the 
network interface(s) leading from your host to said switch.
Load-balancing (VMware ESXi  v5.1) and LAGs (everywhere/everything 
else) are your friends here, unless you have the budget to install 
10Gbps switches...


--
-Adam Thompson
 athom...@athompso.net



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-20 Thread Raimundo Santos
On 20 July 2014 19:44, Adam Thompson athom...@athompso.net wrote:

 No, what he meant was that using nc -u can produce false results.

Thank you, Adam, for pointing out my misinterpretation. Now I understand
that Sean asked how I am sure that all those zeroes generated on one host
are really getting to the other.

 The sender can send as many packets as its CPU can possibly send, even if
99.9% of those packets are getting dropped by the receiver; the sender
still thinks it successfully sent a bazillion bytes per second even
though it's a meaningless number.

Good point, as is this:

 FWIW, you're almost certainly going to be CPU-bound.  I can't get more
than ~200Mbps on an emulated em(4) interface under ProxmoxVE (KVM 1.7.1)
between two VMs running on the same host.  Granted, the CPUs are slowish
(2.2GHz Xeon L5520).  I get better throughput using vio(4) but then I have
to reboot the VMs once every 2 or 3 days to prevent them from locking up
hard.


What version of ProxmoxVE? I am considering it as a counterpart to
XenServer; I have some kind of faith in hypervisors in the Xen and VMware
style, but in this project I cannot afford VMware prices.

Thank you again, Adam!



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-20 Thread Adam Thompson

On Sun 20 Jul 2014 09:58:03 PM CDT, Raimundo Santos wrote:

What version of ProxmoxVE? I am considering it as a counterpart to
XenServer; I have some kind of faith in hypervisors in the Xen and VMware
style, but in this project I cannot afford VMware prices.


I'm paying for the basic level of PVE subscription right now, which 
isn't very much (€30/host/year or something like that).
I'm running their version of -CURRENT, that is to say, the testing 
repository.  It hasn't burned me very many times so far :-/.
The $/year entitles me to run the Enterprise version, and 
automatically get updates.  That's a sensible thing to do when running 
in production.  I'm not really in commercial production yet.


OpenBSD runs... well, acceptably, I guess.  If you use CEPH, make sure 
you mount your filesystems "sync" and set the cache to "writeback", or you 
WILL encounter filesystem corruption each and every time your VM 
doesn't shut down cleanly.  On my system, that reduces disk write 
performance to ~0.5MBytes/sec.  I just remount async whenever I'm doing 
a large amount of disk I/O, then immediately remount sync when I'm 
done.  And that's using the virtio driver!
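Concretely, that's just a mount update inside the guest, e.g. for /home
(mount point assumed):

mount -u -o async /home
# ...do the heavy writes...
mount -u -o sync /home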


Running PVE (as opposed to VMware ESXi, say) makes crystal clear why 
Theo has misgivings about hypervisors in general...  PVE exposes a lot 
more of the underlying OS than I'd like, or perhaps I should say I have 
to pay a lot more attention to the underlying OS than I'd like.
Hardware zone/partition support like some of the high-end sun4v(??) 
machines looks better and better all the time.  (Not too sure of the 
terminology right now.)



--
-Adam Thompson
athom...@athompso.net



Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-19 Thread Raimundo Santos
Hello all!

I am testing OpenBSD 5.5 Release over XenServer 6.2 with HVM and qemu-dm
wrapper to change the default r8139 to virtio, adapted from [1].

So, to test the server private network throughput and other things related,
I am using netcat. In this fashion:

nc -lu 9000 < /dev/zero > /dev/null

nc -u 192.168.1.10 9000 < /dev/zero > /dev/null

Despite pings showing an average time of 18ms, it reached nearly 1Gbps of
cross traffic (600Mbps into and 300Mbps out of the virtual router, on
average) in the following configuration:

. two virtual networks (int0 and int1 - internal networks)
. one router between them
. two vms for each network

In int0, the VMs are servers (nc -l, as described before). In int1, the
VMs are clients. Of course, there are no such roles once the connection
starts; both ends are server and client at the same time.

Trying the same netcat idea, but in TCP mode, it only generates a few
Mbps (mostly seen: 10Mbps of cross traffic, 5 in and 5 out) for each
client/server pair. What could it be? No clues here, as a similar test
with em(4) on bare metal gave only a few Mbit/s less than UDP.

And the main question: is this a good method to stress the virtual
infrastructure, or are there other good methods?

Thank you for your time,
Raimundo Santos


[1] http://marc.info/?l=openbsd-miscm=135336071024634w=2



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-19 Thread Sean Kamath
On Jul 19, 2014, at 11:51 AM, Raimundo Santos rait...@gmail.com wrote:

 Hello all!
 
 I am testing OpenBSD 5.5 Release over XenServer 6.2 with HVM and qemu-dm
 wrapper to change the default r8139 to virtio, adapted from [1].
 
 So, to test the server private network throughput and other things related,
 I am using netcat. In this fashion:
 
 nc -lu 9000 < /dev/zero > /dev/null
 
 nc -u 192.168.1.10 9000 < /dev/zero > /dev/null

Are you counting all those zeros to make sure they all came through?

'cause TCP is guaranteed delivery, in order.  UDP guarantees nothing.

Sean


 Despite pings showing an average time of 18ms, it reached nearly 1Gbps of
 cross traffic (600Mbps into and 300Mbps out of the virtual router, on
 average) in the following configuration:
 
 . two virtual networks (int0 and int1 - internal networks)
 . one router between them
 . two vms for each network
 
 In int0, the VMs are servers (nc -l, as described before). In int1, the
 VMs are clients. Of course, there are no such roles once the connection
 starts; both ends are server and client at the same time.
 
 Trying the same netcat idea, but in TCP mode, it only generates a few
 Mbps (mostly seen: 10Mbps of cross traffic, 5 in and 5 out) for each
 client/server pair. What could it be? No clues here, as a similar test
 with em(4) on bare metal gave only a few Mbit/s less than UDP.

 
 And the main question: is this a good method to stress the virtual
 infrastructure, or are there other good methods?
 
 Thank you for your time,
 Raimundo Santos
 
 
 [1] http://marc.info/?l=openbsd-miscm=135336071024634w=2



Re: Are nc -lu /dev/zero /dev/null a good throughput test?

2014-07-19 Thread Philip Guenther
On Sat, Jul 19, 2014 at 11:51 AM, Raimundo Santos rait...@gmail.com wrote:

 I am testing OpenBSD 5.5 Release over XenServer 6.2 with HVM and qemu-dm
 wrapper to change the default r8139 to virtio, adapted from [1].

 So, to test the server private network throughput and other things related,
 I am using netcat.

...

 And the main question: is this a good method to stress the virtual
 infrastructure, or are there other good methods?



 tcpbench(1) - TCP/UDP benchmarking and measurement tool
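Basic usage is simply (server address assumed):

server$ tcpbench -s
client$ tcpbench 192.168.1.10

with -u added on both ends for UDP.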



/dev/zero

2009-06-16 Thread Rafal Brodewicz
Hello.

Where is the kernel code for /dev/zero and /dev/null?
I mean, in which file(s)?

Thanks.  
-- 
Rafal Brodewicz



Re: /dev/zero

2009-06-16 Thread Ted Unangst
2009/6/16 Rafal Brodewicz b...@brodewicz.pl:
 Hello.

 Where is the kernel code for /dev/zero and /dev/null?

arch/arch/arch/mem.c
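(That is, one mem.c per architecture under sys/arch. Assuming a source
tree in /usr/src, you can list them all with

find /usr/src/sys/arch -name mem.c

e.g. sys/arch/amd64/amd64/mem.c on amd64.)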