Re: UFS in CF for /boot and Hammer for the rest and failover+load balancing

2011-10-04 Thread Alex Hornung
Regarding disk encryption, there are basically two approaches now.

One is tcplay(8), a TrueCrypt-compatible, BSD-licensed solution, but it
is still fairly experimental.

The more stable, well-tested solution is the GPL-licensed cryptsetup(8),
which is the same as on Linux.

Both use dm_target_crypt underneath and both are fully supported by
cryptdisks(8), crypttab(5) and mkinitrd(8).

Using the initrd approach you can encrypt your / with either of the
above tools, but /boot needs to remain unencrypted.
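
In case it helps, the cryptsetup(8) path looks roughly like the sketch
below. Treat it as a sketch only - the device name, volume label and
crypttab(5) entry are made up, and the exact mkinitrd(8) wiring may
differ on your install:

  # create and open a LUKS volume for the future encrypted /
  # (device name is a placeholder)
  cryptsetup luksFormat /dev/da0s1d
  cryptsetup luksOpen /dev/da0s1d root_crypt   # appears as /dev/mapper/root_crypt

  # crypttab(5) entry so cryptdisks(8) can map it at boot:
  #   <name>        <device>      <keyfile>   <options>
  #   root_crypt    /dev/da0s1d   none        luks

/boot stays on its own small unencrypted partition, and the initrd
built with mkinitrd(8) takes care of prompting for the passphrase and
mounting the encrypted / at boot.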

HTH,
Alex Hornung

On 03/10/11 08:15, Zenny wrote:
 Thank you Justin for a comprehensive reply. Appreciate it!
 
 I shall check out the vkernel stuff to limit resources for jails
 (seems like a steep learning curve ;-) )
 
 Since you stated that 4 drives are overkill, does HAMMER allow
 creating a pool, as in ZFS, of two master drives and two slave
 drives on a remote machine that works as failover + load balancing
 (as with DRBD on Linux or HAST in the upcoming FreeBSD 9)?
 
 Where exactly can I find detailed documentation on scripting the
 streaming of HAMMER data to remote machines?
 
 As I stated earlier, I want:
 
 /boot on CF or SanDisk
 / and other data on HDD
 swapcache on SSD or in the HAMMER /
 
 in order to separate data from the operating system. But I could
 not find documentation for a manual installation mode that meets my
 requirements. Let me know if there is any. Thanks!
 
 On Sun, Oct 2, 2011 at 11:50 PM, Justin Sherrill
 jus...@shiningsilence.com wrote:
 
 I'm not sure about the jails.  I think they work the same on
 DragonFly, though the resource limits aren't there.  You could
 potentially use virtual kernels to get a similar effect.  See the
 vkernel man page for that.
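 
 For what it's worth, running a vkernel looks roughly like this; the
 kernel config name, paths and sizes below are just examples, so check
 vkernel(7) for the real details:
 
   # build and install a virtual kernel (config name assumed)
   cd /usr/src
   make buildkernel KERNCONF=VKERNEL64
   make installkernel KERNCONF=VKERNEL64 DESTDIR=/var/vkernel
 
   # run it with 512MB of RAM, a root image and an auto-bridged NIC
   /var/vkernel/boot/kernel/kernel -m 512m -r /var/vkernel/rootimg.01 \
       -I auto:bridge0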
 
 You should be able to set up the root and other volumes normally.  4
 hard drives may be overkill - you can stream from master to slave
 volumes in Hammer, for which 2 drives will work.  If you want more
 duplication, hardware RAID may be a good idea; people have been trying
 out Areca cards with success recently.
 
 AES256 is supported, or at least I see the tcplay(8) man page has an
 example using it.  I haven't used disk encryption enough to know it
 well.
 
 You can use Hammer to stream data to other machines and then, in the
 event of something going wrong, promote the slave drive in the
 surviving unit to master.  This would require some scripting or manual
 intervention; it isn't covered by an automatic mechanism.
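 
 Roughly, and with made-up PFS paths and host name, the streaming and
 promotion steps would look something like this (see hammer(8) for the
 exact syntax):
 
   # keep a slave PFS on the backup box continuously in sync
   # (the slave PFS must already exist with the master's shared-uuid)
   hammer mirror-stream /pfs/data backup-host:/pfs/data-slave
 
   # if the master is lost, promote the slave on the surviving machine
   hammer pfs-upgrade /pfs/data-slave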
 
 On Sun, Oct 2, 2011 at 5:50 AM, Zenny garbytr...@gmail.com wrote:
  Hi:
 
  I am pretty new to Dragonfly or BSD world. HammerFS seems to be very
  innovative. Thanks to Matt and team for their hard work.
 
  I would like to do something with Hammer+UFS like the following,
  inspired by Paul's work
  (http://www.psconsult.nl/talks/NLLGG-BSDdag-Servers/), but could not
  figure out exactly how:
 
  1) Creation of a server with a jail with minimal downtime, as offered
  by the nanobsd scripts in FreeBSD, with two failover kernels. Are
  there such scripts for DragonFlyBSD?
 
  2) I want to have a minimal boot (read-only UFS) and configuration,
  like that of a nanobsd image, on a compact flash card, while the
  entire root and data live on an array of HDDs (at least 4), with of
  course an SSD for swapcache. The latter could be HAMMER to avoid
  softraid.
 
  3) All HDDs should be encrypted with AES256 (I could not find out
  whether DragonFlyBSD supports that), and accessible either from the
  /boot on the CF or from somewhere else (could be ssh-tunneled from
  another network).
 
  4) I could not figure out which jail features are available in
  DragonFlyBSD. FreeBSD-9-CURRENT has the resource containers
  (http://wiki.freebsd.org/Hierarchical_Resource_Limits). Are they
  applicable in DragonFlyBSD's case?
 
  5) Is there any way that two similar servers in two different
  locations can securely mirror each other for failover as well as
  load balancing?
 
  Appreciate your thoughtful inputs! Apologies in advance if my post
  above appears to be pretty naive. Thanks in advance to the entire DF
  community and developers!
 
  zenny
 
 
 


DRM/GEM how to proceed

2011-10-04 Thread Johannes Hofmann
Hi,

I happen to have a Core i5 laptop with integrated Intel graphics.
Unfortunately recent versions of xf86-video-intel need kernel support
to work. Most notably the GEM interface needs to be implemented in the
kernel.
I looked around and found the following work already done in the BSDs:

http://www.dragonflybsd.org/docs/developer/GEMdrmKMS/
The gsocdrm_34_i915 branch of
git://leaf.dragonflybsd.org/~davshao/dragonfly.git
compiles for me, but it seems to me that the GEM interface has not yet
been ported or tested - though I could be wrong.

http://mail-index.netbsd.org/current-users/2011/06/08/msg016843.html
A NetBSD port of an OpenBSD implementation which seems to be working
already.

http://wiki.freebsd.org/Intel_GPU
A FreeBSD implementation funded by the FreeBSD Foundation. The
patch is pretty big and seems to include changes to the VM layer.

So my question is where DragonFlyBSD is heading with regard to DRM. Is
davshao's work being maintained and continued, or should I instead try
to port a working implementation from another BSD?
Any hints are welcome.

Regards,
Johannes


Re: DRM/GEM how to proceed

2011-10-04 Thread Hasso Tepper
On 04.10.11 22:22, Johannes Hofmann wrote:
 Hi,
 
 I happen to have a Core i5 laptop with integrated Intel graphics.
 Unfortunately recent versions of xf86-video-intel need kernel support
 to work. Most notably the GEM interface needs to be implemented in the
 kernel.

It's not only about GEM, but KMS as well. As far as I know only the
FreeBSD folks have (somewhat) working KMS code. OpenBSD has only GEM
implemented and is maintaining a de facto fork of the intel driver
because of that. But modifying the intel driver can't be avoided anyway.

I'd recommend going the FreeBSD route. We share the DRM code, which
should make a lot of things easier, especially in KMS land.


-- 
Hasso