Hi Siju

I tried two ISO files:

*dfly-i386-2.6.3_REL.iso* and *DragonFly-i386-LATEST-ISO.iso*


Both failed in VirtualBox. I think they will work in VMware. Earlier, when I
tried OpenBSD in VirtualBox, it also failed, but later I was able to run
OpenBSD properly in VMware.




On 13 October 2010 18:05, Basil Kurian <[email protected]> wrote:

> When I tried to run DragonFly BSD on VirtualBox, the installation failed two
> times :( Let me try it on VMware. Actually, I'm interested in the BSDs on the
> server side only, not on the desktop side.
>
>
> On 13 October 2010 18:04, Basil Kurian <[email protected]> wrote:
>
>> Wow, a lot of interesting features, but the design is complex compared to
>> the other BSDs, isn't it? Especially the partition naming. It will take
>> people like me some time to understand those concepts. Thanks a lot for your
>> effort :) Is there any published book on DragonFly BSD? I saw the handbook;
>> it is very nice.
>>
>>
>> On 13 October 2010 17:49, Siju George <[email protected]> wrote:
>>
>>> On Wed, Oct 13, 2010 at 3:19 PM, Basil Kurian <[email protected]>
>>> wrote:
>>> > Hi Siju,
>>> >
>>> > Your reply is so compelling that I have to give DragonFly BSD a try :)
>>> > I installed it once. Even though I have a little experience with
>>> > FreeBSD, I was not able to manage it. Let me try once more :) It would
>>> > be nice if you could share your blog or something that contains a few
>>> > howtos, so that I can start with that. I'm very excited to try the
>>> > snapshot and replication features in HAMMER :)
>>> >
>>>
>>> If you have experience with FreeBSD then installing this should be quite
>>> simple.
>>> But since you are new, I would suggest you try the i386 port if you are
>>> going to use it as a desktop.
>>>
>>> You can get the CD/USB images from
>>>
>>> http://mirror-master.dragonflybsd.org/snapshots/i386/
>>>
>>> Installing is simple.
>>> Just follow the installer and choose HAMMER when you are asked for the
>>> file system.
>>> The installer will then automatically create the /usr, /var, etc. PFSes
>>> and install everything.
>>> You also get the option to configure the IP details, hostname, etc.
>>>
>>> Let me tell you something about PFSes.
>>>
>>> A HAMMER file system does a lot more than LVM.
>>>
>>> If you followed the default options during installation, you will be
>>> left with a system with the following disk configuration:
>>>
>>> bash-4.1$ df -h
>>> Filesystem                Size   Used  Avail Capacity  Mounted on
>>> ROOT                      288G    12G   276G     4%    /
>>> devfs                     1.0K   1.0K     0B   100%    /dev
>>> /dev/serno/9VMBWDM1.s1a   756M   138M   558M    20%    /boot
>>> /pfs/@@-1:00001           288G    12G   276G     4%    /var
>>> /pfs/@@-1:00002           288G    12G   276G     4%    /tmp
>>> /pfs/@@-1:00003           288G    12G   276G     4%    /usr
>>> /pfs/@@-1:00004           288G    12G   276G     4%    /home
>>> /pfs/@@-1:00005           288G    12G   276G     4%    /usr/obj
>>> /pfs/@@-1:00006           288G    12G   276G     4%    /var/crash
>>> /pfs/@@-1:00007           288G    12G   276G     4%    /var/tmp
>>> procfs                    4.0K   4.0K     0B   100%    /proc
>>>
>>> In this example
>>>
>>> /dev/serno/9VMBWDM1 is my hard disk, specified by its serial number.
>>>
>>> /dev/serno/9VMBWDM1.s1 is the first slice on the hard disk.
>>>
>>> Let us see its disklabel
>>>
>>> bash-4.1$ sudo disklabel /dev/serno/9VMBWDM1.s1
>>> # /dev/serno/9VMBWDM1.s1:
>>> #
>>> # Informational fields calculated from the above
>>> # All byte equivalent offsets must be aligned
>>> #
>>> # boot space:    1044992 bytes
>>> # data space:  312567643 blocks # 305241.84 MB (320069266944 bytes)
>>> #
>>> # NOTE: If the partition data base looks odd it may be
>>> #       physically aligned instead of slice-aligned
>>> #
>>> diskid: e67030af-d2af-11df-b588-01138fad54f5
>>> label:
>>> boot2 data base:      0x000000001000
>>> partitions data base: 0x000000100200
>>> partitions data stop: 0x004a85ad7000
>>> backup label:         0x004a85ad7000
>>> total size:           0x004a85ad8200    # 305242.84 MB
>>> alignment: 4096
>>> display block size: 1024        # for partition display only
>>>
>>> 16 partitions:
>>> #          size     offset    fstype   fsuuid
>>>  a:     786432          0    4.2BSD    #     768.000MB
>>>  b:    8388608     786432      swap    #    8192.000MB
>>>  d:  303392600    9175040    HAMMER    #  296281.836MB
>>>  a-stor_uuid: eb1c8aac-d2af-11df-b588-01138fad54f5
>>>  b-stor_uuid: eb1c8aec-d2af-11df-b588-01138fad54f5
>>>  d-stor_uuid: eb1c8b21-d2af-11df-b588-01138fad54f5
>>>
>>> Slice 1 has three partitions:
>>>
>>> a - for /boot
>>> b - for swap
>>> d - the HAMMER file system labelled ROOT
>>>
>>> (c usually refers to the whole disk in the BSDs, but it is not shown here.)
>>>
>>> Just as you give a volume group a name ("vg0", etc.) when you create it
>>> in LVM, you give a HAMMER file system a label when you create it. Here
>>> the installer labelled it "ROOT" and mounted it as
>>>
>>> ROOT                      288G    12G   276G     4%    /
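>>>
>>> (Just to make that concrete: if you were creating such a file system by
>>> hand instead of through the installer, it would look roughly like this,
>>> using the same device name as above.)
>>>
>>> # illustrative only -- the installer already did this for you;
>>> # -L sets the file system label, here "ROOT"
>>> newfs_hammer -L ROOT /dev/serno/9VMBWDM1.s1d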
>>>
>>> Now, a PFS is a pseudo file system inside a HAMMER file system.
>>> The HAMMER file system in which the PFSes are created is referred to
>>> as the root file system.
>>> (Don't confuse the "root" file system with the label "ROOT"; the label
>>> can be anything, it is just that the installer labelled it ROOT
>>> because it is mounted as /.)
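>>>
>>> (If you ever want an extra PFS of your own, say for /data, creating one
>>> is a one-liner; "data" is just a name made up for this example.)
>>>
>>> # create a new master PFS in the root HAMMER file system;
>>> # this also creates the softlink /pfs/data pointing to it
>>> sudo hammer pfs-master /pfs/data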
>>>
>>> Inside the ROOT HAMMER file system you will find the 7 PFSes the
>>> installer created. Let us see how they are mounted in fstab:
>>>
>>> bash-4.1$ cat /etc/fstab
>>> # Device                        Mountpoint   FStype  Options  Dump  Pass#
>>> /dev/serno/9VMBWDM1.s1a         /boot        ufs     rw       1     1
>>> /dev/serno/9VMBWDM1.s1b         none         swap    sw       0     0
>>> /dev/serno/9VMBWDM1.s1d         /            hammer  rw       1     1
>>> /pfs/var                        /var         null    rw       0     0
>>> /pfs/tmp                        /tmp         null    rw       0     0
>>> /pfs/usr                        /usr         null    rw       0     0
>>> /pfs/home                       /home        null    rw       0     0
>>> /pfs/usr.obj                    /usr/obj     null    rw       0     0
>>> /pfs/var.crash                  /var/crash   null    rw       0     0
>>> /pfs/var.tmp                    /var/tmp     null    rw       0     0
>>> proc                            /proc        procfs  rw       0     0
>>>
>>>
>>> The PFSes are mounted using a null mount, because they are part of the
>>> same HAMMER file system already; a null mount simply makes an existing
>>> directory tree visible at another mount point.
>>> You can read more on null mounts here
>>>
>>> http://leaf.dragonflybsd.org/cgi/web-man?command=mount_null&section=ANY
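>>>
>>> (Continuing the hypothetical /pfs/data example from above, you would make
>>> it visible at /data like this, plus an fstab line in the same style as the
>>> ones above if you want it mounted at boot.)
>>>
>>> sudo mkdir /data
>>> sudo mount_null /pfs/data /data
>>> # and in /etc/fstab:
>>> # /pfs/data               /data           null    rw              0       0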
>>>
>>> You don't need to specify a size for the PFSes like you do for logical
>>> volumes inside a volume group in LVM.
>>> All the free space in the parent file system is available for every
>>> PFS to grow into.
>>> That is why, in the df output above, the free space is the same for all
>>> the PFSes and for the root HAMMER file system :-)
>>>
>>> Now if you look in /var
>>>
>>> bash-4.1$ cd /var/
>>> bash-4.1$ ls
>>> account  backups  caps   cron  empty  isos  log   msgs      run   spool  yp
>>> at       cache    crash  db    games  lib   mail  preserve  rwho  tmp
>>>
>>> You will find the above directories.
>>>
>>> If you look at the status of one of the PFSes, say /usr, you will see
>>> that /var/hammer/<pfs> is the default snapshot directory.
>>>
>>> bash-4.1$ hammer pfs-status /usr/
>>> /usr/   PFS #3 {
>>>    sync-beg-tid=0x0000000000000001
>>>    sync-end-tid=0x0000000117ac6270
>>>    shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
>>>    unique-uuid=f33e31cb-d2af-11df-b588-01138fad54f5
>>>    label=""
>>>    prune-min=00:00:00
>>>    operating as a MASTER
>>>    snapshots directory defaults to /var/hammer/<pfs>
>>> }
>>>
>>> But there is no "hammer" directory in /var now.
>>>
>>> That is because no snapshots have been taken yet.
>>>
>>> You can verify this by checking the snapshots available for /usr
>>>
>>> bash-4.1$ hammer snapls /usr
>>> Snapshots on /usr       PFS #3
>>> Transaction ID          Timestamp               Note
>>> bash-4.1$
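>>>
>>> (If you just want a one-off snapshot, you can also take one by hand.
>>> The directory and softlink name here are only an example.)
>>>
>>> sudo mkdir -p /var/hammer/usr
>>> # snapshot the /usr PFS and leave a softlink to it in that directory
>>> sudo hammer snapshot /usr /var/hammer/usr/snap-manual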
>>>
>>> The best way to take snapshots regularly, though, is to run the command
>>> 'hammer cleanup'. It does a lot of things, but the first thing it does is
>>> take snapshots of all the mounted PFSes. Let us try that :-)
>>>
>>> bash-4.1$ sudo hammer cleanup
>>> cleanup /                    - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/root
>>>  handle PFS #0 using /var/hammer/root
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /var                 - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/var
>>>  handle PFS #1 using /var/hammer/var
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /tmp                 - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/tmp
>>>  handle PFS #2 using /var/hammer/tmp
>>>           snapshots - disabled
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /usr                 - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/usr
>>>  handle PFS #3 using /var/hammer/usr
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /home                - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/home
>>>  handle PFS #4 using /var/hammer/home
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /usr/obj             - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/usr/obj
>>>  handle PFS #5 using /var/hammer/usr/obj
>>>           snapshots - disabled
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /var/crash           - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/var/crash
>>>  handle PFS #6 using /var/hammer/var/crash
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /var/tmp             - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/var/tmp
>>>  handle PFS #7 using /var/hammer/var/tmp
>>>           snapshots - disabled
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> cleanup /var/isos            - HAMMER UPGRADE: Creating snapshots
>>>        Creating snapshots in /var/hammer/var/isos
>>>  handle PFS #8 using /var/hammer/var/isos
>>>           snapshots - run
>>>               prune - run
>>>           rebalance - run..
>>>             reblock - run....
>>>              recopy - run....
>>> bash-4.1$
>>>
>>> You must have noticed that snapshots were not taken for /tmp, /usr/obj
>>> and /var/tmp.
>>> That is how they are configured by default.
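>>>
>>> (The schedule 'hammer cleanup' follows is stored per PFS, and you can view
>>> or change it with 'hammer viconfig'. The lines mentioned below are only a
>>> sketch of the sort of config you will see.)
>>>
>>> # open the cleanup config of a PFS in your $EDITOR
>>> sudo hammer viconfig /tmp
>>> # it holds one line per phase, roughly "<phase> <period> <retention>",
>>> # e.g. "snapshots 1d 60d", "prune 1d 5m", "reblock 1d 5m";
>>> # the snapshots line is what 'hammer cleanup' reported as disabled
>>> # above for /tmp, /usr/obj and /var/tmp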
>>>
>>> Let us look in /var now
>>>
>>> bash-4.1$ ls
>>> account  backups  caps   cron  empty  hammer  lib  mail  preserve  rwho   tmp
>>> at       cache    crash  db    games  isos    log  msgs  run       spool  yp
>>>
>>> We have a new directory called "hammer" with the following subdirectories
>>>
>>> bash-4.1$ cd hammer/
>>> bash-4.1$ ls -l
>>> total 0
>>> drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 home
>>> drwxr-xr-x  1 root  wheel  0 Oct 13 11:42 root
>>> drwxr-xr-x  1 root  wheel  0 Oct 13 11:43 tmp
>>> drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 usr
>>> drwxr-xr-x  1 root  wheel  0 Oct 13 11:54 var
>>>
>>> Well, let us look inside /var/hammer/usr, since we have been looking at /usr ;-)
>>>
>>> bash-4.1$ cd usr/
>>> bash-4.1$ ls -l
>>> total 0
>>> drwxr-xr-x  1 root  wheel   0 Oct 13 11:54 obj
>>> lrwxr-xr-x  1 root  wheel  25 Oct 13 11:43 snap-20101013-1143 ->
>>> /usr/@@0x0000000117ac6cb0
>>>
>>> OK, we have a symlink pointing to something; let us see what that is:
>>>
>>> bash-4.1$ hammer snapls /usr
>>> Snapshots on /usr       PFS #3
>>> Transaction ID          Timestamp               Note
>>> 0x0000000117ac6cb0      2010-10-13 11:43:04 IST -
>>> bash-4.1$
>>>
>>> Oh yes, it is the snapshot that is available for /usr.
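>>>
>>> (A snapshot is just a read-only view of the PFS frozen at that transaction
>>> ID, so you can browse it like any other directory, either through the
>>> symlink or through the @@ path directly.)
>>>
>>> # both of these show /usr exactly as it was when the snapshot was taken
>>> ls /var/hammer/usr/snap-20101013-1143/
>>> ls /usr/@@0x0000000117ac6cb0/
>>> # restoring an accidentally deleted file is then just a cp from here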
>>>
>>> I guess this gave you some idea.
>>>
>>> You can read more about snapshots, prune, rebalance, reblock, recopy, etc.
>>> in
>>>
>>> http://leaf.dragonflybsd.org/cgi/web-man?command=hammer&section=ANY
>>>
>>> especially look under the heading
>>>
>>>   cleanup [filesystem ...]
>>>
>>> For mirroring I wrote this some time back
>>>
>>>
>>> http://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/
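>>>
>>> (The short version of that page, as a sketch: the slave path below is made
>>> up for the example, and the shared-uuid is the one from the pfs-status
>>> output above.)
>>>
>>> # create a slave PFS that shares the master's UUID
>>> sudo hammer pfs-slave /pfs/usr.slave \
>>>     shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
>>> # one-shot sync from the master to the slave
>>> sudo hammer mirror-copy /usr /pfs/usr.slave
>>> # or keep the slave continuously in sync
>>> sudo hammer mirror-stream /usr /pfs/usr.slave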
>>>
>>> Hope this helps to satisfy your curiosity for now :-)
>>> Let me know if you have questions.
>>>
>>> --Siju
>>>


-- 
Regards

Basil Kurian
http://twitter.com/BasilKurian

Please do not print this e-mail unless it is absolutely necessary. SAVE
PAPER. Protect the environment.
_______________________________________________
bsd-india mailing list
[email protected]
http://www.bsd-india.org/mailman/listinfo/bsd-india
