pkgng dependencies change / update
Hi, I am trying to figure out how to change / update the dependencies of a package. I have a postfix package built on a server where mysql-client is at version 5.1, and I would like to install the same package on a server where mysql-client is at version 5.6. I am not sure whether this is feasible. Of course, when I try to install this package on the second server, it tells me:

    jail: ns3 15:03:57 /home/gregober # pkg add postfix-2.10.0,1.txz
    Installing postfix-2.10.0,1... missing dependency mysql-client-5.1.68
    Failed to install the following 1 package(s): postfix-2.10.0,1.txz

I have tried to point the dependency at an updated version of the port:

    jail: ns3 15:04:16 /home/gregober # pkg set -o databases/mysql51-client:databases/mysql56-client
    Change origin from databases/mysql51-client to databases/mysql56-client for all dependencies? [y/N]: y

But no luck! Any idea how to do that?

-- 
Your provider of OpenSource Appliances - www.osnet.eu
PGP ID -- 0x1BA3C2FD
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
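One detail worth noting about the attempt above: `pkg set` only edits the local package database, so it can only rewrite origins of packages that are already registered; it cannot satisfy a dependency check during `pkg add`. A sketch of the usual workaround (exact flags vary by pkg version, and the rebuild advice is the commonly recommended path rather than a guaranteed one):

```shell
# Make sure the newer client is present on the target server first.
pkg install -y databases/mysql56-client

# Rewrite the recorded origin for already-installed packages that still
# reference the old client (argument is old_origin:new_origin in one string):
pkg set -yo databases/mysql51-client:databases/mysql56-client

# If pkg add still refuses the .txz because of the dependency recorded
# inside the package file itself, the clean fix is to rebuild the package
# against mysql56-client on the build host and redistribute it.
```

The key point is that the dependency list inside a `.txz` is baked in at build time; `pkg set` on the target cannot alter it.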
ZFS install on a partition
Hi, I have a question regarding a ZFS install on a system built on an Intel Modular Server. This system runs various flavors of FreeBSD and Linux using a shared pool of LUNs. These LUNs are configured as RAID 6 by the internal controller (LSI Logic), so from the OS point of view there is just a single volume available. I know I should install the system with an HBA and a JBOD configuration, but unfortunately that is not an option for this server. What would you advise?

1. Can I use an existing partition and set up ZFS on it as a plain zpool (no RAID)?
2. Should I use some other layout, such as a full ZFS install using the entire volume?
3. Should I avoid ZFS altogether, since my system is not well suited to it and it would be asking for trouble to use ZFS in these conditions?

P.S. Stability is a must for this system, so I won't die if you answer 3 and tell me to keep on using UFS. Thanks.
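For reference, option 1 above would look roughly like this (a sketch; `mfid0`, the label, and the pool name are placeholders for whatever the LSI controller actually exposes):

```shell
# Partition the RAID-6 volume with GPT and carve out a ZFS partition.
gpart create -s gpt mfid0            # skip if the disk is already partitioned
gpart add -t freebsd-zfs -l zfsdata -a 4k mfid0

# A plain single-vdev pool ("stripe of one") on that partition, no ZFS RAID.
zpool create tank /dev/gpt/zfsdata
```

ZFS sees one device, so all redundancy remains the hardware controller's job; ZFS still provides checksumming and detection on top.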
Re: ZFS install on a partition
Thanks for this documented answer. A couple of comments, though…

On 18 May 2013 at 02:03, Paul Kraus p...@kraus-haus.org wrote:
> On May 17, 2013, at 6:24 PM, b...@todoo.biz wrote:
>> I know I should install a system using HBA and JBOD configuration - but unfortunately this is not an option for this server.
> I ran many ZFS pools on top of hardware RAID units, because that is what we had. It works fine, and the NVRAM write cache of the better hardware RAID systems gives you a performance boost.
>> What would you advise? 1. Can I use an existing partition and set up ZFS on this partition using a standard zpool (no RAID)?
> Sure. Be careful when you say RAID… I assume you mean RAID-Zn configured top-level vdevs. Remember, a mirror is RAID-1 and the base ZFS striping is considered RAID-0. So set it up as a plain stripe of one vdev :-)

OK, so I'll use a dedicated volume (LUN) and install it as a single striped vdev.

>> 2. Should I use any other solution in order to set this up (like a full ZFS install on disk using the entire pool with ZFS)?
> If the system is configured with existing LUNs, use them.
>> 3. Should I avoid using ZFS since my system is not well tuned and it would be asking for trouble to use ZFS in these conditions?
> No. One of the biggest benefits of ZFS is end-to-end data integrity. If there is a silent fault in the HW RAID (it happens), ZFS will detect the corrupt data and note it. If you had a mirror or other redundant device, ZFS would then read the data from the *other* copy and rewrite the bad block (or mark that physical block bad and use another).
>> P.S. Stability is a must for this system - so I won't die if you answer 3 and tell me to keep on using UFS.
> ZFS is stable; it is NOT as tuned as UFS, simply due to age. UFS in all of its various incarnations has been tuned far more than any filesystem has any right to be. I spent many years managing Solaris systems, and I was truly amazed at how tuned the Solaris version of UFS was.
> I have been running a number of 9.0 and 9.1 servers in production, all running ZFS for both OS and data, with no FS-related issues.

OK - great answer. I have set up a FreeNAS ZFS appliance (running native HBAs + JBOD) and used it as a backup solution using snapshots. This is why I wanted to have ZFS in the first place. If you have any other advice, it is welcome. Thanks a lot. GB.

-- 
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
Re: ZFS install on a partition
On 18 May 2013 at 06:49, kpn...@pobox.com wrote:
> On Fri, May 17, 2013 at 08:03:30PM -0400, Paul Kraus wrote:
>> On May 17, 2013, at 6:24 PM, b...@todoo.biz wrote:
>>> 3. Should I avoid using ZFS since my system is not well tuned and it would be asking for trouble to use ZFS in these conditions?
>> No. One of the biggest benefits of ZFS is end-to-end data integrity. If there is a silent fault in the HW RAID (it happens), ZFS will detect the corrupt data and note it. If you had a mirror or other redundant device, ZFS would then read the data from the *other* copy and rewrite the bad block (or mark that physical block bad and use another).
> I believe the copies=2 and copies=3 options exist to enable ZFS to self-heal despite ZFS not being in charge of RAID. If ZFS only has a single LUN to work with, but copies=2 or more is set, then if ZFS detects an error it can still correct it. This option is a dataset option, is inheritable by child datasets, and can be changed at any time, affecting only data written after the change. To get the full benefit you'll therefore want to set the option before putting data into the relevant dataset.

OK, good to know. I planned to set up a consistent snapshot policy and remote backup using zfs send / receive. That should be enough for me… Is the overhead of this setup equal to double the size used on disk?

-- 
Kevin P. Neal    http://www.pobox.com/~kpn/
"Nonbelievers found it difficult to defend their position in the presence of a working computer." -- a DEC Jensen paper
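The copies behavior described above can be sketched like this (pool and dataset names are hypothetical):

```shell
# Create the dataset and set copies *before* loading data, since the
# property only affects blocks written after it is set.
zfs create tank/mail
zfs set copies=2 tank/mail
zfs get copies tank/mail
```

As for the overhead question: yes, with copies=2 every data block is stored twice, so the dataset consumes roughly double its logical size on disk (metadata is already stored redundantly regardless).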
Downgrading a port
Hello, I wanted to know whether there is a way to simply downgrade a package I have installed with pkgng? I know that there is portdowngrade, but I would have to reinstall the whole ports infrastructure to be able to use it. Sincerely yours. G.B.
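Assuming the older `.txz` package file is still around (or can be rebuilt), a minimal downgrade sketch with pkgng looks like this (package names and versions are hypothetical):

```shell
# Remove the current version, forcing past reverse dependencies ...
pkg delete -f postfix-2.10.1,1

# ... then register the older package from a local archive.
pkg add ./postfix-2.10.0,1.txz

# Optionally keep pkg upgrade from immediately re-upgrading it.
pkg lock postfix
```

Note that `pkg lock` only exists in later pkg releases, so on an early pkgng that last step may not be available; the delete/add pair is the core of the downgrade.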
VIMAGE in GENERIC kernel
Hi, I just wanted to know whether there are any plans to have the VIMAGE features included in the GENERIC kernel sometime soon? Sincerely yours.
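Until it lands in GENERIC, VIMAGE requires a custom kernel. A sketch of the usual steps (config name is arbitrary; assumes sources in /usr/src on amd64):

```shell
# Create a custom kernel config that extends GENERIC with VIMAGE.
cat > /usr/src/sys/amd64/conf/VIMAGE <<'EOF'
include GENERIC
ident   VIMAGE
options VIMAGE
EOF

cd /usr/src
make buildkernel KERNCONF=VIMAGE && make installkernel KERNCONF=VIMAGE
```

Note the dmesg later in this digest shows the relevant caveat: "WARNING: VIMAGE (virtualized network stack) is a highly experimental feature."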
Upgrading from 7.4 to 9.1
Hi, I wanted to know whether you would consider updating from 7.4 to 9.1 directly? Has anyone tried that with success? I plan to use the freebsd-update method. Thanks for your feedback. G.B.
Re: Upgrading from 7.4 to 9.1
On 27 Apr 2013 at 18:11, Robert Huff roberth...@rcn.com wrote:
> b...@todoo.biz writes:
>> I wanted to know if you would consider updating from 7.4 to 9.1 directly? Has anyone tried that with success?
> Someone, somewhere. :-)
>> I plan to use the freebsd-update method.
> Less sure about this. While it is certainly possible, many (myself included) will recommend a clean install. Doing so has the following advantages:
> 1) while it requires an unused disk, the old disk is still available if anything Goes Horribly Wrong(tm).
> 2) once the 9.1 system is fully functional, you can mount the 7.4 disk read-only and copy any needed files.
> 3) it will also de-clutter the file systems.
> 4) speaking of file systems, you can add/delete/change partitions to implement lessons learned since the 7.4 system was installed.

Well, to tell you the truth, the main reason I was asking is that I would have to visit my datacenter to do a clean install, as opposed to a remote upgrade. So I was wondering whether the upgrade could be OK. But I'll stick to your advice, unless someone else tells me that it is a painless, rapid update. Thx.

> Respectfully, Robert Huff
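For anyone who does attempt the remote route: freebsd-update upgrades are normally done one major release at a time, so 7.4 would go through an 8.x release first. A sketch (best run from the console or inside screen/tmux in case the connection drops):

```shell
# Step 1: 7.4 -> 8.3; repeat the whole block afterwards with -r 9.1-RELEASE.
freebsd-update -r 8.3-RELEASE upgrade
freebsd-update install        # installs the new kernel
shutdown -r now

# After the reboot:
freebsd-update install        # installs the new userland
# Rebuild or reinstall all ports/packages against the new libraries,
# then run "freebsd-update install" once more if it asks for it.
```

This is the documented freebsd-update major-upgrade flow; the exact intermediate release to pick is a judgment call.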
Re-installing a system on a new LUN while system is up and running
Hello, I have a 7.4 system that I wish to update to 9.1. It is a live mail server with a couple of hundred users on it. The system is deployed on an Intel Modular Server, which allows me to connect any LUN to this device. My idea is to create a new LUN and connect it to my system, deploy the 9.1 version of the system on it, migrate the data onto it, and then reboot the system with everything updated, up and running… Is this feasible, and how do I proceed?

- How do I specify that the system should be deployed on the other pool of disks, not on the live system?
- When I reboot, how do I specify the new LUN as the target system?
- How do I recompile userland for the new system? Is there a way to do that while running 7.4 (specifying 9.1 binaries / architecture as the target)?
- Do you think this is the right solution to update my system with minimum downtime, or would you rather suggest the more classical way of doing things?

Thx.
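The "deploy onto the other LUN" part of the question maps onto the DESTDIR mechanism of the FreeBSD build system. A sketch, with heavy caveats: device names and mount points are placeholders, and building a world two major releases ahead of the running system is outside what the project officially supports (only upgrades from the preceding branch are guaranteed to build), so this must be tested before being trusted:

```shell
# Assume the new LUN shows up as da1, already partitioned and newfs'd,
# with its future root mounted at /mnt.
mount /dev/da1s1a /mnt

# Build the 9.1 world and kernel from /usr/src (releng/9.1 sources)
# on the running 7.4 host, installing onto the new LUN only:
cd /usr/src
make buildworld buildkernel
make installworld installkernel distribution DESTDIR=/mnt
```

The `distribution` target populates /mnt/etc; the live 7.4 system is never written to, and the new LUN is then selected as the boot device from the controller or loader.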
Hang on reboot with ZIL on SSD
Hi, I have a quite big server that I am tuning, with FreeNAS running on it. It is based on an Intel server board and uses an Adaptec ASR-6805 controller for a potential 12-disk pool (only 6 disks deployed for the moment). I have two more SSDs, intended for the ZIL, connected directly to the motherboard, and 32 GB of ECC memory. The system itself is installed on a dedicated 4 GB SLC dongle on the motherboard. The system is happy (= reboots without stopping at the very end of the reboot) as long as no SSD is involved for the ZIL. As soon as the SSDs are in use, the system freezes (or at least can't proceed with the reboot). It freezes at the very end of the shutdown, after:

    Syncing disks, vnodes remaining…*0 0 0 0 done
    All buffers synced.
    Uptime: 3d4h12min

I have to manually power-cycle the unit for it to complete the reboot. Here is the output of dmesg:

[root@freenas] ~# dmesg
Copyright (c) 1992-2012 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.3-RELEASE-p5 #2 r244158M: Wed Dec 12 10:04:42 PST 2012
    r...@build.ixsystems.com:/home/jpaetzel/8.3.0/os-base/amd64/usr/home/jpaetzel/8.3.0/FreeBSD/src/sys/FREENAS.amd64 amd64
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz (2394.25-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x206d7  Family = 6  Model = 2d  Stepping = 7
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x17bee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,AESNI,XSAVE,AVX>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
TSC: P-state invariant
real memory  = 34359738368 (32768 MB)
avail memory = 33071357952 (31539 MB)
ACPI APIC Table: <INTEL S2600GZ>
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
 cpu0 (BSP): APIC ID: 0
 cpu1 (AP): APIC ID: 2
 cpu2 (AP): APIC ID: 4
 cpu3 (AP): APIC ID: 6
WARNING: VIMAGE (virtualized network stack) is a highly experimental feature.
ACPI Warning: Invalid length for Pm1aControlBlock: 32, using default 16 (20101013/tbfadt-707)
ioapic0 <Version 2.0> irqs 0-23 on motherboard
ioapic1 <Version 2.0> irqs 24-47 on motherboard
kbd1 at kbdmux0
hpt27xx: RocketRAID 27xx controller driver v1.0 (Dec 12 2012 10:04:31)
cryptosoft0: <software crypto> on motherboard
aesni0: <AES-CBC,AES-XTS> on motherboard
acpi0: <INTEL S2600GZ> on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
acpi0: reservation of 0, 9d000 (3) failed
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
cpu2: <ACPI CPU> on acpi0
cpu3: <ACPI CPU> on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> irq 47 at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib1
pcib2: <ACPI PCI-PCI bridge> irq 47 at device 1.1 on pci0
pci2: <ACPI PCI bus> on pcib2
igb0: <Intel(R) PRO/1000 Network Connection version - 2.3.1> port 0x1060-0x107f mem 0xd216-0xd217,0xd21b-0xd21b3fff irq 27 at device 0.0 on pci2
igb0: Using MSIX interrupts with 5 vectors
igb0: Ethernet address: 00:1e:67:54:9f:cd
igb0: [ITHREAD] (repeated 5 times)
igb1: <Intel(R) PRO/1000 Network Connection version - 2.3.1> port 0x1040-0x105f mem 0xd214-0xd215,0xd21a-0xd21a3fff irq 30 at device 0.1 on pci2
igb1: Using MSIX interrupts with 5 vectors
igb1: Ethernet address: 00:1e:67:54:9f:ce
igb1: [ITHREAD] (repeated 5 times)
igb2: <Intel(R) PRO/1000 Network Connection version - 2.3.1> port 0x1020-0x103f mem 0xd212-0xd213,0xd219-0xd2193fff irq 28 at device 0.2 on pci2
igb2: Using MSIX interrupts with 5 vectors
igb2: Ethernet address: 00:1e:67:54:9f:cf
igb2: [ITHREAD] (repeated 5 times)
igb3: <Intel(R) PRO/1000 Network Connection version - 2.3.1> port 0x1000-0x101f mem 0xd210-0xd211,0xd218-0xd2183fff irq 29 at device 0.3 on pci2
igb3: Using MSIX interrupts with 5 vectors
igb3: Ethernet address: 00:1e:67:54:9f:d0
igb3: [ITHREAD] (repeated 5 times)
pcib3: <ACPI PCI-PCI bridge> irq 47 at device 2.0 on pci0
pci4: <ACPI PCI bus> on pcib3
pcib4: <ACPI PCI-PCI bridge> irq 47 at device 2.2 on pci0
pci5: <ACPI PCI bus> on pcib4
pcib5: <ACPI PCI-PCI bridge> irq 16 at device 3.0 on pci0
pci6: <ACPI PCI bus> on pcib5
aacu0: Adaptec RAID Controller mem
ZFS + iSCSI architecture
Hello, I am about to start deploying a large system (about 18 TB, which can grow up to 36 TB) based on a big Intel platform with lots of fancy features for a turbo-boosted setup (ZIL on SSD, plus the OS on a dongle if I go for FreeNAS). Since I want to move fast, I might decide to use FreeNAS in its latest version. The idea behind all this is to grant five or six critical servers access to the NAS so that they can take advantage of:

1. the space available on the NAS;
2. the ability of the NAS to use ZFS, and of the clients to use this filesystem (including snapshots);
3. access to the server using iSCSI (at least this is what I initially planned);
4. mounting part of their filesystem from data stored on the SAN (like /usr/local/ or other parts of the system).

The servers accessing the data will be of two types:

1. 2 x Ubuntu Server 10.04 LTS
2. 4 x FreeBSD (mainly 8 and 9) with jails configured

I have started reading about iSCSI and potential problems with FreeBSD. So my main questions would be:

• Should I go for iSCSI?
• Should I rather choose / prefer NFS?
• Should I export a volume as UFS rather than ZFS (is ZFS supported as a target)?

The main idea is stability, redundancy of data, and ease of maintenance (in a headless FreeBSD / Linux world) before anything else! That's the big picture; if you have any pointers or advice, they are all welcome. It is quite late where I live, so I will reply to posts in 8 to 10 hours, but I hope to have enough answers to start an interesting thread (as I think this question is very interesting and not so clearly explained, at least in my mind)… Thanks very much for your info and feedback.
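For concreteness, the NAS-side export in this architecture would be zvol-based. A sketch (pool, volume, and target names are hypothetical; the target daemon shown is istgt, which FreeNAS shipped at the time, and the config fragment is only approximate):

```shell
# Create a 2 TB zvol on the NAS to present as a block device.
zfs create -V 2T tank/vol-mail

# istgt then exports /dev/zvol/tank/vol-mail as a LUN; the relevant
# fragment of /usr/local/etc/istgt/istgt.conf looks roughly like:
#   [LogicalUnit1]
#     TargetName  mail
#     LUN0 Storage /dev/zvol/tank/vol-mail Auto
```

The initiator then formats that LUN with its own filesystem (UFS, ext4, …); ZFS features such as snapshots and send/receive apply on the NAS side, underneath the exported block device.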
Re: ZFS + iSCSI architecture
On 20 Feb 2013 at 02:14, Fleuriot Damien m...@my.gd wrote:
> On Feb 19, 2013, at 11:20 PM, b...@todoo.biz wrote:
>> Hello, I am about to start deploying a large system (about 18 TB, which can grow up to 36 TB) based on a big Intel platform (ZIL on SSD + system on dongle if I go for FreeNAS). [...] I have started reading about iSCSI and potential problems with FreeBSD.
> What problems do you mean?

For example:
- Can my client (the initiator) directly mount a ZFS volume on FreeBSD using iSCSI, or should I fall back to formatting it as UFS?
- Is the iSCSI stack in FreeBSD stable and mature enough to be used in a production environment? It is out of the question to have kernel panics because of an unstable iSCSI stack.

>> So my main questions would be: • Should I go for iSCSI?
> Well, in all use cases iSCSI should perform faster than NFS.

Fast is good - stable is necessary in this case! And this is what I am trying to evaluate…

>> • Should I rather choose / prefer NFS?
>> • Should I export a volume as UFS rather than ZFS (is ZFS supported as a target)?
> I'm not sure what you mean here. When you export a zvol over iSCSI:
> - your SAN is the target and presents a block device (the zvol)
> - your client is the initiator
> - your client attaches to the iSCSI drive and formats it using filesystem XYZ, be it ext3, UFS or NTFS

Thanks for this reminder about the iSCSI vocabulary; I'll try to stick to it ;-)

>> The main idea is stability, redundancy of data and ease of maintenance (in a headless FreeBSD / Linux world) before anything else!
> iSCSI is a bit harder to set up IMO; however, I think it's more reliable than NFS, what with its automatic retries if it loses the network link to a device.

Have you deployed this in production, and what are your concerns and recommendations?

>> That's the big picture; if you have any pointers or advice, they are all welcome. It is quite late where I live, so I will reply to posts in 8 to 10 hours, but I hope to have enough answers to start an interesting thread (as I think this question is very interesting and not so clearly explained, at least in my mind)…
> This is indeed a very interesting topic and I hope to see more :)
> There is also an interesting (and fresh) post here: http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFreeBSDvsIllumos?showcomments#comments

Thanks very much for your info and feedback.
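On the FreeBSD 8/9 initiator side discussed above, attaching such a LUN goes through the old iscsi_initiator(4) stack. A sketch (the nickname, target address, IQN, and resulting device node are all hypothetical):

```shell
kldload iscsi_initiator

# /etc/iscsi.conf fragment describing the target:
#   mail {
#     targetaddress = 10.0.0.10;
#     targetname    = iqn.2013.example:mail;
#   }

# Attach by nickname; the LUN then appears as a /dev/da* device.
iscontrol -n mail

# From there it is formatted and mounted like any local disk:
newfs -U /dev/da4
mount /dev/da4 /usr/local
```

This illustrates the target/initiator split from the reply: ZFS never crosses the wire; the initiator only ever sees a raw block device.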