Re: [OpenIndiana-discuss] zabbix-agent
From: Pawel Stefanski [mailto:pejo...@gmail.com]
> Here you have the complete instructions:
> https://www.zabbix.com/wiki/howto/install/solaris/opensolaris

I know. I described that as Plan B. See: Plan A is to get it from a standard package repository, and Plan B is to get the Solaris binaries from zabbix.com. If Plan A doesn't exist, that's OK.

I just thought zabbix-agent was in some standard package repository because of Adam Stevko's and Andrzej Szeszo's emails discussing it in the package repo. Andrzej is apparently the maintainer of http://wiki.openindiana.org/oi/Package+Repositories so... that led to a credible belief that I should expect to find the package in there.

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] zabbix-agent
Hi. I'm trying to figure out the easiest way to get zabbix-agent installed on OpenIndiana. The post here suggests that it should already be included in some standard package repository, but I don't see it in dev, sfe, or sfe-encumbered:
http://openindiana.org/pipermail/oi-dev/2012-January/001092.html

Sure, I can build from source, or download Solaris binaries from zabbix.com. I will if I have to. Plan A is to get it from a standard package repository, and Plan B is to get the Solaris binaries from zabbix.com.
[OpenIndiana-discuss] ZFS allowed characters (valid characters)
Is ZFS using Unicode or ASCII? Or something else? Are there disallowed characters? '\0' or '@' or '/' or anything else?

I know these characters generally would be *difficult* to use just because of limitations of your application environment (for example, bash will always parse the '/' as a directory delimiter, but at least in HFS+, that character isn't actually forbidden by the filesystem). So I'm not asking which characters would be *practically* difficult to use, I'm asking which characters are *valid* according to the filesystem itself.
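One way to probe the practical side of this question is a quick sketch like the following: try creating a file whose name contains each candidate character, and see which ones get rejected. Note this tests the OS path layer, not the filesystem's own rules: '\0' can't even be passed through the shell, and '/' is consumed by the VFS path parser before the filesystem ever sees it.

```shell
# Probe which candidate characters survive in a filename (sketch).
dir=$(mktemp -d)
allowed=""; rejected=""
for c in '@' ':' '%' ' ' '/'; do
  if touch "${dir}/a${c}b" 2>/dev/null; then
    allowed="${allowed}${c}"
  else
    rejected="${rejected}${c}"
  fi
done
rm -rf "$dir"
echo "allowed: [$allowed] rejected: [$rejected]"
# On a typical POSIX filesystem only '/' is rejected here.
```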
Re: [OpenIndiana-discuss] ZFS allowed characters (valid characters)
From: Edward Ned Harvey (openindiana) [mailto:openindi...@nedharvey.com]
> Is ZFS using Unicode or ASCII? Or something else? Are there disallowed
> characters? '\0' or '@' or '/' or anything else? [...]

Are the valid characters the same for filesystem names and filenames? (e.g. mountpoints and zvols, versus directories and files)
Re: [OpenIndiana-discuss] Real performance hit raidz2 5 disks vs 6?
From: Hans J Albertsson [mailto:hans.j.alberts...@gmail.com]
> I will use it for media files over CIFS and/or DLNA, almost exclusively.
> Thus it'll be almost consistently write once/read many, and most files
> will be large.

In your case, performance will be almost irrelevant, because even with a single disk you would be able to do over 1Gbit/sec, and with your raidz or raidz2 you'll be able to go much faster. But I bet the network you're connected to is 1Gbit or slower... And even if you're streaming a full BluRay disc or something, you only need about 8Mbit. So your demands will be far lower than even the lowest-performance setup you could imagine creating.
Re: [OpenIndiana-discuss] Is this dell xeon a decent buy
From: Harry Putnam [mailto:rea...@newsguy.com]
> http://www.ebay.com/itm/Dell-Poweredge-6650-Enterprise-Server-2-x-Intel-Xeon-1-5-GHz-8-GB-/261423732486?pt=COMP_EN_Servers&hash=item3cde119706

I didn't actually follow the link, so this might be irrelevant: IMHO, don't use the motherboard-integrated Broadcom ethernet. Use an add-on Intel NIC.
[OpenIndiana-discuss] Testing from myself
hi
Re: [OpenIndiana-discuss] Testing from myself
again

-----Original Message-----
From: Edward Ned Harvey (openindiana) [mailto:openindi...@nedharvey.com]
Sent: Friday, May 23, 2014 8:51 AM
To: 'openindiana-discuss@openindiana.org'
Subject: [OpenIndiana-discuss] Testing from myself

hi
Re: [OpenIndiana-discuss] Critical security issue notification
From: Udo Grabowski (IMK) [mailto:udo.grabow...@kit.edu]
> Moral: Never run a changing system!

Heheh, I hope the irony is intentional. ;-) Like "Never get vaccines, because sometimes vaccines cause problems." ;-) It's true that sometimes updates cause problems, but there are *more* problems without them.

The irony of suggesting that 0.9.8 is better than 1.1.0, if anybody cares, could be easily dismantled by just reading the changelog of the OpenSSL releases:
http://git.openssl.org/gitweb/?p=openssl.git;a=blob_plain;f=CHANGES;hb=HEAD

The latest 0.9.8 is 4 years old. Since then, I see many security vulnerabilities fixed: CVE-2010-3864, CVE-2010-4252, CVE-2010-4180, CVE-2011-0014, etc.

Point is, as soon as any security vulnerability is discovered, it both gets *published* so the world knows about it, and it gets patched. If you don't keep up with patches, you're literally publishing your vulnerabilities to the world, for everyone to see, and then sitting back and neglecting to patch them.
Re: [OpenIndiana-discuss] USB boot Sometimes?
From: Udo Grabowski (IMK) [mailto:udo.grabow...@kit.edu]

Thank you, Udo and Wim. I tried several USB2 thumb drives and USB2 hard drives, and they all worked. I only have one USB3 device, and it doesn't work, even though the machine itself has USB2 and therefore USB3 isn't being used. So I think the most likely explanation is, as Udo said, that the USB3 thumb drive itself doesn't implement USB2 backward compatibility well.

So anybody out there reading this: either get the fastest USB2 device you can find, or get an assortment of USB3 devices so you can hopefully find one that's backward compatible with USB2.

The non-bootable USB3 device I have which doesn't work: Mushkin Atom MKNUFDAM32GB
Re: [OpenIndiana-discuss] USB boot Sometimes?
From: w...@vandenberge.us [mailto:w...@vandenberge.us]
> My guess would be that this is due to the lack of USB3 support in
> OpenIndiana. Have you tried plugging the drive into a USB2 port, forcing
> the device into USB2 mode, and seeing if it works? (I know that's not what
> you really want, but it would narrow down the issue.)

Good guess. I checked: although the USB device is USB3, the machine hardware is USB2. So I *think* that rules out the USB3 incompatibility possibility.
[OpenIndiana-discuss] USB boot Sometimes?
I installed OI server to an old slow 4GB USB thumb drive as a test, just to see if it's possible. It worked fine; it's just slow as hell.

So I bought a pair of 32GB fast USB3 devices, and installed OI to one of them... but grub fails. It just boots up to a grub menu and stops there. So I thought maybe I need to upgrade my BIOS... upgraded, no effect. So I thought maybe there's a device size limit that I'm exceeding... so I hooked up an external 80GB USB hard drive. I installed to the 80GB drive and it booted just fine.

Something smaller works. Something bigger works. The only thing that *doesn't* work is the thing I care about. Can anybody shed any light?
Re: [OpenIndiana-discuss] How to let find 'see' .zfs directories
From: Timothy Coalson [mailto:tsc...@mst.edu]
> You could instead test for the existence of the .zfs directory in all
> folders, with some kind of find . -type d -exec 'test -d {}/.zfs'

This is what zhist does under the hood. It's not the same as 'find', but useful in a lot of cases.
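The quoted one-liner won't run as written: the -exec argument is quoted as a single word, the terminating \; is missing, and a {} embedded inside a longer string is not portably expanded. A working sketch of the same idea:

```shell
# Print every directory that has a .zfs snapshot directory under it.
# Hand the path to a small sh -c wrapper so {} stands alone as an argument.
find . -type d -exec sh -c 'test -d "$1/.zfs"' _ {} \; -print
```

The `-print` only fires for directories where the `-exec` test succeeded, so this lists exactly the directories containing a .zfs entry.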
Re: [OpenIndiana-discuss] NFS
From: Ryan John [mailto:john.r...@bsse.ethz.ch]
> Being reliant on NFS myself, I decided to test this. I just updated one
> test machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9. Both
> machines give the same results, i.e. NFSv3 works okay. My clients are
> RedHat EL6.

I wonder what version of NFS ships with RHEL/CentOS 6? If you don't post back, I'll look it up later today and post. It seems also that I'll have to start questioning whether there's simply something wrong with my *individual* system, rather than the distro. In other words: verify the checksum of the installation media, repeat on another system...

Also, I wonder if there's a difference between your system and mine resulting from the fact that you did an upgrade, whereas I installed fresh. Are you using OI desktop, or OI server? I'm using OI server 151a9... and I'm using OI desktop 151a7.

> OpenIndiana (powered by illumos)  SunOS 5.11  oi_151a9  November 2013
> root@openindiana:~# zfs create dataPool/nfstest
> root@openindiana:~# zfs set sharenfs=on dataPool/nfstest
> root@openindiana:~# sharectl set -p server_versmax=3 nfs

Did you remember to restart the NFS server after making that change?
Re: [OpenIndiana-discuss] NFS
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
Sent: Thursday, January 30, 2014 4:06 AM
> If a share was mounted on the client and you change the underlying NFS
> version on the server, then you will need to get the client to unmount all
> shares from the server before they can see the version 3 shares... is this
> the case in your instance? Are your shares auto-mounted? If so, it depends
> on which system you're using, but it might be quicker to reboot... :(

For now, I'm just trying to make it work. Later I can make it automount or whatever, so at present it goes like this:

(On server)
sudo zfs set sharenfs=rw=@10.10.10.14/32,root=@10.10.10.14/32 storage/ad1

(On Ubuntu 10.04 client)
root@orion:~# mount -v -t nfs storage1:/storage/ad1 /mnt
mount.nfs: timeout set for Thu Jan 30 09:01:24 2014
mount.nfs: text-based options: 'addr=10.10.10.13'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed

Since that didn't work, I tried with an ESXi 5.5 client, repeating while eliminating the @ symbols, eliminating the /32, and putting them back in there... Set the versions as follows:

sudo sharectl set -p server_versmax=3 nfs
sudo svcadm refresh svc:/network/nfs/server:default

Retried all the variations of set sharenfs, and repeated trying to mount... Still nothing works...

I wondered if maybe I have a firewall enabled on the server. So I used nc and telnet from the client to confirm the ports are open (111 and 2049). No problem.

The only thing that *does* work: when I have the 151a7 box mount the 151a9 box using NFSv4, it works. But if I reduce the server and client both to v3, then even THEY fail to mount.
Re: [OpenIndiana-discuss] NFS
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
> to test if it is a permissions problem, can you just set sharenfs=on? and
> then try to access from the other machines?

Thanks for the help everyone. I decided to take it a step further than that:

On both the 151a7 (homer) and 151a9 (marge) machines:
sudo sharectl set -p client_versmax=4 nfs
sudo sharectl set -p server_versmax=4 nfs
sudo svcadm restart nfs/server

On 151a9 (marge):
sudo zfs create storage/NfsExport
sudo zfs set sharenfs=on storage/NfsExport

On 151a7 (homer):
sudo zfs create storagepool/NfsExport
sudo zfs set sharenfs=on storagepool/NfsExport

Now, from Ubuntu 10.04, try mounting them both (as root):
mkdir /mnt/151a7
mkdir /mnt/151a9

mount -v -t nfs marge:/storage/NfsExport /mnt/151a9
mount.nfs: timeout set for Thu Jan 30 16:00:50 2014
mount.nfs: text-based options: 'addr=10.10.10.13'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed

mount -v -t nfs homer:/storagepool/NfsExport /mnt/151a7
mount.nfs: timeout set for Thu Jan 30 16:01:29 2014
mount.nfs: text-based options: 'addr=10.10.10.242'
homer:/storagepool/NfsExport on /mnt/151a7 type nfs (rw)

(Notice, it worked for 151a7, and failed for 151a9.)

Now I'll have the two OI machines mount each other, which should pretty well answer any questions about firewall and RPC ports, etc.

(151a7 machine mounting 151a9) (Success)
sudo mount -F nfs marge:/storage/NfsExport /mnt
eharvey@homer:~$ df -h /mnt
Filesystem                Size  Used  Avail  Use%  Mounted on
marge:/storage/NfsExport   11T   31K    11T    1%  /mnt

(151a9 machine mounting 151a7) (Success)
sudo mount -F nfs homer:/storagepool/NfsExport /mnt
eharvey@marge:~$ df -h /mnt
Filesystem                    Size  Used  Avail  Use%  Mounted on
homer:/storagepool/NfsExport  4.3T   44K   4.3T    1%  /mnt

Now dismount them all, on all machines. Reduce the versions to 3 (on both 151a7 and 151a9):
sudo sharectl set -p client_versmax=3 nfs
sudo sharectl set -p server_versmax=3 nfs
sudo svcadm restart nfs/server

And try again. Attempt to mount again from the Ubuntu client: once again, 151a7 works and 151a9 fails. Attempt to mutually mount 151a7 to 151a9 and vice-versa...

151a7 cannot mount 151a9:
sudo mount -F nfs marge:/storage/NfsExport /mnt
nfs mount: marge: : RPC: Rpcbind failure - RPC: Authentication error
nfs mount: retrying: /mnt
nfs mount: marge: : RPC: Rpcbind failure - RPC: Authentication error

151a9 mounts 151a7 just fine:
sudo mount -F nfs homer:/storagepool/NfsExport /mnt
eharvey@marge:~$ df -h /mnt
Filesystem                    Size  Used  Avail  Use%  Mounted on
homer:/storagepool/NfsExport  4.3T   44K   4.3T    1%  /mnt
Re: [OpenIndiana-discuss] NFS
From: Edward Ned Harvey (openindiana)
> It *appears* that NFSv4 is fine in both 151a7 and 151a9. It *appears*
> that NFSv3 is broken in 151a9.

Which is unfortunate, because NFSv3 is necessary to support the ESXi client and the Ubuntu 10.04 client.
[OpenIndiana-discuss] NFS
At home, I have oi_151a7 and ESXi 5.1. I wrote down precisely how to share NFS, and mount from the ESXi machine:

sudo zfs set sharenfs=rw=@192.168.5.5/32,root=@192.168.5.5/32 mypool/somefilesystem

I recall it was a pain to get the syntax correct, especially thanks to some inaccuracy in the man page. But I got it.

Now at work I have oi_151a9 and ESXi 5.5. I also have oi_151a7, some Ubuntu 12.04 servers, and CentOS 5 and 6 servers. On the oi_151a9 machine, I run the precise sharenfs command above. Then the oi_151a7 machine can mount, but the CentOS, Ubuntu, and ESXi clients all fail to mount. So I think, ah-hah! That sounds like an NFS v3 conflict with v4! So then I do this:

sudo sharectl set -p client_versmax=3 nfs
sudo sharectl set -p server_versmax=3 nfs
sudo svcadm refresh svc:/network/nfs/server:default

Now *none* of the clients can mount. So I put it back to 4, and the OpenIndiana client can mount again. Is NFSv3 simply broken in the latest OI?

When I give the -v option to mount, I get nothing useful. There is also nothing useful in the nfsd logs. The only thing I have left to test: I could try sharing NFS from the *old* OI server, and see if the new ESXi is able to mount it. If ESXi is able to mount the old one, and not able to mount the new one, that would be a pretty solid indicator v3 is broken in 151_a9.
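For anyone retracing these steps, the server-side sequence can be consolidated as below. This is a sketch, not verified on 151a9; the pool name and network are the placeholders used above, and the choice of `svcadm restart` rather than `refresh` is an assumption: a plain refresh after changing server_versmax may not be enough (later in this thread, a restart is what gets used).

```shell
# Consolidated server-side NFSv3 setup (OI/illumos only; sketch).
zfs set sharenfs='rw=@192.168.5.5/32,root=@192.168.5.5/32' mypool/somefilesystem
sharectl set -p server_versmax=3 nfs
svcadm restart svc:/network/nfs/server:default   # restart, not just refresh
sharectl get nfs                                 # confirm the version limits took
```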
Re: [OpenIndiana-discuss] Expanding storage with JBOD
From: Dave Pooser [mailto:dave...@pooserville.com]
Sent: Tuesday, January 7, 2014 12:22 PM
> Sans Digital makes one: their part number TR4M6GNC, selling at NewEgg for
> $134.99. I'd never use one myself, because they seem to be a recipe for
> weird flakiness and I/O hangs, and ZFS does not deal well with SATA
> kinda-sorta-maybe failure modes. But they exist.

Thanks for pointing that out. Interestingly, the product I was looking at was the Sans Digital ES104T, which is just a chassis, a power supply, and four eSATA pass-through cables going to the individual drives. I know ZFS works fine with SATA in general, as long as there isn't some junky SATA chip or driver in the mix. So the flakiness you described might be on a different product, or it might be introduced by the SATA multiplier in that system. The aforementioned ES104T I would have absolute faith in, because it's literally brainless ;-) and hence comes with the disadvantage of requiring the 4 separate cables.
Re: [OpenIndiana-discuss] Expanding storage with JBOD
From: Hans J Albertsson [mailto:hans.j.alberts...@branneriet.se]
Sent: Tuesday, January 7, 2014 5:01 AM
> I thought that might be what you were talking about, but I wanted to be
> sure. Shelf, fix icy box and PS, short pins by clips-and-wire and add some
> isolation for safe op? Can you suggest a cheap and commonly available SATA
> PCI card that can work in an X7BSi mb based computer?

This has been a long thread, and it has confused me a bit. Can you remind us what you're trying to do? Since I'm currently doing this, and learned the hard way, I'd like to inform you:

Suppose you have a computer chassis with enough internal SATA ports, but not enough internal space to hold drives. So you want to add external drives. So you dangle a SATA cable out of the computer, and you want to power an external drive using an external ATX power supply. (This is what I did.) Then you short the pins, as Saso suggested, in order to get the ATX power supply to turn on. BUT you MUST connect the external ATX ground to the computer ground. Simply sharing the common ground pin on your wall outlet isn't enough. (I fried two of my hard drives. That's how I learned the hard way.)

The easiest way is to simply take out one of the screws of the ATX power supply, put an eyelet-tipped wire there (with the screw put back in, obviously), and connect the other end of the wire to the computer chassis. Just wire the two chassis together.

If you look at an eSATA cable, it's exactly the same as a SATA cable, except that it includes an external ground connector. If you buy something like a Sans Digital (I like the brand) external enclosure connected via eSATA, then the eSATA cable guarantees the external chassis and internal chassis will share a common ground.

If all you want to add is a couple (up to maybe 4) drives, then SATA works fine. Beyond that point, SAS becomes the obviously better solution, because with SATA you have to run a separate cable for every drive, while with SAS you get 24Gbit in a single cable (4 channels of shared 6Gbit).
Re: [OpenIndiana-discuss] Expanding storage with JBOD
From: Saso Kiselkov [mailto:skiselkov...@gmail.com]
> Nope, SATA does have port multipliers, though I agree that beyond a
> certain point it becomes a mess.

Now that you mention it, when I look around, everything I find called a SATA port multiplier is some sort of add-on card that takes an upstream SATA port and gives you several downstream SATA ports. I don't see the point. If you have room for an add-on card, I would think it would make more sense to just add another SATA adapter to the PCIe bus.

The main point is, a port multiplier might be useful if it's *outside* your system. Supposing you have an external enclosure that holds 4 drives or something like that... You might say, "I have a SATA 6Gbit bus, which is good enough for the 4 drives in that enclosure." And I would tend to agree: if they let you run a single 6Gbit SATA cable to the enclosure, and internally they used a SATA port multiplier, that would be pretty nice. But when I look around, I don't see any 4-drive enclosures that use a single upstream 6Gbit eSATA bus... I see enclosures with 4 bays and 4 eSATA ports, one for each drive.

By comparison, the same exact class of products certainly exists with 4 drives (or even 16 or 24) on a single SAS (Mini-SAS, SFF-8088) cable. Internally, those enclosures have a SAS expander to fan out to the drives.
Re: [OpenIndiana-discuss] Expanding storage with JBOD
From: Roman Naumenko [mailto:ro...@naumenko.ca]
> I don't know if even 2TB will fill fast enough to justify any investment
> into storage expansion.

I don't get that comment.

> Speaking about storage expansion, even HBA cards are dirt cheap, but
> pricing on enclosures with integrated SAS expander is just nuts. Can't
> figure out how to add 8-12 disks externally to the storage server without
> paying 2x the cost of the server itself.

Depends how much you paid for the server itself. ;-) But that's kind of irrelevant. Often, the storage *is* more expensive than the rest of the server. It depends on how much storage you're adding: there's a cost for the HBA, a cost for the drive bay, and then a cost for the disks, multiplied by the number of disks. So the disks add up quickly. I'm a fan of Sans Digital products for the price. And I'm a fan of using the disks that they recommend.
Re: [OpenIndiana-discuss] ZFS keeps finding errors
-----Original Message-----
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Saturday, December 21, 2013 10:13 AM

Recycle.
Re: [OpenIndiana-discuss] real-time syncing two directories across servers (unison ?)
From: Gregory Youngblood [mailto:greg...@youngblood.me]
> Check out ownCloud. The open source components might be useful.

I personally, and two other IT guys I've spoken with from different companies, have been burned by placing any trust in ownCloud. In fact, I'm still subscribed to their mailing list, and at least once a day new people write in asking for help, because it either fails to sync files it should sync, or some data seems to be lost. My advice: steer clear.

In the Microsoft world, what you're looking for is called DFS. I don't know of any good Linux/Unix-based bidirectional tools such as requested, but if you start searching for alternatives to DFS (or if you simply pay MS and deploy DFS), then you have an answer, and hopefully can find some FOSS solution out there.
Re: [OpenIndiana-discuss] 10GigE vs Infiniband vs SCSI Target ...
No responses? Anybody?

-----Original Message-----
From: Edward Ned Harvey (openindiana) [mailto:openindi...@nedharvey.com]
Sent: Monday, November 18, 2013 7:35 AM
To: 'openindiana-discuss@openindiana.org'
Subject: [OpenIndiana-discuss] 10GigE vs Infiniband vs SCSI Target ...
[original message quoted in full; trimmed here]
Re: [OpenIndiana-discuss] 10GigE vs Infiniband vs SCSI Target ...
From: jason matthews [mailto:ja...@broken.net]
> does that help?

Thank you; what I was looking for was: I want to connect the VMware servers to the OpenIndiana server using SAS hardware. Beat the performance of Ethernet, and not as expensive (or as difficult) as Infiniband. Let the OpenIndiana server present a zvol (or whatever) as a SCSI target on the SAS bus, so as far as VMware can tell, there's just a hard disk on the other end of this SAS cable. VMware would have no idea it was actually a ZFS volume or anything.

I think Saso answered it:

> There was a SAS target driver for some LSI 1068-based chips in the old
> 2009-era OpenSolaris days, but ultimately that didn't lead anywhere and it
> fell by the wayside. And nobody appears to be working on a SAS target
> driver ATM, as far as I can tell.
[OpenIndiana-discuss] 10GigE vs Infiniband vs SCSI Target ...
ZFS is great for managing backend storage in a SAN environment. So then you're likely to use 10GigE or Infiniband as the transport... I only recently discovered SAS SFF-8088: it gives you 4x 6Gbit lanes, yielding 24Gbit with very low overhead, at low cost. A lot of performance for the buck.

I also recently discovered Linux has something called SCST, a driver of sorts that turns some Linux HBAs into a SCSI target. Does OpenIndiana have something similar? It would sure beat the pants off 10GigE, and while Infiniband would still be faster, it would be very useful to do the SCSI target thing for smaller systems...
Re: [OpenIndiana-discuss] [zfs] problem on my zpool
From: Clement BRIZARD [mailto:clem...@brizou.fr]
>   NAME                         STATE     READ WRITE CKSUM
>   nas                          UNAVAIL     63     2     0  insufficient replicas
>     raidz1-0                   DEGRADED     0     0     0
>       c8t50024E9004993E6Ed0p0  ONLINE       0     0     0
>       c8t50024E92062E7524d0    ONLINE       0     0     0
>       c8t50024E900495BE84d0p0  ONLINE       0     0     0
>       c8t50014EE25A5EEC23d0p0  ONLINE       0     0     0
>       c8t50024E9003F03980d0p0  ONLINE       0     0     1  (repairing)
>       c8t50014EE2B0D3EFC8d0    ONLINE       0     0     0
>       c8t50014EE6561DDB4Cd0p0  DEGRADED     0     0   211  too many errors (repairing)
>       c8t50024E9003F03A09d0p0  ONLINE       0     0    18  (repairing)
>     raidz1-1                   UNAVAIL    131     9     0  insufficient replicas
>       c50t8d0                  REMOVED      0     0     0  (repairing)
>       c2d0                     ONLINE       0     0     0  (repairing)
>       c1d0                     ONLINE       0     0     0  (repairing)
>       c50t11d0                 ONLINE       0     0     0  (repairing)
>       c50t10d0                 REMOVED      0     0     0

Something bad happened. You've had more failures than the redundancy level protects against, which means some data is lost. Sometimes other people here chime in with zdb debugging information and other stuff that helps you recover gracefully, but the first and most obvious question is backups. Do you have backups you could restore from?

Also, it's not clear what the cause of failure is. It could be two disks coincidentally going bad at the same time. It could be controller failure, or a CPU or memory error. It could be that you're running on unsupported hardware with some kind of driver or firmware bug. Even if you can restore, will the problem soon come back? Not clear, based solely on the info you've provided...
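To start narrowing down the cause question, a few standard illumos-side checks can help distinguish disk failure from controller, transport, or memory trouble. This is a hedged sketch, not part of the original thread; the pool name is taken from the listing above.

```shell
# Diagnostic sketch (illumos/OI; read-only, safe to run on a sick pool).
zpool status -v nas    # lists any files already known to be damaged
fmdump -eV | tail -60  # recent fault-management error telemetry:
                       # distinguishes device, transport, and checksum events
iostat -En             # per-device hard/soft/transport error counters
```

If the errors cluster on one controller or cable path rather than on individual disks, that points away from "two disks died at once."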
Re: [OpenIndiana-discuss] Good enterprise hardware
From: Mark Creamer [mailto:white...@gmail.com]
Sent: Sunday, October 20, 2013 12:19 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Good enterprise hardware
> For whatever it's worth, I've been buying SM servers for the last couple
> years. I had a motherboard failure and they had a replacement to me next
> morning. I didn't have to ship anything except the defective board once I
> replaced it. Mark

Dang, this is a recent conversation between me and the would-be salesperson at Silicon Mechanics:

He said: "3yr 24x7 4hr same-day on-site warranty: We don't recommend this option; the 4-hour response is meaningless unless you have spare parts on hand. We recommend Next Business Day with a spares kit. The tech will not come on site unless you have replacement parts on hand. If you are able to replace a mem stick, HDD, or add-on card, you should buy just a spares kit and do it yourself. If it's a motherboard or backplane failure, most likely the box will come back to us. Our warranty is 3 years return-to-depot/advance parts replacement when available."

I said: "Shoot. Normally, 4hr same-day onsite warranty means the vendor keeps parts available, and will dispatch a technician with parts, same day, for any and all types of failures: motherboard, CPU, power supply, whatever. This is critical for business-critical servers. Never send back to a depot. Business can't afford the risk of downtime stretching on for days or weeks. Nor can they afford to simply buy two of everything."

He said: "Thanks for the refresher in services. I do this for a living and deploy hardware to some of the major data centers in the U.S. and all over the world, for some major corporations that have found value in doing business with us and use the support services that I offered you. The reality is that hardware fails, and we design that hardware to be as redundant as your application requires. I do recall that the first solution I submitted had redundancy in mind, but you did not see the value in that." (I don't know why he said that, because it's not correct.) "We and most of the white-box integrators offer support through 3rd-party services. It would be impossible for them to show up with parts for every system out there; take a look at the Supermicro web offering and you might understand. The cost of a drive or a stick of memory over, let's say, 4 years is minimal. You can buy lots of spare parts for the cost of the support level you want. By the way, return to depot with advance replacement is a pretty good thing. My suggestion to you is to go with one of the boutique vendors that are more in line with your needs, such as HP, IBM, etc."

I didn't bother replying to the above, but the systems he's quoting me are on par (slightly cheaper, but not dramatically) with the Oracle systems that include that level of service. So the idea of buying lots of spare parts instead of enterprise support might hold true if you're buying much cheaper stuff, but in the present situation, I'm not seeing it.

If he had focused on advance replacement, as you mentioned above, Mark, then maybe there would be a good solution here. If advance replacement means they ship you replacement parts super fast, like within 4 hours or even next day (including non-business days), then there might be an acceptable level of support here. But I didn't even get to the comment about "advance replacement is a pretty good thing" before I had already made up my mind, based on attitude and lack of support...
[OpenIndiana-discuss] Good enterprise hardware
I'm planning to build a ZFS storage server in the datacenter: mission-critical storage for virtualization, requiring hardware support, 24/7, 4hr, same-day onsite. I thought I was going with Silicon Mechanics, but just learned that their 24/7 4hr same-day service is pointless, because they don't get parts to you. You have to buy spare parts including memory and whatever else, but if something like a motherboard or CPU dies, you still have to ship it back to the depot. My next thought is, obviously I can get some good hardware with good support from Oracle. But I'd like to know what other alternatives there are. Recommendations?
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Aneurin Price [mailto:aneurin.pr...@gmail.com] Is this in IDE emulation mode, perchance? If your controller acts this way in AHCI mode, it's defective and should be replaced.

Alright, I'll try again. Give me a couple of days to respond, as this is what I do for offsite data rotation. I would never intentionally use IDE mode; I would always use AHCI mode when/if available.
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Joshua M. Clulow [mailto:j...@sysmgr.org] This is emphatically false. Though the pestilence of cfgadm(1M) and the idea that device replacement is somehow advantageously manual had persisted inside Sun's walls until its untimely demise, SunOS itself is certainly capable of hot-plugging devices. If you have a SAS controller attached via the mpt_sas driver, for instance, you can absolutely expect hot-plugged disks to go away and return automatically. If not, then you are experiencing bugs in the system and should report them as such so that we can fix them. SATA disks attached to SATA controllers supported through the sata framework are something of a special case. There is a tuneable that prevents SATA hot plug from working, which is unfortunately enabled by default -- again due to history and the misguided manual configuration crowd. You can enable it in /etc/system, though. The variable in question is sata_auto_online, visible (with comment) here:

The system I just tested on is a Dell Precision workstation, and it uses sata.so.1. I mostly mention that because others here have mentioned assuming the use of mpt_sas, which is different, and uses some variation of -x remove_device instead of -c disconnect.

The following procedure worked for me:

export EXTERNALDISK=c3t4d0
export SATAPORT=`sudo cfgadm -al | grep $EXTERNALDISK | sed 's/:.*//'`
echo EXTERNALDISK is: $EXTERNALDISK
echo SATAPORT is: $SATAPORT
# Result:
# EXTERNALDISK is: c3t4d0
# SATAPORT is: sata0/4

sudo cfgadm -y -c unconfigure $SATAPORT
sudo cfgadm -y -c disconnect $SATAPORT

Now it is safe to disconnect your drive and connect a new drive in its place. After the new drive is connected, do this:

sudo cfgadm -y -c connect $SATAPORT
sudo cfgadm -y -c configure $SATAPORT
sudo devfsadm -Cv

As Josh describes, perhaps the unconfigure/disconnect/connect/configure can be skipped by tuning sata_auto_online. But as far as I'm concerned, the present result is good enough.
I don't care to bother testing sata_auto_online.
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Christopher Chan [mailto:christopher.c...@bradbury.edu.hk] Sent: Tuesday, October 08, 2013 8:42 PM Er... isn't hotswap capability PART of the spec, whether the drives are SAS or SATA? I can do this on a cheap desktop motherboard but you cannot on a server board without getting an HBA?

Maybe you're using a different BIOS, or maybe you're hotswapping like-for-like drives that coincidentally work in your situation, or something, but... In BIOS, I have the option to enable/disable SATA ports 0, 1, 2, 3. If a port is enabled and nothing is connected, it throws an error during POST. If something is connected, it's identified as a 1TB or whatever drive, and presented to the OS as c1t1d1 or whatever. Later, if I disconnect that drive while the OS is running, the OS still thinks c1t1d1 exists, but if I try to access it, I'll get an IO error. If it were truly hot plug, then the OS should get a drive-disconnected signal, and c1t1d1 should not exist anymore, as is the case with a USB or FireWire drive. And if I stick a different drive on that line, it *may* behave correctly, but I won't trust it. The chances of correct behavior are improved if you're swapping the drive for another of the same model, but I seem to recall bad behavior when swapping a 1TB drive for a 2TB or 500GB one. If it works for you, I say great. I only feel comfortable using something that truly supports hotplug, or rebooting.

PS. I accidentally fried a Mac laptop SATA hard drive once, when I thought the computer was off (it was actually asleep). I was transplanting the hard drive from one laptop to another, due to a broken screen. The moment I pulled the SATA hard drive out of the old laptop, while it was still powered on, that hard drive was fried and never again functional. It wouldn't spin up anymore (or if it spun up, it was never recognized as a drive again). Back at that time, I looked it up, and found, as you said, that hot plug is supposedly incorporated into the SATA spec. But it's poorly implemented, rarely tested, and not to be relied upon, unless you have another layer beneath it (such as a truly hotplug-capable HBA) which will make up for the deficiencies of SATA hotplug. Basically, if your product was *intended* to be used for hotplug, and it advertises itself as hotplug, then it's hotplug. But if you're just blindly assuming all motherboard built-in SATA adapters support it... you're taking your data into your own hands.
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Stefan Müller-Wilken [mailto:stefan.mueller-wil...@acando.de] Sent: Wednesday, October 09, 2013 3:34 AM ... so just to make sure I got that right: for the T3-1 you'd suggest to dump the PCIe HBA and go with the on-board SAS controller - even for production use? And for the DL380, where this is no option, place each drive in a single-drive logical volume? And then, in both cases, build RAIDZ pools on top of that? Anyone have a clue why our local Oracle dealer's tech support didn't even mention this option?

I think there's really only one answer for you: you have to understand the limitations of each option, and make up your own mind based on what you care about. You should definitely find a solution that handles the RAID in ZFS instead of in hardware, due to the data-integrity capabilities of ZFS. If you are forced to use a single-volume RAID-1 or RAID-0 in hardware, be aware that your disks are likely not portable to another system, unless the other system has the same type of HBA. For this reason, if your HBA supports JBOD, then JBOD is preferred. With or without an HBA, consider the hotplug capabilities of your system, and the red-blinking-light capabilities.
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Edward Ned Harvey (openindiana) If it were truly hot plug, then the OS should get a drive disconnected signal, and c1t1d1 should not exist anymore. As is the case with a USB drive or firewire. (or hotplug capable HBA)
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Stefan Müller-Wilken [mailto:stefan.mueller-wil...@acando.de] Question now: what would you recommend? 8 LSI RAID-0 LVs under ZFS, 8 drives under LSI RAID-5, or switch to an Oracle-certified controller with JBOD mode (which one??)? Does it make sense to go for soft RAID anyway, with that configuration?

As you said, JBOD is best. But if you're running on HBAs that don't support JBOD, you need to use a bunch of RAID-0 volumes with single disks in them. You said something about not being able to do failover, hotspare, or something, with the RAID-0 volumes. I suggest you look at that again: there is nothing preventing you from failing over to hotspares etc., just because you're using RAID-0 on your HBA. If you can get a controller that supports JBOD, then great. The main advantage of that is portability of those drives to other systems. When an HBA requires making drives into RAID-0 volumes, the HBA necessarily stores metadata in something similar to a partition scheme, before presenting the remainder of the disk to the OS. This has the side effect of making the drives unusable if you happen to require taking them out and attaching them to a different system (in the event of an HBA failure, for example). The JBOD vs RAID-0 advantage is both simplicity of management and better portability. But if you have to live with RAID-0, it's completely possible. JBOD or RAID-0 with ZFS... both ways are *far* better than hardware-controlled RAID-5.
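For illustration, here is how the two cases look from the ZFS side. This is a sketch, not output from a real system; the pool name and device names (c0t0d0 etc.) are hypothetical:

```shell
# JBOD: ZFS gets the raw disks, and the pool imports on any system
# that can see the drives:
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Single-disk RAID-0 volumes: the zpool commands look identical, but
# each "disk" the OS sees carries the HBA's own metadata, so the pool
# can generally only be imported behind the same family of controller.
```

Either way, ZFS owns the redundancy and checksumming; the difference only shows up when the drives have to move to another machine.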
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Peter Tribble [mailto:peter.trib...@gmail.com] This is where I get a little confused. Why an extra HBA at all - what's wrong with using the SAS ports on the system board?

My usual reason for using an HBA is for hot plug, and red blinking lights on failed drives. If you have degraded redundancy, the last thing you want is to accidentally remove the wrong drive. I find, with the motherboard's built-in SAS controller, you usually need to power off in order to swap a drive, and you need to devfsadm -Cv after coming up, in order to make the new drive available.
Re: [OpenIndiana-discuss] Opinions on LSI RAID vs. ZFS
From: Saso Kiselkov [mailto:skiselkov...@gmail.com] I find, with the motherboard built-in sas controller, you usually need to power off in order to swap a drive, and you need to devfsadm -Cv after coming up, in order to make the new drive available. What kind of a messed-up SAS controller would that be? I've yet to see this happen with any on-board SAS controller (in fact, all on-board SAS controllers that I've encountered so far were LSI chips, just placed on the motherboard instead of on a plug-in board).

The on-board SAS controller isn't externally facing, and isn't expecting hotswap drives. Drive detection is performed by the BIOS during POST. Dell OptiPlex, Precision, and PowerEdge - I have all three sitting in my basement right now, with SATA cables dangling out the chassis through holes I cut, in order to attach removable disks externally. It all works, but I have to power off the host while connecting/disconnecting drives. The PowerEdge server also has an HBA (basic SAS) with hotswappable drives on the front. These do not require any poweroff to swap the drives, as the name suggests.
Re: [OpenIndiana-discuss] VMware
From: James Relph [mailto:ja...@themacplace.co.uk] Sent: Monday, August 12, 2013 4:47 PM No, we're not getting any ping loss, that's the thing. The network looks entirely faultless. We've run pings for 24 hours with no ping loss.

Yeah, I could have sworn you said you had ping loss before - but if not, I don't think ping alone is sufficient. You have to find the error counters on the LACP interfaces. Everybody everywhere seems to blindly assume LACP works reliably, but to me, simply saying the term LACP is a red flag. It's extremely temperamental, and the resultant behavior is exactly as you've described.
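On illumos/OpenIndiana, the counters in question can be inspected with dladm and kstat. A sketch, assuming a hypothetical aggregation named aggr0 (exact flag support varies by release, so check the dladm man page on your system):

```shell
dladm show-aggr -L          # per-port LACP protocol state
dladm show-aggr -x          # per-port speed/duplex/attach state
dladm show-link -s aggr0    # cumulative packet and error statistics
kstat -p | grep -i aggr0    # raw counters; look at ierrors/oerrors
```

Non-zero, growing error counters on individual ports are the kind of fault that a plain ping test across the aggregation can easily miss.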
Re: [OpenIndiana-discuss] Using OI and zfs from a windows machine
From: Robbie Crash [mailto:sardonic.smi...@gmail.com] I always see this bandied about. Following the Oracle documentation on how to join OI to a domain for the built-in CIFS serving has worked for me, flawlessly, on 10 different OI installations. Every time I hear about people with issues with it, they're always using Samba. What benefit does using an additional module have over the built-in CIFS server?

I wasted ages following the Oracle documentation, configuring sharesmb, getting crap for results. Adding a little more info: one of the applications I support is Acronis TrueImage, Ghost-like whole-system backup software you run inside Windows. When using sharesmb, I was able to access the server just fine to create backups, but that's only half the requirement. You need to also be able to boot from the rescue media, which is a DOS-like environment, and then *read* the backup. For this, sharesmb was simply failing: not making the server visible on the network. If you don't care about that, there's a good chance your sharesmb experience would be better than mine.

Is it just that people want to use smb.conf instead of managing shares through zfs set sharesmb? That wasn't a goal, but it turned out to be a positive side effect. I found that if I were to manage permissions via Solaris ACLs, the implementation is completely different from anything else - so I would have to learn yet another platform-specific idiosyncrasy. Which I would be willing to do, but don't have any specific desire to do. When I finally gave up and went with Samba, I was happy that I didn't need to learn anything. I seem to recall there is a simple option to turn off ACLs and use POSIX permissions instead, which is significantly simpler if you don't need ACL granularity.
Re: [OpenIndiana-discuss] Using OI and zfs from a windows machine
From: Harry Putnam [mailto:rea...@newsguy.com] When I did run OI, I had recurring problems with various permissions-type problems when accessing the zfs server from windows.

Technically supported. I personally wouldn't trust it... and your complaint adds validation to my superstition. I personally configure Samba not to join any Windows domain: just local user accounts inside the Samba box, managing all the perms with chown/chmod. Keep it simple and well within the beaten path.

Working from windows across a gigabit network on a zfs server. Didn't seem to work well... I'm talking about running adobe tools on files from windows when the files are on a zfs server across the network. The problems I remember best were sloth and freezeups on the windows machine. So, all I really want to know right now is if that should be entirely possible... and if there are users here who do that daily who can vouch for it.

I do it regularly. The trick is configuring Samba. I found the built-in CIFS server to be crap, and switched to the actual Samba service. The reason it's tricky to configure is because OI doesn't come with any sample config files, and try as I might, I simply can't get SWAT to run on OI. So I created a Linux machine, ran SWAT on the Linux machine, and then copied over the smb.conf file and destroyed the Linux machine. I'll happily provide my documentation on how I configured my server, if desired.

Or maybe gigabit just isn't a big enough pipe for heavy graphics work? With a 1Gbit link, you should get performance very comparable to a single locally attached disk.
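For reference, a minimal smb.conf along the lines described (local accounts, no domain join) might look roughly like this. The share name, path, and user are hypothetical, and option support should be checked against your Samba version:

```
[global]
    workgroup = WORKGROUP
    security = user           ; local user accounts only, no domain join
    ; nt acl support = no     ; optionally fall back to plain POSIX perms

[tank]
    path = /tank/share
    valid users = myuser
    read only = no
```

With this style of setup, permissions are managed entirely with chown/chmod on the server side, which keeps things on the well-beaten path described above.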
Re: [OpenIndiana-discuss] running VirtualBox headless
From: Jan Owoc [mailto:jso...@gmail.com] 1) in the case of synchronous=off, I had an order-of-magnitude speed increase for writes (i.e. during the install :-) ). Are there general guidelines for what kinds of workloads are safe to have the ZIL disabled?

You mean sync=disabled. ;-) It goes like this: ZFS will aggregate writes into transactions, which will then be atomically flushed to disk. So under no circumstance does corrupt data get written to disk, nor in the wrong temporal order. In the event of an ungraceful system interruption (kernel panic, power outage, etc.) the on-disk data is always self-consistent: a snapshot of data that at some point was valid. The risk is this: in order to aggregate those transactions, there's a period of time (5 sec) where data might be at risk because it exists solely in RAM and has yet to be flushed to disk. As long as you can accept that risk, you are safe to run with sync=disabled. But it's not always easy to determine if you can safely accept that risk. Read on:

Generally, if your system is a standalone system that doesn't have any stateful clients watching it, then you're safe to disable sync. But, for example, if you have a bunch of NFS clients and your ZFS server is an NFS server... in the event your NFS server crashes and rewinds 5 sec, your NFS clients will all remember what they were doing at the time of the crash, and you'll have an inconsistent state between your server and clients. You can remedy this situation by restarting all the NFS clients at the same time you restart the NFS server. The point is: you need to first ask yourself if you can accept 5 sec of potential data loss in the event of a crash, and secondly ask yourself what services are being provided by the server, and whether there is any stateful client that will notice or care if the server were to suddenly crash and rewind as much as 5 sec.
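The setting being discussed is per-dataset and reversible; a sketch, with a hypothetical pool/dataset name:

```shell
zfs set sync=disabled tank/scratch   # accept the ~5 sec loss window
zfs get sync tank/scratch            # verify the current value
zfs set sync=standard tank/scratch   # restore the default behavior
```

Because it is per-dataset, you can disable sync only for the workloads where the risk analysis above comes out in favor of it, and leave it at the default everywhere else.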
2) I was able to get the machine to autostart using SMF, but it was a lot of work; should I expand that section of the wiki with my findings? I have no idea how to get it to autoshutdown when the machine is shutting down. There is a section on the wiki discussing SMF. You should look at vboxsvc and simplesmf. Jim writes and maintains vboxsvc, and I write and maintain simplesmf.
Re: [OpenIndiana-discuss] running VirtualBox headless
From: Jan Owoc [mailto:jso...@gmail.com] I'm running a home NAS using OI 151a7 server (vs. desktop). I was thinking of running Ubuntu Server in a virtual machine on OI, ideally configured to start up/shut down when OI starts/shuts down. I can connect a monitor to the machine, but it generally should run headless. I found there is a very helpful entry on the wiki [1] describing most of the steps. Has anyone successfully installed VirtualBox on a headless OI and have any other tips before I dive in? [1] http://wiki.openindiana.org/oi/7.2+VirtualBox

I do this quite a bit. In fact, I'm a contributor to the wiki page you mentioned. A long time ago, I tried doing this on server and found I couldn't get VirtualBox to install, or to run, or something like that. So I switched to desktop and everything was smooth. It was a long time ago and I don't recall the details. It could simply be that I hadn't yet figured out how to run headless, so I required X and GNOME and a desktop, and couldn't get that working. Now that I think of it, I think that's the most likely truth. Hopefully it will go smoothly for you. I'm also the author of simplesmf, which is mentioned on that wiki page. I have some updated scripts that haven't been committed to the project yet, but I think they're an improvement, so if interested, let me know and I'll provide them.
Re: [OpenIndiana-discuss] ZFS read speed(iSCSI)
From: Heinrich van Riel [mailto:heinrich.vanr...@gmail.com] I will post my findings, but it might take some time to fix the network, and in the meantime they will have to deal with 1Gbps for the storage. The request is to run ~90 VMs on 8 connected servers.

With 90 VMs on 8 servers, being served ZFS iSCSI storage by 4x 1Gb ethernet in LACP, you're really not going to care about any one VM being able to go above 1Gbit, because it's going to be so busy all the time that the 4 LACP-bonded ports will actually be saturated. I think your machines are going to be slow. I normally plan for 1Gbit per VM, in order to be comparable with a simple laptop. You're going to have a lot of random IO. I'll strongly suggest you switch to mirrors instead of raidz, and I'll strongly suggest adding a log device. Your cache device might not give you benefit, because 90 VMs might be too large a working set to yield a substantial cache hit rate, but the log device almost certainly will benefit you, as all your iSCSI writes will be sync writes.
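A pool layout along those lines might be sketched as follows. Device names are hypothetical, and the log device is assumed to be a low-latency SSD:

```shell
# Striped mirrors for random IO, dedicated log device for the
# sync-heavy iSCSI writes, optional cache (L2ARC) device:
zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0 \
    log c0t8d0 \
    cache c0t9d0
```

More mirror vdevs means more independent spindles servicing random reads and writes, which is the property raidz gives up.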
Re: [OpenIndiana-discuss] ZFS read speed(iSCSI)
From: Jim Klimov [mailto:jimkli...@cos.ru] With 90 VMs on 8 servers, being served ZFS iSCSI storage by 4x 1Gb ethernet in LACP, you're really not going to care about any one VM being able to go above 1Gbit. Because it's going to be so busy all the time, that the 4 LACP bonded ports will actually be saturated. I think your machines are going to be slow. I normally plan for 1Gbit per VM, in order to be comparable with a simple laptop. You're going to have a lot of random IO. I'll strongly suggest you switch to mirrors instead of raidz. I'll leave your practical knowledge in higher regard than my theoretical hunches, but I believe typical PCs (including VDI desktops) don't do much disk IO after they've loaded the OS or a requested application.

Agreed, disk is mostly idle except when booting or launching apps. Some apps write to disk, such as internet browsers caching stuff, and MS Office constantly hitting the PST or OST file, and Word/Excel autosave, etc. But there are 90 of them. So even idle time multiplied by 90 is no longer idle time. And most likely, when they *do* get used, a whole bunch of them will get used at the same time. (20 students all browsing the internet between classes, or 20 students all doing homework between 5pm and 9pm; but they're all asleep from 4am to 6am, so all 90 instances are idle during that time...)

And from what I read, if his 8 VM servers would contact the ZFS storage box with requests to many more targets, then on average all NICs will likely get their share of work, for one connection or another, even as part of LACP trunks (which may be easier to manage than VLAN-based MPxIO, with its separate benefits however). Right?.. Yup, with 8 VM servers, each having 11 VM guests, even if each server has a single 1Gb link to the 4x LACP storage server, I expect the 4x LACP links will see pretty heavy and well-distributed usage.

It might seem like a good idea to use dedup as well. Not if you care at all about performance, or usability.

So here's my 2c, but they may be wrong ;) :-) I guess the one thing I can still think to add here is this: if the 90 VMs all originated as clones of a single system, and the deviation *from* that original system remains minimal, then the ARC / L2ARC cache will do wonders. Because when the first VM requests the boot blocks and the OS blocks, and all the blocks to launch Internet Explorer (or whatever app), those things can get served from cache to satisfy the 89 subsequent requests to do the same actions.
Re: [OpenIndiana-discuss] shell script mutex locking or process signaling
From: Laurent Blume [mailto:laurent...@elanor.org] That's why I pointed out mine are in /tmp or /var/run - tmpfs, so it's guaranteed cleared on reboot, graceful or not :-)

The behavior of clearing out /tmp is a configurable feature, and the default varies by OS. Some OSes clear it on every reboot, some clear it according to an aging schedule, and some don't clear it at all. I'm not sure what the default is for Solaris / OpenIndiana. For the problem at hand (an SMF service), obviously, it doesn't matter what the Linux defaults are. ;-)
Re: [OpenIndiana-discuss] shell script mutex locking or process signaling
From: Gary Mills [mailto:gary_mi...@fastmail.fm] SMF is actually well documented, but you do have to jump around from man page to man page. Start with `man smf'. There are also lots of examples to follow, both of manifests and methods. They are all text files.

Ok, so here's a quasi-recent example of a difficulty I've encountered trying to use SMF. I have a service which is configured for a single instance. I then wanted to break it into individual instances, svc:foo and svc:bar. I looked at examples, I did what I thought made sense, tried to import it, and SMF puked on the XML, saying something generic like invalid configuration. If SMF is actually well documented and good to work with, I need something to (a) guide me in creating good XML, and (b) validate the XML, letting me know if something's wrong, and how to fix it. Last I knew, there are a bunch of good HTML editors out there: you start typing something, and based on context, the tool knows what's valid to use in the spot where you are working, so it will suggest and autocomplete tags, and properties inside of tags. If you start typing li when you're not inside ul or ol, they throw warning signs at you. Last I knew, there isn't any such thing as a DTD-aware XML editor. So when I sit down and start typing XML, I have no idea what tags belong in the place where I'm typing. In my example above, it turned out I was putting the exec method before the dependency name, or something like that. Order matters in the XML, and I got it wrong just by trying to read and copy some example into my service manifest. To debug, I forget the exact process I followed, but I recall it being painfully iterative and manual.

I'd recommend using the facilities of SMF, rather than trying to do it all outside of SMF. These facilities are extensive and complete. You say SMF has capabilities that make this all go away. But I read man smf and I don't see it there.
I don't know what to look for, and I'm not going to read the DTD from top to bottom, hoping to find something that fits the bill.

Have you considered the contract facility? It's used internally by SMF, but you can use it elsewhere as well. The shell commands are ctrun(1), ctstat(1), ctwatch(1), and pkill(1).

There may be a solution there, but I'm not very familiar with the Solaris contract subsystem - it looks like you define the behavior of one process, and you use another process to monitor it. If this is correct, it would make a very convoluted solution: an SMF service launches the start method, and while it's running, the same service launches the stop or refresh method... Rather than executing the method directly, in each situation, utilizing contracts, the method would actually start a contract to monitor a sub-method for executing the start, stop, or refresh. And if the user (or system) is repeating calls to start/stop/refresh, each of these instances needs to be made aware of the others, so the later method calls signal the earlier ones that they should terminate their contracts... *blah*

In any event, for the problem at hand in this thread, I used the easy solution:
- Script starts. Script uses mkdir $LOCKDIR, which is /tmp/something.
- Script chugs along, and at select moments, checks for the existence of $BREAKLOCKDIR, which is /tmp/somethingelse.
- If a script starts and fails to get a lock on LOCKDIR, then the script locks BREAKLOCKDIR and starts polling for the non-existence of LOCKDIR.
- LOCKDIR is a signal that a script is already running. BREAKLOCKDIR is a signal that a later process wants to steal the lock.
- If LOCKDIR becomes stale (for example, the system power cycled while the lock existed): any script that *has* the lock guarantees to release it in less than 60 seconds. So if the BREAKLOCK script detects that LOCK has existed for more than 60 seconds, it assumes it's a stale lock and steals it forcibly.
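The protocol above can be sketched in plain sh. The paths and function names are illustrative, and the 60-second staleness window is the one described in the text:

```shell
#!/bin/sh
# Sketch of the mkdir lock + break-lock protocol described above.
LOCKDIR=/tmp/myservice.lock        # exists => a script is already running
BREAKLOCKDIR=/tmp/myservice.break  # exists => a later process wants the lock

acquire_lock() {
    if mkdir "$LOCKDIR" 2>/dev/null; then
        return 0                   # mkdir is atomic: we got the lock
    fi
    # Someone holds the lock: ask them to yield, then poll for release.
    mkdir "$BREAKLOCKDIR" 2>/dev/null
    tries=0
    while ! mkdir "$LOCKDIR" 2>/dev/null; do
        tries=$((tries + 1))
        if [ "$tries" -gt 60 ]; then
            # Holder guarantees release within 60s, so this lock is stale.
            rm -rf "$LOCKDIR"
        fi
        sleep 1
    done
    rmdir "$BREAKLOCKDIR" 2>/dev/null
    return 0
}

release_lock() {
    rmdir "$LOCKDIR" 2>/dev/null
}

# The lock holder calls this at select moments; if true, it should
# release the lock and exit gracefully.
should_yield() {
    [ -d "$BREAKLOCKDIR" ]
}
```

A holder's main loop just interleaves its work with should_yield checks, releasing the lock and exiting when a later instance asks for it.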
Re: [OpenIndiana-discuss] shell script mutex locking or process signaling
From: Udo Grabowski (IMK) [mailto:udo.grabow...@kit.edu] If you don't understand SMF (which is a bit clumsy, but indeed has all that you want), use this little generator; it will do the hard work for you: http://sgpit.com/smf/

That's a pretty cool generator. But apparently, I understand SMF as well as it does. Because it allows you to specify start/stop/refresh methods... but if those happen to take *time* to complete, and you start enabling/disabling/refreshing the service while the previous instances of enable/disable/refresh (start/stop/refresh) are still running, then multiple concurrent instances of those things get launched. Which is how I got where I am now. In any event, I went forward and implemented with mkdir locking. It works, and usually will, except in the event of ungraceful things.
[OpenIndiana-discuss] shell script mutex locking or process signaling
Here's the problem I'm trying to solve: an SMF service is configured to launch things like VirtualBox during startup / shutdown. This startup process can take a long time (10, 20 minutes), so if there's a problem of any kind for any reason, you might do things like enable and disable or refresh the service... but each time the script gets launched, it's not aware of the previous one. I'm really looking for a good way to allow multiple instances of some shell script to signal each other: let the earlier instances die and the later instances take over control. This problem has two parts: atomicity of signaling operations (acquiring / releasing a mutex, etc.), and inter-process signaling (letting the later instance signal the earlier instance that it should die). It seems easy enough: as long as you have a good atomic operation for locking, the process that acquired the lock can write its PID into a file, and later instances can kill that PID. I'm looking around, and not finding any great answers. So far, using mkdir, it's easy to see there exists a way to do mutex locking, and you could easily write your PID into the subdir that was just created; unfortunately, the problem is when a script gets killed, leaving a stale lock. So I'm looking for something better than mkdir to use for locking. I see there are a bunch of C constructs available... mutex_init, etc. Surely there must be a wrapper application around this kind of thing, right? Thanks, everyone, for any suggestions you may offer.
Re: [OpenIndiana-discuss] Moving ZFS pool between systems ?
From: Svavar Örn Eysteinsson [mailto:sva...@fiton.is] So my question is: is it as simple as zpool export datapool on the original machine and zpool import on the new one? Should be that simple.

Yes. As long as you're not doing any proprietary hardware RAID on the disks, and you ensure the zpool capabilities on the new system are at least as good as the zpool capabilities on the old system.
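Concretely, the sequence looks like this (pool name taken from the question; -f is only needed if import complains that the pool was last in use by another system):

```shell
# On the original machine:
zpool export datapool

# On the new machine:
zpool import            # scan attached devices for importable pools
zpool import datapool   # add -f only if the pool wasn't cleanly exported
```

Checking `zpool upgrade -v` on both machines first is a quick way to confirm the new system's supported pool versions cover the old pool.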
Re: [OpenIndiana-discuss] shell script mutex locking or process signaling
From: Gary Mills [mailto:gary_mi...@fastmail.fm] On Thu, May 30, 2013 at 02:15:12PM +, Edward Ned Harvey (openindiana) wrote: Here's the problem I'm trying to solve: SMF service is configured to launch things like VirtualBox during startup / shutdown. This startup process can take a long time (10, 20 minutes) so if there's a problem of any kind for any reason, you might do things like enable and disable or refresh the service ... but each time the script gets launched, it's not aware of the previous one. Are you saying that the method script itself might manipulate the service, or that the system admin might do it? SMF has ways to prevent multiple instances of the method from running and to make enable and disable requests synchronous. Can you do what you want within SMF, or does the method script have to do all the process manipulation? Not familiar with that, but if you can name some XML tags, or things to search for, I'd be willing to take a look. I can say this: I'm biased to believe SMF is complex and confusing, mostly due to the lack of understandable documentation surrounding the XML, and the lack of any good XML tools to edit, validate, or otherwise produce good XML. So I'm biased to expect it's probably better to use locking within the script. Also, the desired behavior is NOT to prevent multiple scripts from running. The desired behavior is to let the later instances supersede the previous instances, letting the previous instances die gracefully (without putting the service into maintenance or anything like that). So a certain amount of shell script hackery is required, no matter what. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] shell script mutex locking or process signaling
From: Aneurin Price [mailto:aneurin.pr...@gmail.com] I don't know about pure/POSIX shell, but at least bash and ksh support noclobber, which should do the trick. I've been using the following idiom for some time without problems: I read somewhere (possibly obsolete, and I can't relocate the source) that noclobber on solaris doesn't always work right. And in any event, I don't see the advantage of using noclobber on a file, versus using mkdir. (Mkdir has the advantage of definitely being atomic, regardless of platform.) Either way, you could still have stale locks left around (even with trap), after a kill -KILL or a kernel, storage, or power failure. It would be *really* nice to have a locking mechanism that exists solely in RAM, so the locks would go away and automatically be released in the event of an ungraceful system reboot. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
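For comparison with the mkdir approach, the noclobber idiom Aneurin mentions looks roughly like this. This is a sketch assuming bash/ksh `set -C` semantics; the lock file name is made up:

```shell
#!/bin/sh
# set -C (noclobber) makes '>' fail if the target file already exists,
# so creating the PID file doubles as an atomic lock acquisition.
LOCKFILE=/tmp/myservice.pid

if ( set -C; echo $$ > "$LOCKFILE" ) 2>/dev/null; then
    trap 'rm -f "$LOCKFILE"' EXIT TERM INT
    : # ... critical section ...
else
    echo "lock held by PID $(cat "$LOCKFILE")" >&2
    exit 1
fi
```

As noted above, this shares mkdir's weakness: a kill -KILL or power failure still leaves the PID file behind as a stale lock.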
Re: [OpenIndiana-discuss] replacing an open solaris box
From: Kristoff Bonne [mailto:krist...@skypro.be] Is there a list of applications that does work OK? As said, my requirements are not that special: thunderbird, firefox, virtual box. thunderbird and firefox are included in the standard OS package repositories. So yes, those work. Virtualbox definitely works too. But I recommend reading the openindiana wiki page on virtualbox, because, although it will *work* simply, there are a lot of ways you can optimize it beyond the default. http://wiki.openindiana.org/oi/7.+Virtualization No, there isn't a list of software that works. It's a fully functional OS, with thousands of packages in the standard distribution repositories, plus thousands of other products available from vendors who happen to support openindiana (or solaris) including VirtualBox. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] export pool at shutdown and import at boot up
I have some external storage which isn't super reliable. If I don't export it before shutdown, it will often cause the boot to fail, as it doesn't mount properly. I would like to make that pool automatically export during shutdown, or somehow flag it so the system will never try to import or mount it during bootup. My best idea is to write an SMF wrapper around zpool export and zpool import. Is that the best way, or is there a better way, like setting some sort of pool property, or sticking a flag in some defaults file, or tweaking the zfs cache file or something? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] export pool at shutdown and import at boot up
From: Bill Sommerfeld [mailto:sommerf...@alum.mit.edu] On 05/27/13 17:25, Edward Ned Harvey (openindiana) wrote: I have some external storage which isn't super reliable. If I don't export it before shutdown, it will often cause the boot to fail, as it doesn't mount properly. I would like to make that pool automatically export during shutdown, or somehow flag it so the system will never try to import or mount it during bootup. ... is there a better way, like setting some sort of pool property, or sticking a flag in some defaults file, or tweaking the zfs cache file or something? See the -c cachefile option to zpool import and the cachefile pool property; pools that aren't listed in the default system pool cache file aren't auto-imported at boot time. Looks like this should do it: zpool set cachefile=none externalpool Or, when I want to bring it up manually, import it without re-adding it to the cache file: zpool import -o cachefile=none externalpool ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] replacing an open solaris box
From: Nikola M. [mailto:minik...@gmail.com] On 05/24/13 02:46 PM, Edward Ned Harvey (openindiana) wrote: From: Kristoff Bonne [mailto:krist...@skypro.be] Sent: Friday, May 24, 2013 5:27 AM What Operating System would you now advise for a personal workstation for simple work (telnet/ssh to other devices, perl, firefox, thunderbird, ...). Also important is the ability to run VirtualBox. For personal workstation, openindiana desktop. And it is called illumos ;) Ummm... Illumos is the open source fork of the core of the opensolaris operating system. But illumos itself is not an installable or usable OS. It needs to be packaged into a distribution. There are several distributions based on illumos - the one that most closely resembles opensolaris is called openindiana. Openindiana has both a full GUI desktop version and a server version. (I always use the desktop version, even on servers, because I use VirtualBox, which has difficult-to-satisfy dependencies if installed on the server version.) There are some other distros - most notably, SmartOS. That one is designed for specific roles in the server room, and if you don't know it, I suggest checking their webpage for more info. For upgrading from opensolaris to openindiana, please see here: http://wiki.openindiana.org/oi/OpenIndiana+Wiki+Home Type opensolaris into the search bar. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] replacing an open solaris box
From: Kristoff Bonne [mailto:krist...@skypro.be] Sent: Friday, May 24, 2013 5:27 AM What Operating System would you now advise for a personal workstation for simple work (telnet/ssh to other devices, perl, firefox, thunderbird, ...). Also important is the ability to run VirtualBox. For personal workstation, openindiana desktop. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] 2.88Mb floppy image file.
From: Jonathan Adams [mailto:t12nsloo...@gmail.com] Hi ... I was recently trying to create a 2.88Mb floppy file to try and BIOS upgrade a Dell computer that wouldn't boot anything graphical. I know I'm probably going about this wrong, but I cannot seem to use fdformat, or mkfs -F pcfs on a file, in order to burn the file to a cdrom I'm reasonably certain if you check dell's website again, you'll find a better download (especially if you have support; call them up and they'll provide you with an iso) but if you really want to boot from a floppy image burned to a CD... mkisofs -J -r -o somefile.iso -b floppy.img somedir/ (where floppy.img sits inside somedir/ - the El Torito boot image has to be part of the tree being mastered). ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
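For the first half of the original question (building the 2.88MB pcfs image itself), something along these lines may work on OpenIndiana. This is a hedged sketch - the lofiadm/mkfs invocations and the sector count are from memory, so check lofiadm(1M) and mkfs_pcfs(1M) before relying on them:

```shell
# Create an empty 2.88MB image and attach it as a loopback device.
mkfile 2880k floppy.img
LOFI=$(lofiadm -a "$(pwd)/floppy.img")         # e.g. /dev/lofi/1
RLOFI=$(echo "$LOFI" | sed 's,/lofi/,/rlofi/,')

# Put a FAT filesystem on it (5760 x 512-byte sectors = 2.88MB).
mkfs -F pcfs -o nofdisk,size=5760 "$RLOFI"

# Mount it, copy the Dell BIOS-flash files onto it, then unmount:
#   mount -F pcfs "$LOFI" /mnt ; ... ; umount /mnt
lofiadm -d "$LOFI"

# Master a CD with the image as an El Torito floppy-emulation boot image;
# the -b path is relative to the tree being mastered.
mkdir cd && cp floppy.img cd/
mkisofs -J -r -o biosflash.iso -b floppy.img cd/
```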
Re: [OpenIndiana-discuss] Problem with Dell iDRAC
From: Kris Henriksson [mailto:kt...@cornell.edu] I've been having a long-standing issue with using the iDRAC on my server with OpenIndiana, and I thought before giving up completely, I could try asking the mailing list. The problem is that the iDRAC is completely inaccessible while OpenIndiana is running. The system is a Dell PowerEdge T610, with Dell iDRAC 6 Express. If I run Linux on it, the iDRAC can be accessed just fine, and before OpenIndiana has started booting, I can access it, but with OI running it is inaccessible. The DRAC shares a physical network port with the OS, but has a separate MAC address and independent network traffic. I've seen that before. Got over it... I forget precisely how ... I think I had to go into the BIOS and disable the first NIC. This makes it inaccessible to the OS, but not inaccessible to the iDRAC. Then, obviously, connect two separate ethernet cables: one for the management interface, and one for the OS. Also, if you ping monitor the OS and the iDRAC... It is normal to see the iDRAC disappear at certain moments during the boot process. So don't assume it's failed the moment ping begins to fail. Wait for the OS to come up completely, and perhaps a minute longer. Oh yeah ... This might be separate, but the built-in broadcom NIC was never stable in solaris 10 / opensolaris. The symptom was a weird sort of black-screen lockup while still responding to ping, which occurred approximately once a week. We made this problem go away by buying an add-on Intel server NIC. So it's distinctly possible that the actual iDRAC solution is to disable both the broadcom ethernets in the BIOS (used only by the iDRAC) and only use the Intel NIC in the OS. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Problem with Dell iDRAC
Oh. I see now, Rich's solution about updated driver. That sounds better. ;-) -Original Message- From: Edward Ned Harvey (openindiana) Sent: Tuesday, April 23, 2013 8:34 AM To: openindiana-discuss@openindiana.org Subject: RE: [OpenIndiana-discuss] Problem with Dell iDRAC [quoted text snipped]
___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Pounding on well trod ground..
From: Harry Putnam [mailto:rea...@newsguy.com] Sorry to go over what must have been already covered many times but I dropped out of OI participation for a good long while. Hopefully someone will feel kindly disposed and post a brief outline of how to go from zero to running a vb vm of current OI on a win7 64bit host. Download and install virtualbox. Download the openindiana desktop iso. In virtualbox, click New machine, and select solaris (I think) or perhaps solaris 64bit. Boot from the iso. Simple as that. You'll have some self-explanatory prompts, such as creating a virtual guest hard drive, and clicking on Install and choosing the language. The only part that isn't obvious or self-explanatory: After the guest is installed, it's advisable to click Install Guest Additions in the host window. This will virtually insert the guest additions CD rom into the guest, which should automatically launch. Once I have a running install, is the syntax for the packages still something like REPONAME/dev? sudo pkg search subversion sudo pkg install subversion (or whatever) ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Jay Heyl [mailto:j...@frelled.us] Ah, that makes much more sense. Thanks for the clarification. Now that you put it that way I have to wonder how I ever came under the impression it was any other way. I've gotten lost in the numerous mis-communications of this thread, but just to make sure there is no confusion: If you have a mirror (or any vdev with redundancy, raidzN) and you issue a read, then normally only one side of the mirror gets read (not the redundant copies). If the cksum fails, then redundant copies are read successively until a successful cksum is found (so some redundant copies might never be read). If you perform a scrub, then all copies of all information are read and validated. The advantage of reading only one side of the mirror is performance. If one device is busy satisfying one read request, then the other sides of the mirror are available to satisfy other read requests. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Timothy Coalson [mailto:tsc...@mst.edu] Did you also compare the probability of bit errors causing data loss without a complete pool failure? 2-way mirrors, when one device completely dies, have no redundancy on that data, and the copy that remains must be perfect or some data will be lost. I had to think about this comment for a little while to understand what you were saying, but I think I got it. I'm going to rephrase your question: If one device in a 2-way mirror becomes unavailable, then the remaining device has no redundancy. So if a bit error is encountered on the (now non-redundant) device, then it's an uncorrectable error. Question is, did I calculate that probability? Answer is, I think so. Modelling the probability of drive failure (either complete failure or data loss) is very complex and non-linear. Also dependent on the specific model of drive in question, and the graphs are typically not available. So what I did was to start with some MTBDL graphs that I assumed to be typical, and then assume every data-loss event meant complete drive failure. Already I'm simplifying the model beyond reality, but the simplification focuses on worst case, and treats every bit error as complete drive failure. This is why I say I think so, to answer your question. Then, I didn't want to embark on a mathematician's journey of derivatives and integrals over some non-linear failure rate graphs, so I linearized... I forget now (it was like 4-6 years ago) but I would have likely seen that drives were unlikely to fail in the first 2 years, and about 50% likely to fail after 3 years, and nearly certain to fail after 5 years, so I would have likely modeled that as a linearly increasing probability of failure rate up to 4 years, where it's assumed 100% failure rate at 4 years. Yes, this modeling introduces inaccuracy, but that inaccuracy is in the noise. 
Maybe in the first 2 years, I'm 25% off in my estimates to the positive, and after 4 years I'm 25% off in the negative, or something like that. But when the results show 10^-17 probability for one configuration and 10^-19 probability for a different configuration, then the 25% error is irrelevant. It's easy to see which configuration is more probable to fail, and it's also easy to see they're both well within acceptable limits for most purposes (especially if you have good backups.) Also, as for time to resilver, I'm guessing that depends largely on where bottlenecks are (it has to read effectively all of the remaining disks in the vdev either way, but can do so in parallel, so ideally it could be the same speed), No. The big factor for resilver time is (a) the number of operations that need to be performed, and (b) the number of operations per second. If you have one big vdev making up a pool, then the number of operations to be performed is equal to the number of objects in the pool. The number of operations per second is limited by the worst-case random seek time of any device in the pool. If you have an all-SSD pool, then it's equal to single-disk performance. If you have an all-HDD pool, then with an increasing number of devices in your vdev, you approach 50% of the IOPS of a single device. If your pool is broken down into a bunch of smaller vdevs - say, N mirrors that are all 2-way - then the number of operations to resilver the degraded mirror is 1/N of the total objects in the pool, and the number of operations per second is equal to the performance of a single disk. So the resilver time in the big raidz vdev is 2N times longer than the resilver time for the mirror. As you mentioned, other activity in the pool can further reduce the number of operations per second. If you have N mirrors, then the probability of the other activity affecting the degraded mirror is 1/N. 
Whereas, with a single big vdev, you guessed it, all other activity is guaranteed to affect the resilvering vdev. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
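The 2N claim above can be illustrated with back-of-envelope numbers. All three inputs here - object count, per-disk IOPS, and mirror count - are made-up assumptions for illustration, not measurements from the thread:

```shell
#!/bin/sh
# Hypothetical pool: 16 HDDs as N=8 two-way mirrors, vs one wide raidz vdev.
OBJECTS=100000000   # blocks the resilver must examine (assumption)
IOPS=100            # random reads/sec of a single HDD (assumption)
N=8                 # number of 2-way mirrors

# Degraded mirror: 1/N of the blocks, at full single-disk IOPS.
MIRROR_SEC=$(( OBJECTS / N / IOPS ))
# Wide raidz: every block, at ~50% of single-disk IOPS.
RAIDZ_SEC=$(( OBJECTS / (IOPS / 2) ))

echo "mirror resilver: $(( MIRROR_SEC / 3600 )) hours"
echo "raidz resilver:  $(( RAIDZ_SEC / 3600 )) hours"
echo "ratio: $(( RAIDZ_SEC / MIRROR_SEC ))x"
```

With these inputs the ratio comes out to 16, i.e. 2N, matching the argument above.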
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Jay Heyl [mailto:j...@frelled.us] I now realize you're talking about 8 separate 2-disk mirrors organized into a pool. mirror x1 y1 mirror x2 y2 mirror x3 y3... Yup. That's normal, and the only way. I also realize that almost every discussion I've seen online concerning mirrors proposes organizing the drives in the way I was thinking about it Hmmm... What alternative are you thinking of? There is no alternative. This also starts to make a lot more sense. Confused the hell out of me the first three times I read it. I'm going to have to ponder this a bit more as my thinking has been heavily influenced by the more conventional mirror arrangement. What are you talking about? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] vdev reliability was: Recommendations for fast storage
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net] AFAIK, a bit error in parity or stripe data can be specifically dangerous when it is raised during resilvering, and there is only one layer of redundancy left. You're saying error in parity, but that's because you're thinking of raidz, which I don't usually use. You really mean error in a redundant copy, and the only risk, as you've identified, is an error in the *last* redundant copy. The answer to this is: You *do* scrub every week or two, don't you? You should. I do not think that zfs will have better resilience against rot of parity data than conventional RAID. That's incorrect, because conventional raid cannot scrub proactively. Sure, if you have a pool with only one level of redundancy, and the bit error crept in between the most recent scrub and the present failure time, then that's a problem, and zfs cannot protect you against it. This is, by definition, simultaneous failure of all redundant copies of the data. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
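Since the protection here depends on scrubbing regularly, it is worth automating; e.g. a root crontab entry along these lines (the pool name "tank" is illustrative):

```shell
# Run a scrub every Sunday at 03:00; check progress later with 'zpool status'.
0 3 * * 0 /usr/sbin/zpool scrub tank
```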
Re: [OpenIndiana-discuss] vdev reliability was: Recommendations for fast storage
From: Jim Klimov [mailto:jimkli...@cos.ru] Well, thanks to checksums we can know which variant of userdata is correct, and thanks to parities we can verify which bytes are wrong in a particular block. If there's relatively few such bytes, it is theoretically possible to brute-force match values into the wrong bytes and recalculate checksums. So if a broken range is on the order of 30-40 bytes (which someone said is typical for a CRC error and HDD returning uncertain data) you have a chance of recovering the block in a few days if lucky ;) This is a very compute-intensive task; I proposed this idea half a year ago on the zfs list (I had unrecoverable errors on raidz2 made of 4 data disks and 2 parity disks, meaning corruptions on 3 or more drives, but not necessarily whole-sector corruptions) and tried to take known byte values from different components at known bad byte offsets and put them into the puzzle. Complexity (size of recursive iteration) grows very quickly even if we only have about 5 values to match (unlike 256 in full recovery above), and we estimated that for a 4096 byte block it would take Earth's compute resources longer than the lifetime of the universe to do the full search and recovery. So such approach is really limited to just a few dozen broken bytes. But it is possible :) I think you're misplacing a decimal, confusing bits for bytes, and mixing up exponents. Cuz you're way off. With merely 70 unknown *bits* that is, less than 10 bytes, you'll need a 3-letter government agency devoting all its computational resources to the problem for a few years. Furthermore, when you find a matching cksum, you haven't found the correct data yet. You'll need to exhaustively search the entire space requiring 2^70 operations, find all the matches (there will be a lot) and from those matches, choose the one you think is right. 
Even with merely 70 unknown bits, and a 32-bit cksum (the default in zfs fletcher-4) you will have 2^38 (roughly 275 billion) results that produce the right cksum. You'll have to rely on your knowledge of the jpg file or txt file or whatever, to choose which one of the 275 billion cksum-passing results is *actually* the right result. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
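The collision count follows directly from the bit arithmetic: with B unknown bits and a C-bit checksum, about 2^(B-C) candidate blocks pass the checksum by chance. A quick check of the figures quoted above:

```shell
#!/bin/sh
# B unknown bits, C checksum bits => ~2^(B-C) false matches.
B=70
C=32
MATCHES=$(( 1 << (B - C) ))
echo "$MATCHES"    # 2^38 = 274877906944, roughly 275 billion
```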
Re: [OpenIndiana-discuss] vdev reliability was: Recommendations for fast storage
From: Timothy Coalson [mailto:tsc...@mst.edu] As for what I said about resilver speed, I had not accounted for the fact that data reads on a raid-z2 component device would be significantly shorter than for the same data on 2-way mirrors. Depending on whether you are using enormous block sizes, or whether your data is allocated extremely linearly in the way scrub/resilver reads it, this could be the limiting factor on platter drives due to seek times, and make raid-z2 take much longer to resilver. I fear I was thinking of raid-z2 in terms of raid6. I'm not sure if you misunderstand something, or if I misunderstand what you're saying, but ... Even if you are using enormous block sizes, those are really just enormous *max* block sizes. If you write a 1 byte file (very slowly, such that no write accumulation can occur) then ZFS only writes a 1 byte file into a block. So the enormous block sizes only come into play when you're writing large amounts of data ... and when you're writing large amounts of data, you're likely to simply span multiple sequential blocks anyway. So all-in-all, the blocksize is rarely very important. There are some situations where it matters, but ... all this is a tangent. The real thing I'm addressing here is that you said scrub/resilver progresses extremely linearly. That is, unfortunately, about as wrong as it can be. In actuality, scrub / resilver proceed in approximately temporal order, which, in the typical situation of a long-lived server with frequent creation and destruction of snapshots, results in approximately random disk order. Here's the evidence I observed: I had a ZFS server running in production for about 2 years, and a disk failed. I had measured previously, on this server, that each disk sustains 1 Gbit/sec sequentially. With 1T disks, linearly resilvering the entire disk including empty space should take about 2 hrs. But ZFS doesn't resilver the whole disk; it only resilvers used space. 
This would be great, if your pool is mostly empty, or if it was disk linearly ordered. But it actually took 12 hours to resilver that disk. I went to zfs-discuss and discussed. Learned about the temporal ordering. Got my explanation how resilvering just the used portions could take several times longer than resilvering the whole disk. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com] Raid-Z indeed does stripe data across all leaf vdevs (minus parity) and does so by splitting the logical block up into equally sized portions. Jay, there you have it. You asked why use mirrors, and you said you would use raidz2 or raidz3 unless cpu overhead is too much. I recommended using mirrors and avoiding raidzN, and here is the answer why. If you have 16 disks arranged in 8x mirrors, versus 10 disks in raidz2 which stripes across 8 disks plus 2 parity disks, then the serial write of each configuration is about the same; that is, 8x the sustained write speed of a single device. But if you have two or more parallel sequential read threads, then the sequential read speed of the mirrors will be 16x while the raidz2 is only 8x. The mirror configuration can do 8x random write while the raidz2 is only 1x. And the mirror can do 16x random read while the raidz2 is only 1x. In the case you care about the least, they're equal. In the case you care about most, the mirror configuration is 16x faster. You also said the raidz2 will offer more protection against failure, because you can survive any two disk failures (but no more.) I would argue this is incorrect (I've done the probability analysis before). Mostly because the resilver time in the mirror configuration is 8x to 16x faster (there's 1/8 as much data to resilver, and IOPS is limited by a single disk, not the worst of several disks, which introduces another factor up to 2x, increasing the 8x as high as 16x), so the smaller resilver window means lower probability of concurrent failures on the critical vdev. We're talking about 12 hours versus 1 week, actual result of my machines in production. Also, while it's possible to fault the pool with only 2 failures in the mirror configuration, the probability is against that happening. The first disk failure probability is 1/16 for each disk ... 
And then if you have a 2nd concurrent failure, there's a 14/15 probability that it occurs on a separate, independent (safe) mirror. The 3rd concurrent failure has a 12/14 chance of being safe, the 4th concurrent failure a 10/13 chance of being safe, etc. The mirror configuration can probably withstand a higher number of failures, and also the resilver window for each failure is smaller. When you look at the total probability of pool failure, they were both something like 10^-17. In other words, we're splitting hairs, but as long as we are, we might as well point out that they're both about the same. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
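The chain of fractions above multiplies out as follows - just a sanity check of the quoted numbers, not part of the original analysis:

```shell
#!/bin/sh
# Probability that the 2nd, 3rd, and 4th concurrent failures all land on
# distinct mirrors, given 8 two-way mirrors (16 disks).
awk 'BEGIN { printf "%.3f\n", (14/15) * (12/14) * (10/13) }'
```

That comes to about 0.615, so even four concurrent disk failures leave the pool intact more often than not under these assumptions.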
Re: [OpenIndiana-discuss] Recommendations for fast storage (OpenIndiana-discuss Digest, Vol 33, Issue 20)
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] It would be difficult to believe that 10Gbit Ethernet offers better bandwidth than 56Gbit Infiniband (the current offering). The switching model is quite similar. The main reason why IB offers better latency is a better HBA hardware interface and a specialized stack. 5X is 5X. Put another way, the reason infiniband is so much higher throughput and lower latency than ethernet is because the switching (at the physical layer) is completely different from ethernet, and messages are passed directly from user-level to user-level on remote system ram via RDMA, bypassing the OSI layer model and other kernel overhead. I read a paper from vmware, where they implemented RDMA over ethernet and doubled the speed of vmotion (but still not as fast as infiniband, by like 4x.) Besides the bypassing of OSI layers and kernel latency, IB latency is lower because Ethernet switches use store-and-forward buffering managed by the backplane in the switch, in which a sender sends a packet to a buffer on the switch, which then pushes it through the backplane, and finally to another buffer on the destination. IB uses cross-bar, or cut-through switching, in which the sending host channel adapter signals the destination address to the switch, then waits for the channel to be opened. Once the channel is opened, it stays open, and the switch in between is nothing but signal amplification (as well as additional virtual lanes for congestion management, and other functions). The sender writes directly to RAM on the destination via RDMA, no buffering in between, bypassing the OSI layer model. Hence much lower latency. IB also has native link aggregation into data-striped lanes, hence the 1x, 4x, and 12x designations, and the 40Gbit specifications. Something which is quasi-possible in ethernet via LACP, but not as good and not the same. 
IB guarantees packets delivered in the right order, with native congestion control as compared to ethernet which may drop packets and TCP must detect and retransmit... Ethernet includes a lot of support for IP addressing, and variable link speeds (some 10Gbit, 10/100, 1G etc) and all of this asynchronous. For these reasons, IB is not a suitable replacement for IP communications done on ethernet, with a lot of variable peer-to-peer and broadcast traffic. IB is designed for networks where systems want to establish connections to other systems, and those connections remain mostly statically connected. Primarily clustering storage networks. Not primarily TCP/IP. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com] If you are IOPS constrained, then yes, raid-zn will be slower, simply because any read needs to hit all data drives in the stripe. Saso, I would expect you to know the answer to this question, probably: I have heard that raidz is more similar to raid-1e than raid-5. Meaning, when you write data to raidz, it doesn't get striped across all devices in the raidz vdev... Rather, two copies of the data get written to any of the available devices in the raidz. Can you confirm? If the behavior is to stripe across all the devices in the raidz, then the raidz iops really can't exceed that of a single device, because you have to wait for every device to respond before you have a complete block of data. But if it's more like raid-1e and individual devices can read independently of each other, then at least theoretically, the raidz with n-devices in it could return iops performance on-par with n-times a single disk. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Jay Heyl [mailto:j...@frelled.us] So I'm just assuming you're going to build a pool out of SSD's, mirrored, perhaps even 3-way mirrors. No cache/log devices. All the ram you can fit into the system. What would be the logic behind mirrored SSD arrays? With spinning platters the mirrors improve performance by allowing the fastest of the mirrors to respond to a particular command to be the one that defines throughput. When you read from a mirror, ZFS doesn't read the same data from both sides of the mirror simultaneously and let them race, wasting bus and memory bandwidth in an attempt to gain slightly lower latency. If you have a single thread doing serial reads, I also have no reason to believe that zfs reads stripes from multiple sides of the mirror to accelerate - rather, it relies on the striping across multiple mirrors or vdev's. But if you have multiple threads requesting independent random read operations on the same mirror, I have measured this: you get very nearly n times single-disk random read performance by using an n-way mirror and at least n (or 2n) independent random read threads. There is no latency due to head movement or waiting for the proper spot on the disk to rotate under the heads. Nothing, including ZFS, has such in-depth knowledge of the inner drive geometry as to know how long it takes for the rotational latency to come around. Also, rotational latency is almost nothing compared to head seek. For this reason, short-stroking makes a big difference when you have a data usage pattern that can easily be confined to a small number of adjacent tracks. I believe, if you use a HDD for a log device, it's aware of itself and does short-stroking, but I don't actually know. Also, this is really a completely separate subject.
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Mehmet Erol Sanliturk [mailto:m.e.sanlit...@gmail.com] SSD units are very vulnerable to power cuts during operation, up to complete failure in which they can not be used any more, or complete loss of data. If there are any junky drives out there that fail so dramatically, those are junky and the exception. Just imagine how foolish the engineers would have to be: "Power loss? I didn't think of that... Complete drive failure on power loss is acceptable behavior." That is definitely an inaccurate generalization about SSD's. There is nothing inherent about flash memory, as compared to magnetic material, that would cause such a thing. I repeat: I'm not saying there's no such thing as an SSD that has such a problem. I'm saying if there is, it's junk. And you can safely assume any good drive doesn't have that problem. MLC (Multi-Level Cell) SSD units have a short life time if they are continuously written (they are more suitable to write once (in a limited-number-of-writes sense) - read many). It's a fact that NAND has a finite number of write cycles, and it gets slower to write the more times it's been re-written. It is also a fact that when SSD's were first introduced to the commodity market about 11 years ago, they failed quickly due to OSes (windows) continually writing the same sectors over and over. But manufacturers have long since been aware of this problem, and solved it with overprovisioning and wear-leveling. Similar to ZFS copy-on-write, which can logically address some blocks while secretly re-mapping them to different sectors behind the scenes... SSD's with wear-leveling secretly remap sectors during writes. SSD units may fail due to write wearing at an unexpected time, making them very unreliable for mission critical work. Every page has a write counter, which is used to predict failure. A very predictable, and very much *not* unexpected, time. 
Re: [OpenIndiana-discuss] Recommendations for fast storage
From: Wim van den Berge [mailto:w...@vandenberge.us] multiple 10Gb uplinks However the next system is going to be a little different. It needs to be the absolute fastest iSCSI target we can create/afford. So I'm just assuming you're going to build a pool out of SSD's, mirrored, perhaps even 3-way mirrors. No cache/log devices. All the ram you can fit into the system. You've been using 10G ether so far. Expensive, not too bad. I'm going to recommend looking into infiniband instead.
Re: [OpenIndiana-discuss] ZfS migration scenario including zvols
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net] Sent: Saturday, April 13, 2013 11:38 AM - zfs send mainbranch@1 -R /pool2/mainbranch.dmp for each nfs, iscsi, smb It is advisable, if possible, to create a new zpool with your new tmp storage, and zfs send | zfs receive. (Don't store a zfs send data stream in a file.) The reason is that zfs receive does all the checksumming, and will detect and notify you in the event of a data transmission problem. But if all you have is a network store, or something else that you can't format with zfs for some reason... So be it. You do what you have to do.
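Concretely, the send | receive pipe looks something like this (dataset names are borrowed from the thread, not verified against the poster's actual layout; -u keeps the received filesystems from mounting over anything while you're migrating):

```shell
# Snapshot the source recursively, then pipe send straight into
# receive, so the receiving side checksums the stream as it arrives
# instead of storing an unverifiable dump file:
zfs snapshot -r mainbranch@1
zfs send -R mainbranch@1 | zfs receive -u pool2/mainbranch
```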
Re: [OpenIndiana-discuss] ZfS migration scenario including zvols
From: Sebastian Gabler [mailto:sequoiamo...@gmx.net] Be careful, there are lots of ways to screw this up. Fortunately, not many of them result in data loss or anything like that. Just bad behavior. Specifically, I'm thinking: you want to send including properties, to preserve the nfs and iscsi properties, etc. But unfortunately, sending with properties also preserves the mountpoint. So the receiving filesystem fails to mount, and then ... I don't know what. Perhaps you have to export both pools and re-import, or manually tweak the mountpoints on the recipient filesystem... Um ... Also, most likely since you have nested filesystems, each of them has been able to snapshot separately from the others. The list of snapshots is probably not 100% identical on all filesystems, which means, if you try to do the recursive incremental send from the root, it will fail. The *easiest* thing to do is simply take a recursive snapshot now, and send a recursive stream based on that, but you'll give up snapshot history. The *best* thing would be to handle each filesystem/zvol individually, so you can do the incremental send with full snapshot history (without recursing into child filesystems.) I recommend looking at the list of zfs properties on your datasets, and thinking about which ones you want to preserve, and which ones you don't.
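Per dataset, the "best thing" above might look like the following sketch (dataset and snapshot names are hypothetical; -I replicates the whole run of intermediate snapshots for that one filesystem, preserving its history):

```shell
# Hypothetical names. Seed the destination from the oldest snapshot,
# then send every intermediate snapshot up to the newest with -I:
zfs send mainbranch/fs1@first | zfs receive -u pool2/fs1
zfs send -I @first mainbranch/fs1@latest | zfs receive -u pool2/fs1
```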
Re: [OpenIndiana-discuss] How to configure OI for proper daylight savings adjustments.
From: Robin Axelsson [mailto:gu99r...@student.chalmers.se] Is there anyone who has got this working properly? I confirm that it works correctly out of the box, for EST/EDT (New York time), with this line in /etc/default/init:
TZ=US/Eastern
and this line in /etc/rtc_config:
zone_info=US/Eastern
Re: [OpenIndiana-discuss] Added a page on the wiki about using Xvnc for multiple simultaneous graphical logins.
From: Hans J. Albertsson [mailto:hans.j.alberts...@branneriet.se] http://wiki.openindiana.org/oi/4.7+Remote+Graphical+Login:+Using+Xvnc+and+gdm That's interesting ... So correct me if I'm wrong: you VNC to your server on 5900, and you get a login prompt as if you were sitting down in front of the physical console. Every new connection to 5900 gets a new login prompt, so you don't have to consume more ports (5901, 5902, etc). But if your VNC client crashes for some reason, what happens? I'm guessing the VNC session dies and all the programs inside it die too. There is no session manager; when you log in to 5900 you can't reconnect to a previously existing session. Right?
Re: [OpenIndiana-discuss] Dhcp woes, (was Re: Anyone using OpenIndiana in production?)
From: Michael Stapleton [mailto:michael.staple...@techsologic.com] The Dhcp files can be stored on NFS and used by multiple servers. It defeats the purpose of redundant dhcp servers if you make them both dependent on non-redundant storage. But that's only tangential. The upshot of what you're saying is that the config files are in some directory, and they could be versioned and distributed just like I'm presently doing with svn.
Re: [OpenIndiana-discuss] Anyone using OpenIndiana in production?
From: Gerry Weaver [mailto:ger...@compvia.com] I have been checking out OpenIndiana as a possible file server and KVM host. I have several sites using OI for samba, dns, and VirtualBox. As far as stability is concerned, yes, it's stable. But it's not amazingly mature (see below). Others have said that some of the packages are not very well maintained, and they lack default configuration. I needed to configure bind and samba from scratch by hand, which was a huge pain. I actually built some linux VM's just to get their default config files, copied them over to OI, and destroyed the linux guests. I would like to run dhcp from OI, rather than running it from a linux guest. But I never got dhcp working on OI, so I still have the linux dhcp guests just for this purpose. I recommend the latest 4.1.x version of VirtualBox, which was super awesome and stable. Unfortunately I updated to 4.2.x, and it's still very buggy.
Re: [OpenIndiana-discuss] Dhcp woes, (was Re: Anyone using OpenIndiana in production?)
From: Jim Klimov [mailto:jimkli...@cos.ru] Well, at the time I documented this page, it worked (at the oi_151a5 timeframe, I believe): http://wiki.openindiana.org/oi/Using+host-only+networking+to+get+from+build+zones+and+test+VMs+to+the+Internet Yikes. Thanks for writing that up. But ... I have two servers, and some reservations, and the config files are stored in subversion. So at present, when I create a reservation for a new machine, I just edit one file (duplicate and modify a line) and commit. An svn post-commit hook then verifies integrity, and deploys to the mirror. Very easy. I can't believe there's this new dhcpmgr, dhcpconfig, several config files, some binary files that I presume I can't read or edit... 3+ packages that need to be installed... I'm sure it's very powerful, and maybe can even do what I'm doing *even* better. But that's very complicated and more than I want to invest. It's no wonder I didn't get it working.
Re: [OpenIndiana-discuss] 3737 days of uptime
From: dormitionsk...@hotmail.com [mailto:dormitionsk...@hotmail.com] Sent: Tuesday, March 19, 2013 11:42 PM A Sun Solaris machine was shut down last week in Hungary, I think, after 3737 days of uptime. Below are links to the article and video. Warning: It might bring a tear to your eye... It would only bring a tear to my eye because of how foolishly irresponsible that is. 3737 days of uptime means 10 years of never applying security patches and bugfixes. Whenever people are proud of a really long uptime, it's a sign of a bad sysadmin.
Re: [OpenIndiana-discuss] IO Stalls
From: Grant Albitz [mailto:gr...@schultztechnology.com] I have been chasing an issue with my openindiana host for some time. It is stable for a few weeks but then I find it rebooted with no kernel errors. This sounds like a driver issue. I've had similar problems on an R510 or R520, or something similar, I forget exactly which. That system was unstable and rebooted or crashed about once a week. The most effective thing we did was to add an Intel NIC and stop using the built-in broadcom NIC, but even so, it still crashed about once every 3 weeks or a month. I've had bad luck in general running solaris or openindiana on dell servers. (No problems on dell workstations.) Other people have had good luck running on dell servers. If possible, eliminate the PERC and use a plain old stupid SATA/SAS card. Sorry I don't have anything more helpful to offer. Good luck.
Re: [OpenIndiana-discuss] DLNA server
From: Michelle Knight [mailto:miche...@msknight.com] It looks like I'm going to have to install something on the server to publish the video directories in DLNA, which I've got no experience of. This is what I do:
sudo pkg set-publisher -p http://pkg.openindiana.org/sfe-encumbered
sudo pkg set-publisher -p http://pkg.openindiana.org/sfe
sudo pkg -y install serviio
As others have mentioned, configuring a profile specifically compatible with your device is crucial. With my TV, so far I've only been able to play, pause, FFx1, and FFx2. Any other button (REW, or FFx4) will halt the TV, requiring a power cycle. It's very annoying. I haven't put in the effort to fix a profile for my TV. As Hans said, you need to go to http://forum.serviio.org/ and basically just figure it out. And hopefully share profiles so other people with the same device can benefit.
Re: [OpenIndiana-discuss] multiple IP addresses, same NIC
From: Robbie Crash [mailto:sardonic.smi...@gmail.com] If you're not accessing clients on the remote 192.168.1.0 subnet, why are you adding the second network? Why are you not handling this on the router instead of the client? Static routes on a client are bad mojo. It's the router's job to route, let it do that. All you should need to do is tell the router to route all traffic for 192.168.10.0/24 to use whatever the VPN interface is. The problem is at the remote side. If they have a huge internal corporate network that happens to include 192.168.10.x/24 and 192.168.1.x/24 ... When I VPN to them and my LAN is 192.168.1.x/24, I have a subnet that overlaps with their pre-existing subnet. They can't route traffic to me without breaking one of their internal subnets. The most elegant solution (aside from renumbering my network) would be NAT. It would be nice to eliminate 192.168.2.x/24 from my house, and configure the firewall so when I send a packet to the VPN network, my source IP gets NAT'd to 192.168.2.x/24. However, I have not yet had any luck configuring pfsense to NAT the traffic first and then route it, NAT'd, across the VPN. At present, I have two problems I'm trying to solve in parallel. If I can make OI behave as expected, then I can use the multiple-subnets-on-a-single-LAN solution and move forward. Or if I can get the firewall to NAT as expected, then I can scrap the multiple-subnets idea and move forward. The issue here sounds like since the OI box already knows that it has a route to 192.168.10.10 over its default route, it doesn't need to use the secondary IP. That's not quite correct. Sure, if I didn't add the static route to 192.168.10.x via 192.168.2.1, then OI would try to reach 192.168.10.x via the default gateway. But that's irrelevant - by adding the 192.168.2.1 route, the system does in fact know it's supposed to reach 192.168.10.x via 192.168.2.1. 
The evidence is that when a packet leaves the NIC destined for 192.168.10.x, its destination MAC corresponds to 192.168.2.1. But unfortunately, the source IP is wrong. If you can't configure the router, PCI NICs are $9 these days, and that'll work for sure. I might do that. The main obstacle is that I would have to wait for it to arrive, and it would require downtime on the VM host, to solve something that should be solvable in software.
Re: [OpenIndiana-discuss] multiple IP addresses, same NIC
From: Robbie Crash [mailto:sardonic.smi...@gmail.com] The problem is at the remote side. If they have a huge internal corporate network that happens to include 192.168.10.x/24 and 192.168.1.x/24 ... When I VPN to them and my LAN is 192.168.1.x/24, I have a subnet that overlaps with their pre-existing subnet. They can't route traffic to me without breaking one of their internal subnets. I get that, but in your original email you stated you don't need to access their 192.168.1.0 subnet; unless all their traffic routes over that subnet internally you shouldn't have an issue. Their side will see the request coming from your VPN point, and will send traffic there, and your VPN server will send it to the proper client. No, there's something you seem to be missing. I'm making up the details in this email, but the concept stands: They have 192.168.1.x/24 in Buffalo. 192.168.10.x/24 in Syracuse. 10.10.10.x/24 in Toronto. 172.16.14.x/24 in Vancouver... and a hundred other sites. They have all their routers configured to support this. If somebody at any site sends traffic to 192.168.1.x/24, their routers know the traffic is routed to Buffalo. So if I get inside the network, using 192.168.1.x/24 in Boston, all those other sites can't talk to me, or can't talk to Buffalo. I have to either use a subnet that doesn't conflict, or I have to NAT and virtually use a subnet that doesn't conflict. If I actually use the new subnet, 192.168.2.x/24, which isn't used anywhere else in the company, then all traffic is routable to and from my network, which is good. But if I virtually NAT my 192.168.1.x/24 network, making my traffic appear as 192.168.2.x/24 as far as the company's concerned ... then I have no way to access their 192.168.1.x/24, because my systems will think the destination is local and hence won't use the router. I am saying that I'm ok using this NAT solution to avoid the need to renumber my systems. 
I'm only blocking the traffic from my local 192.168.1.x to the company's 192.168.1.x (and vice-versa), but I don't care about connecting to anything in the company's 192.168.1.x range. Make sense now? ;-) What IP address are you receiving from the VPN server? Their VPN server doesn't assign an IP address. This is not a mobile client VPN we're talking about, it's a site-to-site VPN. Firewall to firewall. Corporate office to home office. And I'm the IT guy. So I can do whatever I want and support whatever I want. The question is what do I want. Well, I have about a dozen or two systems in my house, including a mobile vpn server, site-to-site vpn's with other companies, two windows active directory domains, a few dns zones, and a virtualization infrastructure. While I *can* renumber, it'll cost me about a day's work. So the NAT solution is attractive.
Re: [OpenIndiana-discuss] multiple IP addresses, same NIC
From: Robbie Crash [mailto:sardonic.smi...@gmail.com] This is something that should be handled at the router, not at the client in software. It turns out, I reached a conclusion on the NAT possibility. In pfsense, you can NAT traffic before it goes across an openvpn, but you can't NAT traffic before it goes across an ipsec vpn. (Just a limitation of their software, until at least the next release, when they *might* add that feature.) At present, in pfsense, I would need one firewall to establish the VPN connection, and another firewall to NAT from that subnet to my internal subnet. Thanks to Jim's idea of a VNIC, I have a solution in client-side software. So this thread really doesn't need to continue... But it was an interesting and fun exercise to talk about.
Re: [OpenIndiana-discuss] multiple IP addresses, same NIC
From: Reginald Beardsley [mailto:pulask...@yahoo.com] Sent: Wednesday, March 06, 2013 3:34 PM How about summarizing on the wiki? I'm in favor, but in this case, I don't think there's anything to summarize ... Here is the summary:
sudo dladm create-vnic -l e1000g0 vnic0
sudo ipadm create-addr -T static -a 192.168.2.100/24 vnic0/v4static
sudo route -p add 192.168.10.0/24 192.168.2.1
And voila. New IP address and new MAC address on the same wire as my pre-existing LAN subnet, with a static route. Actually ... I believe all these commands are already on the wiki. I think I actually *got* these answers from the wiki, once I knew what to look for.
Re: [OpenIndiana-discuss] It was so easy... Re: xvnc-inetd only accepting one connection in total?? Huh??
From: Hans J. Albertsson [mailto:hans.j.alberts...@branneriet.se] And now I realise I haven't understood a thing... Nothing works. All new connection attempts are met with a request for a vnc password... But there is no password configured... Are you opposed to putting different users on different ports? It has been a long time since I thought it was worthwhile to mess around with gdm/xdm/etc.
Re: [OpenIndiana-discuss] multiple IP addresses, same NIC
From: Doug Hughes [mailto:d...@will.to] 2) explicitly set the route for 192.168.10.x : route add 192.168.10.0/mask 192.168.2.1 That's what I'm saying I have already done. I set the default route to 192.168.1.1, and I set a static route, 192.168.10.x/24 via 192.168.2.1. The route is in effect, as evidenced here: For simplicity, let's say 192.168.1.1 has MAC 11:11:11:11:11:11, and 192.168.2.1 has MAC 22:22:22:22:22:22. When I ping something on the internet, I see a packet go out my NIC, source IP 192.168.1.100, destination MAC 11:11:11:11:11:11, and destination IP 8.8.8.8. It all works, I get a ping response. When I ping 192.168.2.1 directly, I see a packet go out my NIC, source IP 192.168.2.100, destination MAC 22:22:22:22:22:22, and destination IP 192.168.2.1. It all works, I get a ping response. When I ping something on the other end of the VPN, I see a packet go out of my NIC, source IP 192.168.1.100, destination MAC 22:22:22:22:22:22, and destination IP 192.168.10.10 (or whatever.) The firewall drops the packet, because duh, the source IP isn't in the same subnet as the firewall. I am also exploring the NAT option, in case I'm not able to resolve the above problem.
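For anyone who wants to reproduce this kind of observation, watching the frames on the wire shows which gateway MAC and source IP each outgoing packet actually carries (the interface and destination below are assumptions, not taken from the poster's setup):

```shell
# Hypothetical interface and destination host. snoop prints each
# captured packet, and -x 0 dumps the full frame so the destination
# MAC address is visible alongside the IP header:
snoop -d e1000g0 -x 0 host 192.168.10.10
```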
Re: [OpenIndiana-discuss] xvnc-inetd only accepting one connection in total?? Huh??
Here is how I do it: If I'm not misunderstanding, I think this is what you want. https://code.google.com/p/simplesmf/ There is a simplesmf service to enable vnc-server. It starts automatically at startup, and shuts down automatically at shutdown... You configure user1 to always have a VNC session running on display :1, user2 on :2, and so on.
Re: [OpenIndiana-discuss] RAIDZ performance
From: Reginald Beardsley [mailto:pulask...@yahoo.com] For 3 disk RAIDZ1 I get 189-199 MB/s and 179 MB/s for 4 disk RAIDZ1. But for 4 disk RAIDZ2 I get 109-118 MB/s. I expected some loss in performance, but not that much. These are measured writing 64 GB of /dev/zero to the RAIDZ filesystem from a console window. For a 256 GB file I got 111 MB/s writing and 279 MB/s reading on 4 disk RAIDZ2. System is unloaded w/ 4 GB of ECC DRAM, using a 4-way mirror rpool in the s0 slices w/ RAIDZ in the s1 slices. Does the drop in RAIDZ2 write performance correspond to other people's experience? If so, why such a large hit? top suggests the system has spare CPU capacity (30-50% idle). For my current needs things look good, but I'd like to understand why a bit better. Or whether there is some tuning I should do. In my experience, for sequential single-threaded throughput, I consider each disk to be 1Gbit/sec, and they tend to add up exactly as you would wish. That is to say: for a 3-disk raidz1, I expect 2Gbit; for a 4-disk raidz2, I expect 2Gbit. Your measurements do indeed seem to be significantly off. I don't really have any suggestions why.
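Spelled out, the rule of thumb above works like this (the 1 Gbit/s, roughly 125 MB/s, per data disk figure is the assumption from the text; data disks = total disks minus parity disks):

```shell
# Rule of thumb from the text: ~125 MB/s (1 Gbit/s) sequential
# throughput per data disk.
MBS_PER_DISK=125

# data disks = total disks - parity disks
raidz1_3disk=$(( (3 - 1) * MBS_PER_DISK ))
raidz2_4disk=$(( (4 - 2) * MBS_PER_DISK ))

echo "3-disk raidz1: ~${raidz1_3disk} MB/s expected"
echo "4-disk raidz2: ~${raidz2_4disk} MB/s expected"
```

By this estimate both layouts should land around the same number, which is why the measured RAIDZ2 figure looks so far off.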
Re: [OpenIndiana-discuss] weird packet garbling problem
From: Roel_D [mailto:openindi...@out-side.nl] I use ASA5505's always. I never had this problem with solaris 1011, but those run on sun hardware. I also have solaris 10 on an old HP DL340 with bge's, also without problems. And OI 1.57 on VMware, also without the problems you describe. I use the cisco VPN windows client. I'm thinking I'll need to dig into the exact revisions of IOS that are running, and stuff like that. You don't happen to know yours offhand, do you? Is your cisco the default gateway for your servers? Otherwise I think OI sees a packet coming in from (for example) 172.18.12.12, which is your vpn ip-address; it then can't figure out where to reply to, and the messages start bouncing around??? Oh, no. If it were something that obvious, it would be totally broken and easy to find the problem. Remember, I can connect to the OI machines, and all the other machines in the network. Routing is not confused. I can do any type of traffic to the non-OI machines, no problem, and I can even do certain types of traffic to/from the OI machines without problems. (I can type ssh commands ad nauseam, and never have a problem, because it's a single character at a time.) The only problem is certain types of traffic going to the OI machines. (If I paste a command at the ssh prompt, sometimes it works, and sometimes the response packet on my laptop says incoming packet garbled on decryption, or whatever.)
Re: [OpenIndiana-discuss] Zfs import fails
From: Ram Chander [mailto:ramqu...@gmail.com] I had a zpool that was exported on another system, and when I try to import, it fails. Any idea how to recover? Start by proving there isn't some other problem. Import the pool again on the same system that did the export. Assuming you can successfully import, capture a zpool status, then export again and get back to your new system... Show us the zpool status for the pool while it's functional in the old system. Your error message said missing device (one or more devices currently unavailable). Make sure you run devfsadm -Cv on the new system, and make sure the new disks are all appearing. Your pool isn't based on partitions or slices, is it? If so, you'll have to specify the devices manually. (I think it's zpool import -d.) What type of disk controllers do you have in the new and old systems? Many HBA's will occupy some space on the drives for their config metadata, transparently to the OS. This makes the drives incompatible with other systems, other than similar compatible HBA's. Ideally, you'll have simple braindead SATA/SAS controllers on both the source and destination machines, because they won't add any custom data to the drives; they just present the drive to the OS, simple as that.
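A sketch of the troubleshooting steps above on the new system (the pool name is made up; -d points import at a device directory, which matters when the pool lives on slices or partitions rather than whole disks):

```shell
# Rescan so every attached disk shows up under /dev/dsk:
devfsadm -Cv
# List the pools the system can find to import:
zpool import
# If the pool is on slices/partitions, point import at the device
# directory explicitly (pool name "tank" is hypothetical):
zpool import -d /dev/dsk tank
```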
Re: [OpenIndiana-discuss] weird packet garbling problem
From: James Carlson [mailto:carls...@workingcode.com] Which of the many Broadcom drivers is this? If it's bnx, try editing /kernel/drv/bnx.conf, and uncommenting and changing the checksum= line to set it to all zeros. One of the systems is using bge, and the other is using bnx. I notice there is no checksum= line in the bge.conf file ... It seems that if you want to disable checksum offload with bge, you have to edit the source. Will give this a try and see how it goes. Thanks for the suggestion.
Re: [OpenIndiana-discuss] weird packet garbling problem
From: Edward Ned Harvey (openindiana) [mailto:openindi...@nedharvey.com] I am having a really hard time coming up with a plausible explanation for this, other than some kind of kernel bug with openindiana... Found a new clue, which is totally unbelievable, yet totally enlightening. The firewall is a cisco asa 5505. We have both the anyconnect and ipsec vpn for mobile clients enabled. I tried them both, and got the same result with both (thinking maybe it was a problem with the vpn client.) My home firewall is a pfsense device. So today, I enabled a point-to-point ipsec vpn between my home and work. Now I can sit at home with my laptop and use the laptop VPN client to connect directly to the failing OI hosts... Or I can disconnect my laptop vpn client, enable the firewall vpn, and then ssh to the failing OI machines. When I use the IPSec or Anyconnect VPN client, I have the problem. When I use the site-to-site VPN, I don't have the problem. So I've reached two conclusions: -1- The problem is related to the Cisco ASA firewall and mobile VPN connectivity. -2- The problem is related to OpenIndiana. (No problems connecting to other ssh/vnc systems in the office: linux, mac, or windows.) I have not yet tried using a mac/linux VPN client. Might learn something there too. I don't know why there would be a bad interaction between the OI machines and the Cisco ASA. But there is. I think I'll probably try to lay it on Cisco support next. They'll probably tell me to upgrade IOS, even though this is a relatively current stable version: the latest bugfix release of the almost-latest line, as of last July. The one they recommended as "the most stable one we're recommending for now."
Re: [OpenIndiana-discuss] Time slider ready for prime time?
From: Stefan Müller-Wilken [mailto:stefan.mueller-wil...@acando.de] can anyone comment on this? Will time slider work reliably enough on a developer workstation to use it in real development work? Yup, it's awesome. The only catch that I'm aware of is that immediately after OS installation, you need to become root, change the root password, and then change it back to what you want. (This is a known bug, but it hasn't been fixed yet.) After that, you can launch the Time Slider config utility, and it works great.
Re: [OpenIndiana-discuss] Auto snapshots
From: Edward Ned Harvey (openindiana) [mailto:openindi...@nedharvey.com] In 151a5, I had to fiddle with the password before the time-slider GUI config worked. But the present release is 151a7, so maybe that's a resolved issue now. Either way, the workaround was trivial, so I would say you can safely call the issue resolved. Yesterday, I built a 151a5 server and upgraded to 151a7. The bug is still present. A few months ago, I did file a bug report, so eventually I'm sure it'll be resolved. But like I said, the workaround is trivial, so, ... whatever.
[OpenIndiana-discuss] weird packet garbling problem
I am having a really hard time coming up with a plausible explanation for this, other than some kind of kernel bug with openindiana...

I have two systems in the office, a Dell PowerEdge SC 1435 (embedded Broadcom 5721 NIC) and a Dell PowerEdge 2950 (embedded Broadcom 5708 NIC), both running OI 151a5 or newer. Inside the office, everything works fine. But when I go home, VPN into the office, and ssh or vnc to these two boxes, I get packet garbling and retransmissions and dropped connections, but *only* on these two machines, *only* from the VPN connection, and *only* for certain specific types of traffic.

Here's an example: I'm at an ssh prompt. I can type in commands all day and night; it always works fine when I'm typing. (One character at a time, typing via keyboard, I can hold down a key and completely fill the screen, 4320 keystrokes, no problem.) But I'm following a procedure, so I'm also pasting commands. Sometimes when I paste commands, I get: PuTTY Fatal Error: Incoming packet was garbled on decryption. (Disconnected.)

It's not an MTU thing. (First of all, I checked all the MTUs looking for any problems.) But a better clue is that I can paste the same command over and over and over (obviously the same packet size each time) and it only fails after the Nth repetition. For testing purposes, I ssh into the box and paste this command: echo hello there buddy, whatcha doing > /dev/null Obviously nowhere near the MTU size. I keep pasting it over and over until the connection fails, count how many times I can successfully paste it before failure, and repeat. My results were: 5, 0, 12, 0, 9, 0. Deterministic inputs, nondeterministic outputs. (Well, probably deterministic, but not determined by the inputs that I'm controlling.)

I have a workaround. I ssh into some other machine on the network, and then ssh to the machine in question. Infinite success. Paste the above command until my fingers are tired and I'm satisfied that there's no problem.
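The manual paste test above can be scripted: run the same command repeatedly over the connection and report how many successes precede the first failure. A hedged sketch, where `oi-host` is an assumed hostname standing in for one of the failing machines:

```shell
# Run a command repeatedly until it fails (capped at 100 successes),
# then print the success count -- mirrors the manual paste test.
count_until_failure() {
    n=0
    while "$@" >/dev/null 2>&1; do
        n=$((n + 1))
        [ "$n" -ge 100 ] && break   # stop after 100 successes
    done
    echo "$n"
}

# Against the failing machine ("oi-host" is an assumed name):
#   count_until_failure ssh oi-host 'echo hello there buddy, whatcha doing > /dev/null'
```

Running it a few times in a row would reproduce the 5, 0, 12, 0, 9, 0 style of result mechanically, which makes it easier to compare VPN vs. LAN paths.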
The problem *only* happens when I ssh (or vnc or whatever) directly to the machine from the VPN client. And obviously, it doesn't happen when I ssh to some other machine from the VPN client (and then ssh to the machine in question). The only difference between the LAN traffic, which works perfectly, and the VPN traffic that's having a problem is that the VPN traffic needs to go through a router. It's not the router that's messing up the traffic, or else I would expect to see the same problem on a different machine. It's hard for me to imagine a driver problem that would only affect traffic that requires a router. But maybe. Maybe there's a Broadcom driver problem that doesn't affect LAN traffic but does affect traffic going through a router.

Anyway, I'm at a loss for how to debug further. I suppose I could create a dummy network with a really simple router in between and see if the problem persists, using a different router and no VPN. Also, if I do that, I'll be able to wireshark both sides to see what happens. For now, on my VPN, I can only wireshark the OI side of the equation; I can't wireshark the traffic at my VPN endpoint. I also have one Intel NIC I can stick into one of the machines.
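The working hop-through-another-host workaround can be made transparent with OpenSSH's ProxyCommand, so a plain ssh to the failing machine routes through a LAN host automatically. A sketch, where `gateway` and `oi-host` are assumed names for the intermediate LAN machine and the failing OI box:

```shell
# One-off: tunnel the ssh session through a LAN host that works.
ssh -o 'ProxyCommand=ssh -W %h:%p gateway' oi-host

# Or persist it in ~/.ssh/config so plain "ssh oi-host" hops automatically:
#   Host oi-host
#       ProxyCommand ssh -W %h:%p gateway
```

This keeps the per-connection workaround out of the way while debugging continues.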
Re: [OpenIndiana-discuss] weird packet garbling problem
From: Jim Klimov [mailto:jimkli...@cos.ru] Basically, the workaround for us was to enable this line in /etc/system: set ip:dohwcksum = 0

Oh well, thanks for the suggestion... Unfortunately, it didn't make any difference...
Re: [OpenIndiana-discuss] weird packet garbling problem
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] Unless TCP is offloaded from the kernel (so that checksums are computed in the adaptor card), it is exceedingly difficult for wrong data to pass TCP's checksumming and get passed up to the socket that SSH uses.

In the one packet capture I was able to do so far, running wireshark on the OI side of the connection, I saw that all the checksums were 0s, triggering the bad-checksum flag in wireshark. When I googled around, a wireshark FAQ said that if all your bad checksums are 0s, and only on sent packets (not received packets), it means the checksumming is happening at a layer lower than wireshark, and you can ignore the errors by toggling a checkbox. That fit the description, so I did it. (The lower layer might be either kernel or hardware; I don't know.)

The second thing I saw was this: every time I sent/received a single character (typing on my keyboard), the OI system received an ssh packet, sent an ssh packet (terminal echo), and sent an ACK for the first packet. But when I pasted some command that caused the failure, the OI machine saw the incoming packet repeated something like 100 times, all within 1 ms of each other. Then it spewed out something like 100 responses, all within about 1 ms, which wireshark flagged as errors, and something like 100 duplicate ACKs, again all within about 1 ms. And then the TCP FIN. I know my laptop didn't send the packet 100 times. (First, it happened too quickly for that to be possible; second, it only happens when connected to an OI server; third, it happens for both ssh and vnc traffic.) It's the sort of thing that makes me suspect a faulty interrupt or interrupt handler. The more I think about it, the more I am suspicious of the Broadcom NIC driver.
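The capture analysis above can be reproduced on the command line with tshark. A sketch, assuming capture on the OI box (`bge0` is an assumed interface name); the `tcp.check_checksum` preference is the same setting as the Wireshark checkbox for ignoring offload-related zero-checksum "errors":

```shell
# Suppress the bogus zero-checksum errors caused by checksum offload,
# and show only retransmissions and duplicate ACKs during the paste test.
tshark -i bge0 \
    -o tcp.check_checksum:false \
    -Y 'tcp.analysis.retransmission || tcp.analysis.duplicate_ack' \
    port 22
```

If the ~100 repeated packets and duplicate ACKs show up here but not in a capture taken at the VPN endpoint, that would point at the OI side (driver or interrupt handling) rather than the path.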