Hi all,
Just wondered if anyone could confirm something here. I've got an OI 151a7 box
with a QLogic FC card and I'm having a few issues with MPxIO multipathing on
it (i.e. it won't do it). One thing I wondered, though: should this be using
the qlc driver? The card seems to work, but I'm
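For what it's worth, a rough sketch of the things I'd check first (assuming the
stock qlc/fp stack; this is just my own checklist, not a known fix):
# prtconf -D | grep qlc        # is the qlc driver actually attached to the card?
# mpathadm list lu             # does MPxIO see any multipathed LUNs at all?
# stmsboot -D fp -e            # enable MPxIO on the FC ports (needs a reboot)
If stmsboot says it is already enabled, the mpxio-disable setting in fp.conf is
worth a look.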
I haven't had any luck googling for this, except for using IPF.
Is it possible to bind OI (151a8) CIFS to one interface - I have a
server which sits on an Internet link and my LAN with two ethernet
interfaces and I only want CIFS to listen on the LAN interface.
Possible? (I know I can
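In case it helps anyone searching later, the IPF route I was looking at is
roughly this (e1000g0 stands in for the Internet-facing NIC, purely as an
example), in /etc/ipf/ipf.conf:
block in quick on e1000g0 proto tcp from any to any port = 445
block in quick on e1000g0 proto tcp from any to any port = 139
block in quick on e1000g0 proto udp from any to any port = 137
block in quick on e1000g0 proto udp from any to any port = 138
and then enable the service with: svcadm enable network/ipfilter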
On 2013-07-23 16:00, Carl Brewer wrote:
I haven't had any luck googling for this, except for using IPF.
Is it possible to bind OI (151a8) CIFS to one interface - I have a
server which sits on an Internet link and my LAN with two ethernet
interfaces and I only want CIFS to listen on the LAN
On 07/20/2013 19:38, Alexander Pyhalov wrote:
Hello, all.
4) Note, not all currently available perl5.10 modules are packaged for
perl5.16. Missing modules include:
- module/sun-solaris
- authen-pam
- subversion
Just wanted to note that these three modules are available for perl 5.16
in
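If you want to check the repo yourself, something along these lines should show
what is published (exact package names vary, so treat these as illustrative):
# pkg list -a | grep -i perl
# pkg search -r authen-pam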
On 23/07/2013, at 4:00 PM, Carl Brewer c...@bl.echidna.id.au wrote:
I haven't had any luck googling for this, except for using IPF.
Is it possible to bind OI (151a8) CIFS to one interface - I have a server
which sits on an Internet link and my LAN with two ethernet interfaces and I
Been a while, thank you all for the recommendations. It took six days to
restore all the data from backups! The LSI MegaRAID 9260-8i doesn't
support JBOD, so I had to restore using the one large disk volume again.
I will be shopping for a new RAID controller card that supports JBOD and
will
I'm looking at purchasing an iSCSI disk RAID array and using OI to mount
the iSCSI volume and format it with ZFS. Anyone out there doing this? I'd
be interested in successes or issues that I should look out for.
Thanks
--
C. J. Keist Email: cj.ke...@colostate.edu
Systems
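For what it's worth, the initiator side on OI is only a few commands; a minimal
sketch (the portal address and device/pool names below are made up):
# iscsiadm add discovery-address 192.168.10.50:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
# format                       # the array's LUN shows up as a new cXtYdZ disk
# zpool create tank c5t600A0B800029E5D2d0
Whether that's a good idea with a hardware-RAID LUN underneath is a separate
question, of course.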
I'm trying to understand what would be necessary to add LZ4 support to my OI
installation without creating too much of a headache. Is this something
available in hipster, or should I look into creating a fresh illumos-gate build
along with the userspace tools and grub? There's very little
Yes, it's part of hipster. Just keep in mind that hipster is a work in
progress and things may break but later be fixed. I've been on hipster
since the beginning and I am impressed with the speed at which it's
evolving. It has been reasonably stable, but some issues remain.
On 07/23/2013 01:09
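For reference, once you're on a hipster build that has the feature, turning it
on is the usual property/feature-flag step (pool and dataset names are
placeholders):
# zpool get feature@lz4_compress tank
# zpool upgrade tank                  # only if the pool predates feature flags
# zfs set compression=lz4 tank/data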
I'd go with an HBA and present ZFS with the raw disks. Save yourself a couple of
bucks and a bunch of potential hassle. I've had good luck with the LSI 9200-8e
(external) and 9210-8i (internal). Both are PCIe 2.0. The 9207-8i and 9207-8e
are the PCIe 3.0 equivalents, but I have not tested them. 85
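Presenting the raw disks to ZFS is then just the normal pool creation, e.g.
(disk names and layout purely for illustration):
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0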
Has the DCE cache problem been fixed in OI? The SmartOS folks proposed a
solution, but even their work appears to have stopped.
I am still running this bit of wisdom every three minutes on 151a1, but I am
reaching traffic levels where the cache still reaches up to three hundred
megabytes.
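For comparing numbers: the way I've been eyeballing the cache size is via the
kernel memory caches, e.g.
# echo ::kmem_cache | mdb -k | grep dce
assuming the cache is still named dce_cache in the ip module on your build.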
If I understand correctly:
you are trying to buy an iSCSI array, e.g. something like a Dell 3200,
http://www.dell.com/us/business/p/powervault-iscsi-arrays or similar,
with hardware RAID + an old-style insecure filesystem on it, and share this volume
via iSCSI as a single-disk vdev to OI and format it with ZFS?
On OI, an LSI 9211 reflashed to IT mode + an LSI expander is the best solution.
In the future the LSI 9207 will be a better option (it comes with IT firmware out
of the box).
But this controller is quite new. While OmniOS supports it already, I have heard
of problems with the current OI 151a7.
On 23.07.2013 at
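For the reflash itself, LSI's sas2flash utility is the usual route; roughly (the
firmware/BIOS file names depend on the release you download, so the ones below
are only placeholders):
# sas2flash -listall
# sas2flash -o -f 2118it.bin -b mptsas2.rom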
I would start with SmartOS, but any modern live boot image will do.
j.
On Jul 8, 2013, at 11:36 AM, CJ Keist cj.ke...@colostate.edu wrote:
Thank you. It is ZFS version 28. This is a Supermicro chassis with 36 internal
drives, so I cannot move another system in to mount the ZFS pools. I could try
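If you do boot a live image on the same box, a read-only import under an
alternate root is probably the safest first step (pool name is a placeholder):
# zpool import                                   # list importable pools
# zpool import -f -o readonly=on -R /mnt tank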
Yes, that would be the case. I'm looking at an EMC iSCSI storage unit. So
it's pretty much the same case that I have with our current OI system,
using a RAID controller to present one single volume to create a ZFS file
system on, correct? There is the potential of losing the entire ZFS file
system
What exactly does this mean? What is leaked space? Is it repaired by
virtue of zdb walking the filesystem, etc?
# zdb -b pool
Traversing all blocks to verify nothing leaked ...
Error counts:
errno count
leaked space: vdev 0, offset 0x7cfeea00, size 3072
leaked space: vdev 0, offset