On Aug 8, 2014, at 11:08 AM, Eric Sproul espr...@omniti.com wrote:
On Fri, Aug 8, 2014 at 10:43 AM, Richard Elling
richard.ell...@richardelling.com wrote:
sftp is there
+1. FTP is more and more a legacy/niche service. The world has moved
on and there are better ways of distributing
I scrub weekly already :)
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
Second this. The DC S3700 are very good.
Okay, so far so good. The 100GB S3700 came today. Threw it in the tank pool
as a log device, and set sync=standard. Re-ran CrystalDiskMark and got
93MB/sec writes. Given that reads are running 106MB/sec, I think it's
time to call this a win... Thanks all
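For anyone following along, the steps described above boil down to something like this (the pool name is from the thread; the device name is a placeholder, so check `format` output for your actual disk id):

```shell
# Add the S3700 as a dedicated log (SLOG) device to the existing pool.
# c4t2d0 is a placeholder -- find the real device name with `format`.
zpool add tank log c4t2d0

# Re-enable honoring of sync writes now that the SLOG can absorb them.
zfs set sync=standard tank

# Confirm the log device is attached and healthy.
zpool status tank
```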
OTOH, the only reason to use ESXi in the first place would be if you want
to virtualize something that doesn't fit well onto Illumos KVM.
Maybe I am demonstrating my ignorance here, but another might be: a decent
GUI. This is one of the places where KVM falls down badly, IMO. You seem
to have
On 5/28/2014 10:34 AM, Saso Kiselkov wrote:
On 5/28/14, 4:08 PM, Schweiss, Chip wrote:
Intel has several SATA SSDs with proper super-cap protected caches that
make good log devices.
I'd recommend looking at an Intel DC S3700. The 200 GB or 400 GB
varieties promise ~3 4k random write IOPS
(merging comments to Saso and Jim)
I don't think I mentioned my environment - if not, my apologies. This is
a SOHO/Lab setup, so things like zeusram are non-starters. The basic
network infrastructure is gigabit, so iSCSI ZIL would suck badly, I
suspect. As far as over-provisioning the 840PRO,
It looks to me like Saso's design is active/standby failover. Zpool
import on the standby should obtain a clean transaction group as long
as the originally active system is still not using the pool. The
result would be similar to the power fail situation.
As long as the right fencing is
Assuming you have real SAS devices in the pool, not SATA with interposers,
you can use SCSI reservations. This can block the other host from
accessing a pool you are about to take over.
sg3_utils has utilities for managing SCSI reservations.
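A rough sketch of the fencing with sg3_utils' `sg_persist` (SCSI-3 persistent reservations); the key value and device path are placeholders, not from this thread:

```shell
# Register a reservation key from the host that should own the pool
# (the key is arbitrary, but each host must use a different one).
sg_persist --out --register --param-sark=0xdead0001 /dev/rdsk/c5t0d0s0

# Take a Write Exclusive - Registrants Only reservation (type 5),
# blocking unregistered initiators from writing to the disk.
sg_persist --out --reserve --param-rk=0xdead0001 --prout-type=5 /dev/rdsk/c5t0d0s0

# Verify the reservation from either host.
sg_persist --in --read-reservation /dev/rdsk/c5t0d0s0
```

You would repeat this per disk in the pool as part of the takeover script, and release/preempt the reservation on failover.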
The data pool is in fact all SAS. 8 1TB
So I've been running with sync=disabled on my vSphere NFS datastore. I've
been willing to do so because I have a big-ass UPS, and do hourly backups.
But, I'm thinking of going to an active/passive connection to my JBOD,
using Saso's blog post on ZFS (zfs-create.blogspot.com). Here's why I think
cked.) If you switch manually from host A to B, all is well, since
ZFS does not depend on sync writes (the ZIL) for pool integrity. It
does depend on cache flushes across all disks for pool integrity. The
harm from sync=disabled is that when the system comes back up, the
data may not be
On 5/7/2014 10:57 PM, Dan Swartzendruber wrote:
Any documentation on getting a CIFS share working with OmniOS? All I'm
finding on the web is Solaris11, which has some key things that OmniOS
doesn't in smbadm, so the instructions fall apart rather early.
OpenIndiana instructions?
None
@1.0.1.7,5.11-0.151009:20140407T29Z
These packages do not require a new BE or a reboot. You can perform this
upgrade with minimal service interruption. Please update your systems now
and restart any services that link against OpenSSL libraries to arrive at
a
safe state.
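One way to carry out that advice, sketched below; the service FMRI at the end is an example, not a list of everything on your box, and `pldd` availability/output can vary:

```shell
# Apply the fixed OpenSSL packages (no new BE, no reboot required).
pkg update 'openssl*'

# Find long-running processes that still map the old libssl.
# pldd prints the dynamic libraries a process has loaded.
for pid in $(pgrep -u root 2>/dev/null); do
    pldd "$pid" 2>/dev/null | grep -q libssl && ps -o pid= -o args= -p "$pid"
done

# Restart the SMF services that own those processes, for example:
svcadm restart svc:/network/ssh:default
```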
Theo, I am puzzled. I updated my box, and it did create a boot
environment with the fix in it, so I can't get it until I reboot...
Maybe I updated the wrong way? I did 'pkg image-update', which is how
I usually do things.
Dan,
If you simply do a pkg install or pkg update it will
ly referring to that subdir.
pkgadd -d CNCclusterglue.pkg
That should give you whatever's in that .pkg file. Repeat with the other
.pkg files.
Ah, okay, that's got it, thanks! Kinda puzzled by the manpage, which seems
to be telling me that if I do 'pkgadd' with no arguments, it will serve up
On Fri, Apr 4, 2014 at 4:14 PM, Dan Swartzendruber dswa...@druber.com
wrote:
Ah, okay, that's got it, thanks! Kinda puzzled by the manpage, which
seems to be telling me that if I do 'pkgadd' with no arguments, it will
serve up any packages in /var/spool/pkg, and if I give '-d SOMEDIR', it
will do
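That matches the SVR4 pkgadd behavior as I understand it; a quick sketch (the /tmp paths are examples, and the package name follows the thread's CNCclusterglue example):

```shell
# With no -d, pkgadd offers whatever is spooled in the default directory:
ls /var/spool/pkg
pkgadd

# With -d, it reads from the named datastream file or directory instead:
pkgadd -d /tmp/CNCclusterglue.pkg        # single datastream file
pkgadd -d /tmp/pkgs CNCclusterglue       # directory plus package name
```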
I have had this with current OmniOS and ESXi 5.1, but not guest VMs. Haven't
tried with ESXi 5.5 yet.
Ben Summers b...@fluffy.co.uk wrote:
Alexander
I note this is a VMware VM. If you install VMware tools, you will get crashes
when you power off the VM in some versions of VMware.
Which
This is all very strange. I saw stuff like this all the time when I was
using ZFS on Linux, due to timing where an HBA would not present devices
quickly enough, resulting in missing pools, missing/unmounted datasets,
etc, which would all get 'fixed' if you manually re-did them, but I've
never
I've seen that bug on SmartOS. Fixed in the last month or two.
Any explanation as to what was happening?
+--
| On 2014-03-05 12:29:57, Dan Swartzendruber wrote:
|
| Any explanation as to what was happening?
This is the bug I was hitting: http://smartos.org/bugview/OS-2616
Devices wouldn't be available at boot
Re-reading Mark's reply, I'm not so sure about my answer. I want to take
a look at this...
I can't actually swear that a dynamically grown LUN wouldn't work, it just
seems unlikely, but who knows? I agree the simplest thing is to have them
give him another, bigger LUN, mirror them, detach the first, then make sure
the new one gets grown...
Yes, that's the property I was thinking of. I don't have an easy way to
confirm/disprove that an iSCSI LUN being resized will work for the OP - I
am dubious, since the auto-partition code creates (AFAIR) a full-sized EFI
partition. OmniOS would have to resize it and do who knows what juju.
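Both approaches discussed above look roughly like this (device names are placeholders; I'm assuming the property in question is `autoexpand`):

```shell
# Path 1: mirror onto a bigger LUN, then drop the small one.
zpool attach tank c2t0d0 c2t1d0     # c2t1d0 = new, larger LUN
# ...wait for the resilver to finish (watch zpool status), then:
zpool detach tank c2t0d0

# Path 2: if the backend grew the LUN in place, let ZFS expand onto it.
zpool set autoexpand=on tank
zpool online -e tank c2t0d0         # -e: expand to use all available space
```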
Fascinating. Thanks for the experiment!
I will try that as soon as I can take down the array. 10 Gbit on one
array shouldn't need more than 2 new CPU cores eh?
Well, it might or it might not. You'd have to know for sure that even if
it does, the guest is multi-threaded enough to take advantage of them.
Note that if you give too
I've been running an install on r151008. rpool mirrored on two WD black
160GB sata drives. 8 SAS nearline drives in a raid10. Two samsung 840PRO
128GB drives as l2arc. The sas and l2arc drives are on an LSI HBA, and
the rpool on motherboard sata ports. Twice now, I've had drives disappear
-
This is perfect. I was reading on Hardforum, from a user named GEA, saying
about a year ago that it's unsupported and not recommended, so I figured why
not bring the question to the forum and get the answer as to why!
I seem to recall there being sound reasons not to share the same device
between
On my systems, the X-2 cards claim 32 gbps. I sued IPoIB, and have
seen NFS transfers over 6 gbps and snapshot send/receive as
high as 1.8 gbps while the system was simultaneously utilized during
cluster computation.
Ian
I assume you meant 'used', not 'sued'? If not, your lawyer
Well, this is all very encouraging! I guess there is interest and actual
usage out there.
Unfortunately, I need FDR-ish speeds (at least 4GB/s), the entire network
is setup for FDR, and I only have ConnectX-3 adapters, so it's Linux for
me
until some new developments emerge.
Dan, I can
Many thanks Günther and Dan for your answers. But still I have some
doubts:
a) I can't assign real hardware access to this VM, but I can assign
disks using RDM.
b) How can I calculate how much ARC cache I need?
c) Is it safe to use the vmxnet3 NIC driver for this production
environment?
Hi all,
Next month, we need to deploy an OmniOS virtual machine to act as a storage
server under ESXi 5.1. We cannot use a bare metal server, and I know that
is a problem.
We are thinking of disabling the ZIL. Is it recommended when OmniOS is
installed as a VM? And we cannot use PCI passthrough
I have an IBM M1015 HBA connected to a JBOD chassis with 6 SAS disks in a
3x2 raid10. I was running this under ZFS on Linux with no issues
(virtualized under ESXi 5.1). I installed OmniOS with the latest updates,
shut down the Ubuntu ZoL guest, removed the HBA from its config, added the
PCI card
Was this stable or bloody? There were some recent changes to better
support the MPT Skinny firmware on HBAs like the M1015:
https://www.illumos.org/issues/3500
If you've got driver/storage/mr_sas@0.5.11,5.11-0.151005 as of April
10, you should have these changes. They do not exist in
On Mon, Apr 29, 2013 at 5:10 PM, Richard Elling
richard.ell...@richardelling.com wrote:
This is a power management issue. The drive firmware can be set to
handle
the
power on explicitly or not. See the power-condition setting in the
sd(7d)
man page.
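The sd(7d) tuning mentioned above usually goes through `sd-config-list` in sd.conf; a sketch, where the vendor/product string is a placeholder that must match your drive's inquiry data exactly (VID is 8 characters, space-padded), and `power-condition` is the tunable I believe is being referred to:

```
# /kernel/drv/sd.conf -- disable explicit power-condition handling for a
# given drive model. "SEAGATE ST91000640SS" is a placeholder; substitute
# the actual inquiry VID/PID of your disks.
sd-config-list = "SEAGATE ST91000640SS", "power-condition:false";
```

A reboot (or driver reload) is needed for sd.conf changes to take effect.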
This is what I was alluding to as
In message 793607784c777520a314a1288701f0f6.squir...@webmail.druber.com,
Dan Swartzendruber writes:
Sorry for being dense. This is all new to me. What is the right
setting?
I happened to notice after the appliance had been running for awhile that
3 services are shown as being in maintenance state.
root@omnios-appliance2:~# svcs -xv
svc:/network/rpc/gss:default (Generic Security Service)
State: maintenance since April 23, 2013 11:33:17 AM EDT
Reason:
I thought we were talking about running OmniOS inside of VMware. My
mistake.
We are. And OmniOS is serving up files to VMware via NFS. But the files
it is serving up are for other VMs, not OmniOS (that would be a
chicken-and-egg problem, which is why 'all in one' setups like this require
a small
On 4/24/2013 7:13 AM, Dan Swartzendruber wrote:
I happened to notice after the appliance had been running for awhile
that
3 services are shown as being in maintenance state.
Hmm, per the description it seems the state is entered upon
administrator request.
What do the logs in /var/svc
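The usual triage for a service stuck in maintenance, sketched here with the FMRI from the post (the log path follows SMF's slash-to-dash naming convention):

```shell
# Show why the service is down and which log to read.
svcs -xv svc:/network/rpc/gss:default

# The per-service log usually names the real failure.
tail -50 /var/svc/log/network-rpc-gss:default.log

# Once the underlying cause is fixed, clear the maintenance state.
svcadm clear svc:/network/rpc/gss:default
```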
New to the list, so forgive me if this is an FAQ (I googled and turned up
nothing.) I just installed omnios as a ZFS virtual appliance serving up
virtual disks to ESXi via NFS. Working great. Is there a way to install
the standard zfs auto-snapshot service? Thanks!
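If the auto-snapshot service isn't packaged for OmniOS, a bare-bones cron stand-in is easy to write. The dataset name below is an assumption, and the `echo` makes it a dry run; remove it to actually take snapshots:

```shell
#!/bin/sh
# Bare-bones stand-in for the zfs-auto-snapshot service: create one
# timestamped snapshot per cron run. DATASET is an assumption -- point
# it at your own dataset. The echo keeps this as a dry run.
DATASET=tank/vmdisks
STAMP=$(date -u +%Y-%m-%d-%H%M)
echo zfs snapshot "${DATASET}@auto-${STAMP}"
```

Wire it into root's crontab (e.g. `0 * * * *` for hourly) and pair it with a pruning pass that destroys snapshots older than your retention window.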
On Tue, Apr 23, 2013 at 4:33 PM, Dan Swartzendruber dswa...@druber.com
wrote:
Trivial repro, on ESXi 5.1 with the latest patches: download the OVA for
OmniOS bloody, first boot.
Did you run a 'pkg update' after setting up the system based on this
image? That OVA was community-contributed