On Tue, Nov 25, 2014 at 2:04 PM, Schweiss, Chip c...@innovates.com wrote:
Started an installation on a new Haswell-based Supermicro server (X10DRU-i+
motherboard: http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRU-i_.cfm)
The floppy controller is completely gone from these systems
On Tue, Nov 25, 2014 at 3:34 PM, Paul B. Henson hen...@acm.org wrote:
On Tue, Nov 25, 2014 at 03:04:35PM -0600, Schweiss, Chip wrote:
The floppy controller is completely gone from these systems and is causing
the installer to panic when loading.
Anyone aware of a workaround?
Check
X2APIC is the culprit. It needs to be disabled in the BIOS. With that change,
the install worked without the floppy at all.
Hopefully this will be useful to someone else.
-Chip
On Tue, Nov 25, 2014 at 3:49 PM, Schweiss, Chip c...@innovates.com wrote:
I'm beginning to suspect this is something else. OpenIndiana's ISO panics
I had a system crash this morning when I/O on a pool was hung with
failmode=panic.
That was 7 1/2 hours ago and it's at 64%. I'm going to let it complete
so the failure can be analyzed further, but I really need to find a way to
significantly speed up the crash dumps.
This system has 256 GB of RAM.
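For anyone else fighting slow dumps on a big-memory box, a minimal sketch using the stock illumos dumpadm(1M) tooling; the zvol path is an example layout, not necessarily this system's:

# dumpadm                                # show the current dump configuration
# dumpadm -c kernel                      # dump kernel pages only (smallest useful dump)
# dumpadm -d /dev/zvol/dsk/rpool/dump    # use a dedicated dump zvol, not swap

Restricting the dump content is usually the biggest win on a 256 GB machine, at the cost of having less state to analyze.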
On Wed, Nov 5, 2014 at 2:36 PM, Dan McDonald dan...@omniti.com wrote:
On Nov 5, 2014, at 3:31 PM, Schweiss, Chip via illumos-discuss
disc...@lists.illumos.org wrote:
I had a system crash this morning when I/O on a pool was hung with
failmode=panic.
What version of OmniOS are you
I went to grab the latest Bloody today; the download links seem to be
broken on this page: http://omnios.omniti.com/wiki.php/Installation
Is there a better link?
-Chip
On Tue, Oct 21, 2014 at 12:17 AM, Dan McDonald dan...@omniti.com wrote:
Pardon the delay. I was waiting for one of the new
That did it. Thanks!
-Chip
On Tue, Oct 28, 2014 at 2:24 PM, Dan McDonald dan...@omniti.com wrote:
On Oct 28, 2014, at 3:22 PM, Dan McDonald dan...@omniti.com wrote:
On Oct 28, 2014, at 3:18 PM, Schweiss, Chip c...@innovates.com wrote:
I went to grab the latest Bloody today
I've been regularly using OmniOS on ESXi since r151006 without issue. I'm
stuck trying to install on VMware Workstation 10 on Windows 7 64-bit.
At the Welcome to OmniOS screen it dies after pressing F2 and returns to the
installation menu every time.
I've tried telling VMware it's Solaris 11/64.
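For reference, the guest type can also be forced in the VM's .vmx file instead of through the wizard; a one-line sketch, assuming the Solaris 11 64-bit type is the right match here:

guestOS = "solaris11-64"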
It should work.
Fred
2014-10-28 22:00 GMT+01:00 Schweiss, Chip c...@innovates.com:
I've been regularly using OmniOS on ESXi since r151006 without issue.
I'm stuck trying to install on VMware Workstation 10 on Windows 7 64-bit.
At the Welcome to OmniOS screen it dies after pressing F2
Recently we had a quote from Dell w/ Nexenta. They specify Intel NICs and
LSI HBAs when building with Nexenta.
-Chip
On Mon, Oct 20, 2014 at 10:58 AM, Marion Hakanson hakan...@ohsu.edu wrote:
I don't know if it's true for the R730xd, but Dell will equip your R720xd
with an LSI-branded HBA
On Thu, Oct 9, 2014 at 9:54 PM, Dan McDonald dan...@omniti.com wrote:
On Oct 9, 2014, at 10:23 PM, Schweiss, Chip c...@innovates.com wrote:
Just tried my 2nd system. On r151010, nlockmgr starts after clearing
maintenance mode; on r151012 it will not start at all. nfs/status was enabled
mapped to a different location.
nlockmgr is becoming a real show stopper.
-Chip
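For anyone triaging the same failure, a minimal SMF checklist using the stock service FMRIs (standard svcs/svcadm usage, not a guaranteed fix):

# svcs -xv svc:/network/nfs/nlockmgr:default      # explain why the service is down
# svcadm clear svc:/network/nfs/nlockmgr:default  # take it out of maintenance
# svcadm enable -r svc:/network/nfs/nlockmgr:default  # enable it plus dependencies (nfs/status among them)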
I've done 3 other upgrades to r151012 and none of them had a problem
with nlockmgr...
Kevin
On 10/06/2014 09:56 AM, Schweiss, Chip wrote:
On Mon, Oct 6, 2014 at 9:59 AM, Dan McDonald dan...@omniti.com wrote:
On Oct 6, 2014, at 10:41 AM, Schweiss, Chip c...@innovates.com wrote:
Anyone else seeing this in r151012?
Any tips on collecting better information on this would be appreciated.
I saw this once in 012
Is your RHEL 6.5 client a virtual machine? If so, this message is a red
herring for your problem.
See the VMware KB article:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009996
I run all NFSv4 from CentOS 6.5 to OmniOS, but do not use Kerberos.
The issue I have seen was different. I experienced the same NFSv3 lock
manager failures, but with Linux clients. I switched all mounts except
VMware to NFSv4 and things became MUCH more stable.
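As an illustration of that switch, a Linux client can be pinned to NFSv4 in /etc/fstab; the server name and paths below are placeholders:

omnios-server:/export/data  /mnt/data  nfs4  hard,proto=tcp  0  0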
I still periodically see the NFS server become unresponsive. The server
never crashes or the lock
Ctrl-C. The resulting dump
file can be examined in Wireshark.
-Chip
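The capture recipe that got truncated above is presumably snoop(1M); a sketch of that approach, with the interface and client name as placeholders (stop the capture with Ctrl-C):

# snoop -o /tmp/nfs.cap -d e1000g0 host linuxclient and port 2049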
Thanks!
On Mon, Jun 16, 2014 at 10:08 AM, Schweiss, Chip c...@innovates.com
wrote:
The issue I have seen was different. I experienced the same NFSv3 lock
manager failures, but with Linux clients. I switched all mounts
You may have revealed the cause of a problem I've seen a few times, but
have not made the correlation. In my case we have 100+ CentOS NFS clients
and a periodic use of 2012R2 server connecting via NFS.
I have had a few drop-offs of the NFS server without a single line in the
event logs, just
The 840 Pro doesn't have a super cap, but it does properly honor cache
flushes, which ZFS will do on a log device. This drastically reduces its
write performance and makes it a poor choice for a log device.
Intel has several SATA SSDs with proper super-cap protected caches that
make good log
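For context, adding such a device as a log is a one-liner; a mirrored pair is the usual choice so a failing log device can't lose in-flight synchronous writes (pool and device names below are made up):

# zpool add tank log mirror c4t0d0 c4t1d0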
On Wed, May 28, 2014 at 9:46 AM, Doug Hughes d...@will.to wrote:
Second this. The DC S3700 are very good.
But, I tend to use the Intel 320, which are often available on Amazon for
just over $1/GB up to 600GB. They don't have specs as good as the DC S3700
(which are newer), but they do have
On Wed, May 28, 2014 at 1:55 PM, Dan Swartzendruber dswa...@druber.com wrote:
It looks to me like Sašo's design is active/standby failover. Zpool
import on the standby should obtain a clean transaction group as long
as the originally active system is still not using the pool. The
result
I was looking forward to ZFS bookmarks, but it appears they are not working
yet.
I upgraded one of my test VMs and tried them:
root@ZFSsendTest1:~# zfs snapshot testpool/zfs_send@snap_for_bookmark_test1
root@ZFSsendTest1:~# zfs bookmark testpool/zfs_send@snap_for_bookmark_test1 bookmark#1
cannot
On Wed, May 7, 2014 at 2:17 PM, Schweiss, Chip c...@innovates.com wrote:
I was looking forward to ZFS bookmarks, but it appears they are not
working yet.
I upgraded one of my test VMs and tried them:
root@ZFSsendTest1:~# zfs snapshot testpool/zfs_send@snap_for_bookmark_test1
root
Looks like the documentation needs to be a bit clearer on the syntax. This
worked:
root@ZFSsendTest1:~# zfs bookmark testpool/zfs_send@snap_for_bookmark_test1 testpool/zfs_send#bookmark1
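To round out the example: the point of a bookmark is that the snapshot can then be destroyed yet still serve as the incremental source for zfs send. A sketch continuing the session above; the follow-up snapshot and receive target are hypothetical:

root@ZFSsendTest1:~# zfs destroy testpool/zfs_send@snap_for_bookmark_test1
root@ZFSsendTest1:~# zfs snapshot testpool/zfs_send@next_snap
root@ZFSsendTest1:~# zfs send -i testpool/zfs_send#bookmark1 testpool/zfs_send@next_snap | ssh backuphost zfs recv backup/zfs_send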
On Wed, May 7, 2014 at 2:24 PM, Schweiss, Chip c...@innovates.com wrote:
On Wed, May 7, 2014 at 2:17 PM
On Tue, Apr 22, 2014 at 4:02 PM, Saso Kiselkov skiselkov...@gmail.com wrote:
# sg_write_buffer -v --in=MegalodonES3-SAS-STD-0004.LOD \
--length=1625600 --mode=5 /dev/rdsk/c9t5000C500578F774Bd0
Write buffer cmd: 3b 05 00 00 00 00 18 ce 00 00
ioctl(USCSICMD) failed with os_err (errno) =
server now sees the disks just fine.
There has to be a way to un-retire disks so they can be flashed, but I
have not found such a way.
-Chip
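One avenue worth trying, sketched with standard FMA commands (whether this actually un-retires a disk for flashing is exactly the open question):

# fmadm faulty          # list retired/faulted resources and their event UUIDs
# fmadm repair <uuid>   # mark the resource repaired; <uuid> comes from the faulty output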
On Tue, Apr 22, 2014 at 11:36 AM, Schweiss, Chip c...@innovates.com wrote:
On Tue, Apr 22, 2014 at 11:15 AM, Richard Elling
richard.ell
On Tue, Apr 22, 2014 at 4:36 AM, Saso Kiselkov skiselkov...@gmail.com wrote:
On 4/18/14, 10:49 PM, Schweiss, Chip wrote:
I used Santools, which is a licensed product.
From what I understand lsiutil and sg_write_buffer from sg3_utils can do
it too. The mode for sg_write_buffer may need to be set to 7
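Adapting the earlier invocation to mode 7 (download-with-offsets-and-save, versus mode 5's plain download-and-save) would look like this, reusing the same firmware image and target disk from this thread:

# sg_write_buffer -v --in=MegalodonES3-SAS-STD-0004.LOD \
    --length=1625600 --mode=7 /dev/rdsk/c9t5000C500578F774Bd0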
On Tue, Apr 22, 2014 at 11:15 AM, Richard Elling
richard.ell...@richardelling.com wrote:
On Apr 21, 2014, at 9:19 AM, Schweiss, Chip c...@innovates.com wrote:
I suspect these drives have self-destructed.
Can anyone confirm this firmware issue causes the drives to permanently go
offline?
mpathadm also panics the kernel on OmniOS if there are any offline disks.
Proceed with caution.
On Tue, Apr 22, 2014 at 3:08 PM, Richard Elling
richard.ell...@richardelling.com wrote:
On Apr 22, 2014, at 10:58 AM, Saso Kiselkov skiselkov...@gmail.com
wrote:
On 4/22/14, 5:03 PM, Schweiss
Kiselkov skiselkov...@gmail.com wrote:
On 4/18/14, 10:49 PM, Schweiss, Chip wrote:
I used Santools, which is a licensed product.
From what I understand lsiutil and sg_write_buffer from sg3_utils can do
it too. The mode for sg_write_buffer may need to be set to 7 instead of
5 as stated
I suspect these drives have self-destructed.
Can anyone confirm this firmware issue causes the drives to permanently go
offline?
-Chip
On Mon, Apr 21, 2014 at 8:12 AM, Schweiss, Chip c...@innovates.com wrote:
I have 20 disks that went offline because they reached 40C before I
applied
multipathing and I'm getting them to flash.
-Chip
On Thu, Apr 17, 2014 at 12:49 PM, Saso Kiselkov skiselkov...@gmail.com wrote:
On 4/17/14, 6:27 PM, Schweiss, Chip wrote:
Use the short form of the S/N: Z1Y18H7V
Ok, thanks, didn't know there were two forms... (FMA only prints one).
--
Saso
...@gmail.com wrote:
On 4/18/14, 9:23 PM, Schweiss, Chip wrote:
I've flashed 0004 to some of my Constellations so far. The drives are
now set at a reference temperature of 60C, which is much better than 40C.
I had to disable multipathing to get these disks to flash. I'm not
sure
You can get the Seagate firmwares from this link:
https://apps1.seagate.com/downloads/request.html
Seems they don't link to this on their site anymore; I found it in an old
email from them.
-Chip
On Tue, Apr 15, 2014 at 5:30 PM, Saso Kiselkov skiselkov...@gmail.com wrote:
Hi,
I've
A recent change in the NLM for NFSv3 has exposed a problem with the
firewall on Red Hat/CentOS.
Connections back to the client are blocked by the firewall because the
connection-tracking module does not recognize them as related to the
client's open NFS connections to the server.
I have attempted to
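The usual workaround on RHEL/CentOS 6 is to pin the lockd/statd ports in /etc/sysconfig/nfs and open them in iptables, so the callbacks stop depending on connection tracking; the port numbers below are conventional choices, not required values:

# /etc/sysconfig/nfs
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
STATD_PORT=662

# iptables rules on the client
iptables -A INPUT -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -p udp --dport 662 -j ACCEPT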
I have used LSI HBAs exclusively. Performance and reliability have been
very good.
The only problem I have consistently seen is that if I hot-plug a SAS
expander, with or without disks attached, it will crash the system at least
half the time. I have simply resolved to never do that hot.
I have stuck