Re: [OmniOS-discuss] compiling mediatomb #error non-amd64 code depends on amd64 privileged header!

2014-05-12 Thread Natxo Asenjo
hi,

Tonight I'll try that; I suppose I should look in the Makefile?

regards,
natxo

--
Groeten,
natxo


On Sun, May 11, 2014 at 11:02 PM, Dan McDonald dan...@omniti.com wrote:

 Okay.  Our headers are 64-bit aware, maybe lose -I/usr/include/amd64 ?

 Dan

 Sent from my iPhone (typos, autocorrect, and all)

  On May 11, 2014, at 3:00 PM, Natxo Asenjo natxo.ase...@gmail.com
 wrote:
 
  hi,
 
  trying to use my home omnios server for streaming media I get this error
 when making mediatomb:
 
 g++ -DHAVE_CONFIG_H -I. -I.. -I../tombupnp/upnp/inc -I../src
 -I../tombupnp/ixml/inc -I../tombupnp/threadutil/inc -I../tombupnp/upnp/inc
 -I.. -I/usr/local/include -D_REENTRANT -pthreads
 -I/usr/include/amd64 -g -O2 -MT libmediatomb_a-action_request.o -MD -MP
 -MF .deps/libmediatomb_a-action_request.Tpo -c -o
 libmediatomb_a-action_request.o `test -f '../src/action_request.cc' || echo
 './'`../src/action_request.cc
  In file included from /usr/include/sys/regset.h:420:0,
   from /usr/include/sys/ucontext.h:36,
   from /usr/include/sys/signal.h:245,
   from /usr/include/sys/procset.h:42,
   from /usr/include/sys/wait.h:43,
   from /usr/include/stdlib.h:40,
   from ../src/memory.h:35,
   from ../src/common.h:36,
   from ../src/action_request.h:36,
   from ../src/action_request.cc:36:
  /usr/include/amd64/sys/privregs.h:42:2: error: #error non-amd64 code
 depends on amd64 privileged header!
   #error non-amd64 code depends on amd64 privileged header!
^
  make[2]: *** [libmediatomb_a-action_request.o] Error 1
  make[2]: Leaving directory `/root/mediatomb-0.12.1/build'
  make[1]: *** [all-recursive] Error 1
  make[1]: Leaving directory `/root/mediatomb-0.12.1'
  make: *** [all] Error 2
 
  There are quite a few hits on Google for this "#error non-amd64 code
 depends on amd64 privileged header!" on OmniOS, but so far I could not
 find a solution.
 
  I am building this on a zone with gcc-4.8.1.
 
  Any help greatly appreciated.
 
  --
  Groeten,
  natxo
  ___
  OmniOS-discuss mailing list
  OmniOS-discuss@lists.omniti.com
  http://lists.omniti.com/mailman/listinfo/omnios-discuss

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] compiling mediatomb #error non-amd64 code depends on amd64 privileged header!

2014-05-12 Thread Lauri Tirkkonen
On Sun, May 11 2014 21:57:26 +0200, Natxo Asenjo wrote:
 # grep m64 Makefile
 CFLAGS = -g -O2 -m64
 
 Or should I do it differently? I am not really sure ...

It depends on the build system of the software you're trying to build.
Your make output above invoked g++, so you likely need to add -m64 to
CXXFLAGS as well.
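
If mediatomb uses the usual autotools setup (which the automake-style
Makefiles in your build output suggest), the cleanest route is probably to
pass the flags at configure time instead of hand-editing the generated
Makefile. Roughly like this -- an untested sketch, assuming the project
honours standard CFLAGS/CXXFLAGS/LDFLAGS:

# from the top of the mediatomb source tree
./configure CFLAGS="-g -O2 -m64" CXXFLAGS="-g -O2 -m64" LDFLAGS="-m64"
gmake

It is also worth checking where -I/usr/include/amd64 is being injected from,
for example:

grep -n 'include/amd64' config.log Makefile */Makefile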

-- 
Lauri Tirkkonen | +358 50 5341376 | lotheac @ IRCnet
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] fmdump help?

2014-05-12 Thread Johan Kragsterman
Hi!

Got some fmdump issues here that I would appreciate someone helping me diagnose.


The system is, as you can see from the fmdump output below, a Dell T5500
workstation, equipped with dual Xeon L5520s with HT enabled and 36 GB of RAM.
The integrated bge NIC on the motherboard is disabled, and I use a quad-port
GbE Intel NIC in the PCI-X slot.

Got the rpool on an Intel SLC SSD on the motherboard's integrated SATA controller.

Got a Dell H200 flashed to IT firmware (LSI 2008); it announces itself as a
Dell 6 Gb HBA and is connected to two Seagate ST4000VN000 drives and a
Samsung 840 EVO SSD as an L2ARC device.



root@omni:~# fmdump -p
TIME UUID SUNW-MSG-ID EVENT
maj 01 13:16:25.9491 bf630a54-1d96-6b2b-e6e9-e3347c1ba7f3 ZFS-8000-D3 Diagnosed
maj 10 21:49:13.8088 431d3b05-328c-4ec2-d83a-f58a006ea156 SUNOS-8000-J0 Diagnosed
maj 10 21:49:14.0433 f0a4a159-daf5-41c9-b948-d68055fb5a48 SUNOS-8000-J0 Diagnosed
maj 10 21:49:14.6796 87a8a141-fa1f-6bed-f25d-b467e130c85d PCIEX-8000-43 Diagnosed




Of these, the last three from May 10 are the ones I'm interested in, and I
chose to display two of them here. One is severity Major, and the other one
is Critical.




I see defect.sunos.eft.unexpected_telemetry, "class and path are incompatible",
and fault.io.pci.bus-linkerr.

Unfortunately, though, I can't tell what they mean.
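
For completeness, this is what I plan to run next to get a summary of the
open cases and the raw ereports behind them (the time-filter format for
fmdump -t is from memory, so treat this as a sketch):

root@omni:~# fmadm faulty                  # summary of the open FMA cases
root@omni:~# fmdump -eV -t 10may14 | less  # underlying ereports since May 10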

root@omni:~# fmdump -V -u 431d3b05-328c-4ec2-d83a-f58a006ea156
TIME   UUID SUNW-MSG-ID
maj 10 2014 21:49:13.808892000 431d3b05-328c-4ec2-d83a-f58a006ea156 
SUNOS-8000-J0

  TIME CLASS ENA
  maj 10 21:47:03.1897 ereport.io.pcix.unex-spl  0x32b407bf59f01001

nvlist version: 0
version = 0x0
class = list.suspect
uuid = 431d3b05-328c-4ec2-d83a-f58a006ea156
code = SUNOS-8000-J0
diag-time = 1399751353 665690
de = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = fmd
authority = (embedded nvlist)
nvlist version: 0
version = 0x0
product-id = Precision-WorkStation-T5500
chassis-id = 17BPY4J
server-id = omni
(end authority)

mod-name = eft
mod-version = 1.16
(end de)

fault-list-sz = 0x2
fault-list = (array of embedded nvlists)
(start fault-list[0])
nvlist version: 0
version = 0x0
class = defect.sunos.eft.unexpected_telemetry
certainty = 0x32
resource = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = hc
hc-root = 
authority = (embedded nvlist)
nvlist version: 0
product-id = Precision-WorkStation-T5500
server-id = omni
chassis-id = 17BPY4J
(end authority)

hc-list-sz = 0x6
hc-list = (array of embedded nvlists)
(start hc-list[0])
nvlist version: 0
hc-name = motherboard
hc-id = 0
(end hc-list[0])
(start hc-list[1])
nvlist version: 0
hc-name = hostbridge
hc-id = 0
(end hc-list[1])
(start hc-list[2])
nvlist version: 0
hc-name = pciexrc
hc-id = 0
(end hc-list[2])
(start hc-list[3])
nvlist version: 0
hc-name = pciexbus
hc-id = 1
(end hc-list[3])
(start hc-list[4])
nvlist version: 0
hc-name = pciexdev
hc-id = 0
(end hc-list[4])
(start hc-list[5])
nvlist version: 0
hc-name = pciexfn
hc-id = 0
(end hc-list[5])

(end resource)

reason = 
ereport.io.pcix.unex-spl@motherboard0/hostbridge0/pciexrc0/pciexbus1/pciexdev0/pciexfn0
 class and path are incompatible
retire = 0
response = 0
asru = (embedded nvlist)
nvlist version: 0
scheme = mod
version = 0x0
mod-id = 86
 

[OmniOS-discuss] Ang: fmdump help?

2014-05-12 Thread Johan Kragsterman
Hi again!


Got some more info about what I wrote last. Is this a hardware problem?


I did some digging in the crash dump with savecore and mdb, and got this:



root@omni:/var/crash/unknown# savecore -f /var/crash/unknown/vmdump.1
savecore: System dump time: Sat May 10 21:47:04 2014

savecore: saving system crash dump in /var/crash/unknown/{unix,vmcore}.1
Constructing namelist /var/crash/unknown/unix.1
Constructing corefile /var/crash/unknown/vmcore.1
 0:41 100% done: 607251 of 607251 pages saved
root@omni:/var/crash/unknown# mdb -k unix.1 vmcore.1
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp 
scsi_vhci zfs sata sd ip hook neti sockfs arp usba uhci stmf stmf_sbd md lofs 
mpt_sas random idm nfs crypto ptm kvm cpc smbsrv ufs logindmux nsmb ]

> ::status
debugging crash dump vmcore.1 (64-bit) from omni
operating system: 5.11 omnios-8c08411 (i86pc)
image uuid: e43a2059-c9b8-e592-b307-f05eafbbe15b
panic message: pcieb-0: PCI(-X) Express Fatal Error. (0x145)
dump content: kernel pages only


> ::stack
vpanic()
pcieb_intr_handler+0x1c9(ff0a1da39830, 0)
av_dispatch_autovect+0x95(49)
dispatch_hardint+0x36(49, 0)
switch_sp_and_call+0x13()
do_interrupt+0xa8(ff0047e9d110, fe03e383e000)
_interrupt+0xba()
htable_lookup+0x73(ff0a08ecce78, fe03e383e000, 1)
htable_getpte+0x58(ff0a08ecce78, fe03e383e000, ff0047e9d2ec, 
ff0047e9d2e0, 1)
htable_getpage+0x30(ff0a08ecce78, fe03e383e000, ff0047e9d34c)
hat_getpfnum+0x71(ff0a08ecce78, fe03e383e000)
kvm_va2pa+0x1b()
mmu_alloc_roots+0xaa()
kvm_mmu_load+0x40()
kvm_mmu_reload+0x18()
vcpu_enter_guest+0x68()
__vcpu_run+0x8b()
kvm_arch_vcpu_ioctl_run+0x112()
kvm_ioctl+0x466()
cdev_ioctl+0x39(1080005, 2000ae80, 0, 202003, ff0a2c4995e8, 
ff0047e9dea8)
spec_ioctl+0x60(ff0a2c875380, 2000ae80, 0, 202003, ff0a2c4995e8, 
ff0047e9dea8) 
fop_ioctl+0x55(ff0a2c875380, 2000ae80, 0, 202003, ff0a2c4995e8, 
ff0047e9dea8)
ioctl+0x9b(d, 2000ae80, 0)
sys_syscall+0x17a()



> ::msgbuf
MESSAGE   
vcpu 7 received sipi with vector # 10
vcpu 6 received sipi with vector # 10
kvm_lapic_reset: vcpu=ff0a38b5a000, id=2, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38b52000, id=3, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38b4a000, id=4, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38ba2000, id=5, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38b92000, id=7, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38b9a000, id=6, base_msr= fee00800 PRIx64 base_address=fee0
unhandled wrmsr: 0x0 data 0
vcpu 1 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b62000, id=1, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 2 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b5a000, id=2, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 3 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b52000, id=3, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 4 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b4a000, id=4, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 5 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38ba2000, id=5, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 6 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b9a000, id=6, base_msr= fee00800 PRIx64 base_address=fee0
vcpu 7 received sipi with vector # 98
kvm_lapic_reset: vcpu=ff0a38b92000, id=7, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38ba2000, id=0, base_msr= fee00100 PRIx64 base_address=fee0
vmcs revision_id = e
kvm_lapic_reset: vcpu=ff0a38b4a000, id=1, base_msr= fee0 PRIx64 base_address=fee0
vmcs revision_id = e
unhandled wrmsr: 0x1010101 data fd7fffdfe870
unhandled wrmsr: 0x1010101 data fd7fffdfe870
unhandled wrmsr: 0xff318d0c data fd7fffdfe840
unhandled wrmsr: 0xff318d0c data fd7fffdfe840
unhandled wrmsr: 0xffdfef38 data 301a4
unhandled wrmsr: 0xffdfef38 data 301a4
vcpu 1 received sipi with vector # 10
kvm_lapic_reset: vcpu=ff0a38b4a000, id=1, base_msr= fee00800 PRIx64 base_address=fee0
unhandled rdmsr: 0x756e6547
unhandled wrmsr: 0x0 data 6c65746e756e6547
vcpu 1 received sipi with vector # 9f
kvm_lapic_reset: vcpu=ff0a38b4a000, id=1, base_msr= fee00800 PRIx64 base_address=fee0
kvm_lapic_reset: vcpu=ff0a38b52000, id=0, base_msr= fee00100 PRIx64 base_address=fee0
vmcs revision_id = e
kvm_lapic_reset: vcpu=ff0a38b5a000, id=1, base_msr= fee0 PRIx64 base_address=fee0
vmcs revision_id = e
kvm_lapic_reset: vcpu=ff0a38b62000, id=2, base_msr= fee0 PRIx64 base_address=fee0
vmcs revision_id = e
kvm_lapic_reset: vcpu=ff0a384e9000, id=3, base_msr= fee0 

Re: [OmniOS-discuss] Ang: fmdump help?

2014-05-12 Thread Johan Kragsterman
Thanks again, Dan!


Some more questions further down...


-----Dan McDonald dan...@omniti.com wrote: -----
To: Johan Kragsterman johan.kragster...@capvert.se
From: Dan McDonald dan...@omniti.com
Date: 2014-05-12 15:46
Cc: OmniOS-discuss@lists.omniti.com omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] Ang: fmdump help?


On May 12, 2014, at 8:46 AM, Johan Kragsterman johan.kragster...@capvert.se 
wrote:



 panic message: pcieb-0: PCI(-X) Express Fatal Error. (0x145)






Does this mean it is the PCI-X bus? And/or a device on that bus? It makes sense
if so, because e1000g3 is on an Intel quad-port PCI-X adapter on the only
PCI-X bus in the system. And I had severe issues with a client connected to
that port. But could a port issue really crash the system? Wouldn't it be more
likely that it is the bus?

First step will be to move the connection from that port to another port on
the same NIC and see if anything changes.

If I still have problems, I'll swap the NIC for a similar one, and if that
doesn't help, I'll put another NIC in a PCIe slot instead.







Those are these flags from pcie_impl.h (viewable in the source; it's not an
installed system header file):

#define PF_ERR_NO_ERROR         (1 << 0) /* No error seen */
#define PF_ERR_NO_PANIC         (1 << 2) /* Error should not panic sys */
#define PF_ERR_PANIC            (1 << 6) /* Error should panic system */
#define PF_ERR_MATCH_DOM        (1 << 9) /* Error Handled By IO domain */

That's a lot of flags set, and all of this flag-setting happens during a fault 
scan of the PCIe bus (see pcie_fault.c, especially starting with 
pf_scan_fabric() and its descendants).
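
For reference, the 0x145 in the panic message is just those flag bits OR'd
together; a quick way to see which bits are set (bits 0, 2, 6 and 8 here,
only some of which are in the list above):

# echo 'obase=2; ibase=16; 145' | bc
101000101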

I'd be inclined to say this is a HW error, especially given that your e1000g3
device complained, as seen here:

NOTICE: e1000g3 link down
NOTICE: vnic1000 link down
NOTICE: e1000g3 link up, 100 Mbps, full duplex
NOTICE: vnic1000 link up, 100 Mbps, unknown duplex
NOTICE: SUNW-MSG-ID: SUNOS-8000-0G, TYPE: Error, VER: 1, SEVERITY: Major

Dan


Rgrds Johan



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Ang: fmdump help?

2014-05-12 Thread Dan McDonald

On May 12, 2014, at 11:06 AM, Johan Kragsterman johan.kragster...@capvert.se 
wrote:

 Thanks again, Dan!
 
 
 Some more questions further down...
 
 
 
 
 Does this mean it is the PCI-X bus? And/or a device on that bus? It makes 
 sense if so, because the e1000g3 is on an Intel quad port PCI-X adapter on 
 the only PCI-X bus on the system. And I had severe issues with a client 
 connected to that port. But could a port issue really crash the system? 
 Wouldn't it be more likely that it is the bus?

The error message originates from the pcieb (PCI-E bus controller):

161 f8077000   4440 228   1  pcieb (PCIe bridge/switch driver)

and yes it's likely the bus, as that message/panic happens after a bus scan.  I 
indicated e1000g3 so you could maybe see if the slot it was in was bad.

 First step will be that I'll change the connections to that port to another 
 port on the same nic, and see if it'll be some changes.
 
 If I still got problems, I'll change the nic to a similar, and if that 
 doesn't help, I put another nic on a PCIe-bus instead.
 

That's what I'd do.

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Ang: fmdump help?

2014-05-12 Thread Johan Kragsterman


-----Dan McDonald dan...@omniti.com wrote: -----
To: Johan Kragsterman johan.kragster...@capvert.se
From: Dan McDonald dan...@omniti.com
Date: 2014-05-12 17:15
Cc: OmniOS-discuss@lists.omniti.com omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] Ang: fmdump help?

On May 12, 2014, at 11:06 AM, Johan Kragsterman johan.kragster...@capvert.se 
wrote:

 Thanks again, Dan!
 
 
 Some more questions further down...
 
 
 
 
 Does this mean it is the PCI-X bus? And/or a device on that bus? It makes 
 sense if so, because the e1000g3 is on an Intel quad port PCI-X adapter on 
 the only PCI-X bus on the system. And I had severe issues with a client 
 connected to that port. But could a port issue really crash the system? 
 Wouldn't it be more likely that it is the bus?

The error message originates from the pcieb (PCI-E bus controller):

161 f8077000   4440 228   1  pcieb (PCIe bridge/switch driver)

and yes it's likely the bus, as that message/panic happens after a bus scan.  I 
indicated e1000g3 so you could maybe see if the slot it was in was bad.

 First step will be that I'll change the connections to that port to another 
 port on the same nic, and see if it'll be some changes.
 
 If I still got problems, I'll change the nic to a similar, and if that 
 doesn't help, I put another nic on a PCIe-bus instead.
 

That's what I'd do.

Dan




The NIC is on a PCI-X bus, not a PCIe bus. All NIC ports in the system are on
that PCI-X NIC; there is no NIC on PCIe. Does that mean that e1000g3 had
nothing to do with the problem, and that the problem must be on a PCIe
bus/device?

If so, I can rule out the NIC and concentrate on other devices/buses.

The only adapters that are in PCIe slots/buses are the SAS controller and the
graphics adapter. Or perhaps the integrated SATA controller is on a PCIe bus
as well...
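
To be sure, I'll check what sits where in the device tree; something like this
should do it (flags and driver names from memory, so treat it as a sketch):

root@omni:~# prtconf -D | less   # pcieb = PCIe bridge, pci_pci = conventional PCI/PCI-X bridge
root@omni:~# prtdiag | less      # slot listing, if the SMBIOS data is usable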

I actually have two more of these T5500s, so I could easily switch to another
one if I needed to.





___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Ang: fmdump help?

2014-05-12 Thread Dan McDonald
I'm not sure if that code is common to PCI-X as well.  After all, the printf 
message mentions PCI-X (but maybe as a typo)?

And interrupts from PCI-X may still sabotage PCIe.  I'd continue to focus on 
that NIC for starters (and save the dumps if you've the disk space).
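
A sketch of how I'd make sure the dumps keep getting saved (dumpadm options
from memory; adjust the directory to taste):

# dumpadm -y -s /var/crash/omni   # enable savecore on boot, set the save directory
# dumpadm                         # verify the current settings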

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] differences in booting process for KVM VMs

2014-05-12 Thread Johan Kragsterman
Hi!

I just have a question about why the console output looks so different when
booting different KVM VMs.

When I boot a pfSense VM, with a single socket, single core, two threads,
2 GB of memory and 4 vnics, it shows this:



root@omni:/# /usr/bin/vmpfsense.sh

qemu-system-x86_64: -net 
vnic,vlan=0,name=net0,ifname=pfwan0,macaddr=2:8:20:62:62:61: vnic dhcp disabled

qemu-system-x86_64: -net 
vnic,vlan=1,name=net1,ifname=pflan0,macaddr=2:8:20:a4:87:27: vnic dhcp disabled

qemu-system-x86_64: -net 
vnic,vlan=2,name=net2,ifname=pftlout0,macaddr=2:8:20:a2:d6:6d: vnic dhcp 
disabled

qemu-system-x86_64: -net 
vnic,vlan=3,name=net3,ifname=pftlin0,macaddr=2:8:20:3f:d2:7f: vnic dhcp disabled

Started VM: PfSense2.13
VNC available at: host IP 127.0.0.1
192.168.255.8
0.0.0.0
0.0.0.0
0.0.0.0
0.0.0.0
0.0.0.0
0.0.0.0
::1/128
::/0
::/0
::/0
::/0
::/0
::/0
::/0 port 5902
QEMU Monitor, do: # telnet localhost 7002. Note: use Control ] to exit monitor 
before quit!

root@omni:/#



And it stops, returning to the prompt.




But when I boot an Ubuntu server with 2 sockets, 2 cores, two threads, 16 GB
of memory and 3 vnics, it shows this:




root@omni:/# /usr/bin/vmedubuntu.sh

qemu-system-x86_64: -net 
vnic,vlan=0,name=net0,ifname=ltsp0,macaddr=2:8:20:15:30:bc: vnic dhcp disabled

qemu-system-x86_64: -net 
vnic,vlan=1,name=net1,ifname=ltsp1,macaddr=2:8:20:83:d2:c3: vnic dhcp disabled

qemu-system-x86_64: -net 
vnic,vlan=2,name=net2,ifname=ltsp2,macaddr=2:8:20:ec:a4:57: vnic dhcp disabled

Start bios (version 0.6.1.2-20110201_165504-titi)
Ram Size=0xe000 (0x00032000 high)
CPU Mhz=2261
PCI: pci_bios_init_bus_rec bus = 0x0
PIIX3/PIIX4 init: elcr=00 0c
PCI: bus=0 devfn=0x00: vendor_id=0x8086 device_id=0x1237
PCI: bus=0 devfn=0x08: vendor_id=0x8086 device_id=0x7000
PCI: bus=0 devfn=0x09: vendor_id=0x8086 device_id=0x7010
region 4: 0xc000
PCI: bus=0 devfn=0x0b: vendor_id=0x8086 device_id=0x7113
PCI: bus=0 devfn=0x10: vendor_id=0x1013 device_id=0x00b8
region 0: 0xf000
region 1: 0xf200
region 6: 0xf201
PCI: bus=0 devfn=0x18: vendor_id=0x1af4 device_id=0x1000
region 0: 0xc020
region 1: 0xf202
region 6: 0xf203
PCI: bus=0 devfn=0x20: vendor_id=0x1af4 device_id=0x1000
region 0: 0xc040
region 1: 0xf204
region 6: 0xf205
PCI: bus=0 devfn=0x28: vendor_id=0x1af4 device_id=0x1000
region 0: 0xc060
region 1: 0xf206
region 6: 0xf207
PCI: bus=0 devfn=0x30: vendor_id=0x1af4 device_id=0x1001
region 0: 0xc080
region 1: 0xf208
Found 8 cpu(s) max supported 8 cpu(s)
MP table addr=0x000fdbd0 MPC table addr=0x000fdbe0 size=260
SMBIOS ptr=0x000fdbb0 table=0xdd90
ACPI tables: RSDP=0x000fdb80 RSDT=0xdfffd810
Scan for VGA option rom
Running option rom at c000:0003
VGABios $Id$
Turning on vga text mode console
SeaBIOS (version 0.6.1.2-20110201_165504-titi)

Found 1 lpt ports
Found 1 serial ports
ATA controller 0 at 1f0/3f4/0 (irq 14 dev 9)
ATA controller 1 at 170/374/0 (irq 15 dev 9)
found virtio-blk at 0:6
ebda moved from 9fc00 to 9dc00
drive 0x000fdb30: PCHS=16383/16/63 translation=lba LCHS=1024/255/63 s=838860800
ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD
PS2 keyboard initialized
All threads complete.
Scan for option roms
Running option rom at c900:0003
pnp call arg1=60
pmm call arg1=0
pmm call arg1=2
pmm call arg1=0
Running option rom at c980:0003
pnp call arg1=60
pmm call arg1=0
pmm call arg1=2
pmm call arg1=0
pmm call arg1=2
pmm call arg1=0
Running option rom at ca00:0003
pnp call arg1=60
pmm call arg1=0
pmm call arg1=2
pmm call arg1=0
pmm call arg1=2
pmm call arg1=0
Running option rom at ca80:0003
Returned 53248 bytes of ZoneHigh
e820 map has 8 items:
  0:  - 0009dc00 = 1
  1: 0009dc00 - 000a = 2
  2: 000f - 0010 = 2
  3: 0010 - dfffd000 = 1
  4: dfffd000 - e000 = 2
  5: feffc000 - ff00 = 2
  6: fffc - 0001 = 2
  7: 0001 - 00042000 = 1
enter handle_19:
  NULL
Booting from Hard Disk...
Booting from :7c00


And it stops without a prompt, which means I can do a Ctrl-C to stop the
process?

Kinda strange differences, imho...
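
I guess the first thing to compare is the two wrapper scripts themselves;
something like this should show whether one of them daemonizes or redirects
the console/serial output differently (just a sketch):

root@omni:/# diff /usr/bin/vmpfsense.sh /usr/bin/vmedubuntu.sh
root@omni:/# egrep -n 'daemonize|serial|vnc|monitor' /usr/bin/vmpfsense.sh /usr/bin/vmedubuntu.sh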


Rgrds Johan


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] ZFS and Usage-creep

2014-05-12 Thread Tim Brown
I have a question about ZFS usage and how to predictably allocate
space.  I have scoured the web trying to get a good answer, but have
yet to find one.

I am advertising 2 TB datastores to our VMware cluster over Fibre
Channel using COMSTAR.  I use this command to create the dataset:

zfs create -V 2047g vmpool01/datastores/ds01

It all works great, but some of my datasets are using far more than the
2047g (more than double in one case).  Here are some examples:

zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
...
vmpool01  27.4T  20.7T   469K  /vmpool01
vmpool01/datastores   27.4T  20.7T   384K  /vmpool01/datastores
vmpool01/datastores/ds01  3.10T  20.7T  3.10T  -
vmpool01/datastores/ds02  2.06T  21.4T  1.34T  -
vmpool01/datastores/ds03  2.69T  20.7T  2.69T  -
vmpool01/datastores/ds04  2.49T  20.7T  2.49T  -
vmpool01/datastores/ds05  3.69T  20.7T  3.69T  -
vmpool01/datastores/ds06  4.67T  20.7T  4.67T  -
vmpool01/datastores/ds07  2.47T  20.7T  2.47T  -
vmpool01/datastores/ds08  2.06T  20.8T  1.92T  -
...

Can someone explain this to me or is there a document somewhere that
can tell me how to predict the usage?  Thanks.
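
In case it helps, the per-volume accounting can presumably be broken down
with something like the following (these are standard ZFS properties; shown
only as a sketch of what I'd check per zvol):

# zfs get volsize,volblocksize,refreservation,used,usedbydataset,usedbysnapshots,usedbyrefreservation vmpool01/datastores/ds01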

- Tim
-- 


Tim Brown
Network System Manager
Muskegon Area ISD
http://www.muskegonisd.org
231-767-7237

Always be yourself.
Unless you can be a pirate.
Then always be a pirate.
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] Comstar Disconnects under high load.

2014-05-12 Thread David Bomba
Hi guys,

We have ~10 OmniOS-powered ZFS storage arrays used to drive virtual machines 
under XenServer + VMware, using an InfiniBand interconnect.

Our usual recipe is to use either LSI HBAs or Areca cards in pass-through mode 
with internal SAS drives.

This has worked flawlessly with OmniOS 006/008.

Recently we deployed a slightly different configuration

HP DL380 G6
64GB ram
X5650 proc
LSI 9208-e card
HP MDS 600 / SSA 70 external enclosure
30 TOSHIBA-MK2001TRKB-1001-1.82TB SAS2 drives in mirrored configuration.

Despite the following message in dmesg, the array appeared to be working as 
expected:

scsi: [ID 365881 kern.info] /pci@0,0/pci8086,340f@8/pci1000,30b0@0 (mpt_sas1):
May 13 04:01:07 s6  Log info 0x3114 received for target 11.

Despite this message we pushed it into production, and whilst the performance 
of the array has been good, as soon as we perform heavy write IO, performance 
goes from 22k IOPS down to 100 IOPS. This causes the target to disconnect from 
the hypervisors, and general mayhem ensues for the VMs.

During this period where performance degrades, there are no other messages 
coming into dmesg.

Where should we begin to debug this? Could this be a symptom of not enough RAM? 
We have flashed the LSI cards to the latest firmware with no change in 
performance. 
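
Our rough plan for the next window is to watch the disks and the FMA telemetry
while reproducing the heavy writes; something like the following (stock
illumos commands, sketch only):

# iostat -xnz 5           # per-device service times / queueing during the stall
# fmdump -eV | tail -100  # any transport/driver ereports logged around that time
# echo ::arc | mdb -k     # sanity-check ARC size against the 64 GB of RAM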

Thanks in advance!
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Comstar Disconnects under high load.

2014-05-12 Thread Dan McDonald

On May 12, 2014, at 6:13 PM, David Bomba turbo...@gmail.com wrote:

 Hi guys,
 
 We have ~ 10 OmniOS powered ZFS storage arrays used to drive Virtual Machines 
 under XenServer + VMWare using Infiniband interconnect.
 
 Our usual recipe is to use either LSI HBA or Areca Cards in pass through mode 
 using internal drives SAS drives..
 
 This has worked flawlessly with Omnios 6/8.
 
 Recently we deployed a slightly different configuration
 
 HP DL380 G6
 64GB ram
 X5650 proc
 LSI 9208-e card
 HP MDS 600 / SSA 70 external enclosure
 30 TOSHIBA-MK2001TRKB-1001-1.82TB SAS2 drives in mirrored configuration.

1.) What was your previous configuration?  And running 006 or 008?

2.) What is running on your new HW?  006 or 008?

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Comstar Disconnects under high load.

2014-05-12 Thread David Bomba
Previous configurations were

HP DL180 G6
96GB RAM
Areca 1882-ix using passthrough
25x600Gb Toshiba MBF2600RC SAS 10k drives in mirrored config
L5640 Proc

We have 4 of these using a mixture of 006 and 008

New hardware runs 008.

On 13/05/2014, at 8:23 AM, Dan McDonald wrote:

 
 On May 12, 2014, at 6:13 PM, David Bomba turbo...@gmail.com wrote:
 
 Hi guys,
 
 We have ~ 10 OmniOS powered ZFS storage arrays used to drive Virtual 
 Machines under XenServer + VMWare using Infiniband interconnect.
 
 Our usual recipe is to use either LSI HBA or Areca Cards in pass through 
 mode using internal drives SAS drives..
 
 This has worked flawlessly with Omnios 6/8.
 
 Recently we deployed a slightly different configuration
 
 HP DL380 G6
 64GB ram
 X5650 proc
 LSI 9208-e card
 HP MDS 600 / SSA 70 external enclosure
 30 TOSHIBA-MK2001TRKB-1001-1.82TB SAS2 drives in mirrored configuration.
 
 1.) What was your previous configuration?  And running 006 or 008?
 
 2.) What is running on your new HW?  006 or 008?
 
 Dan
 

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Comstar Disconnects under high load.

2014-05-12 Thread Dan McDonald

On May 12, 2014, at 6:34 PM, David Bomba turbo...@gmail.com wrote:

 Previous configurations were
 
 HP DL180 G6
 96GB RAM
 Areca 1882-ix using passthrough
 25x600Gb Toshiba MBF2600RC SAS 10k drives in mirrored config
 L5640 Proc
 
 We have 4 of these using a mixture of 006 and 008
 
 New hardware runs 008.

Hmmm, and the same kind of load works on your older boxes?!?

One other thing -- 010 is out now.  You may wish to upgrade your newest HW to 
our newest stable release.
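
The rough sequence would be something like this (sketch only; take the exact
r151010 repository URL from the release notes rather than from my memory):

# pkg set-publisher -G '*' -g <r151010 repository URL> omnios
# pkg update -v
# init 6    # reboot into the new boot environment pkg update creates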

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Comstar Disconnects under high load.

2014-05-12 Thread David Bomba
Hi Narayan,

We do not use iSER.

We use SRP for VMWare, and IPoIB for XenServer.

In our case, our VMs operate as expected. However, it is when copying data
between storage repos that we see the disconnects, irrespective of SCSI
transport.


On 13 May 2014 09:32, Narayan Desai narayan.de...@gmail.com wrote:

 Are you perchance using iscsi/iSER? We've seen similar timeouts that don't
 seem to correspond to hardware issues. From what we can tell, something
 causes iscsi heartbeats not to be processed, so the client eventually times
 out the block device and tries to reinitialize it.

 In our case, we're running VMs using KVM on linux hosts. The guest detects
 block device death, and won't recover without a reboot.

 FWIW, switching to iscsi directly over IPoIB works great for identical
 workloads. We've seen this with 151006 and I think 151008. We've not yet
 tried it with 151010. This smells like some problem in comstar's iscsi/iser
 driver.
  -nld


 On Mon, May 12, 2014 at 5:13 PM, David Bomba turbo...@gmail.com wrote:

 Hi guys,

 We have ~ 10 OmniOS powered ZFS storage arrays used to drive Virtual
 Machines under XenServer + VMWare using Infiniband interconnect.

 Our usual recipe is to use either LSI HBA or Areca Cards in pass through
 mode using internal drives SAS drives..

 This has worked flawlessly with Omnios 6/8.

 Recently we deployed a slightly different configuration

 HP DL380 G6
 64GB ram
 X5650 proc
 LSI 9208-e card
 HP MDS 600 / SSA 70 external enclosure
 30 TOSHIBA-MK2001TRKB-1001-1.82TB SAS2 drives in mirrored configuration.

 despite the following message in dmesg the array appeared to be working
 as expected

 scsi: [ID 365881 kern.info] /pci@0,0/pci8086,340f@8/pci1000,30b0@0(mpt_sas1):
 May 13 04:01:07 s6  Log info 0x3114 received for target 11.

 Despite this message we pushed into production and whilst the performance
 of the array has been good, as soon as we perform high write IO performance
 goes from 22k IOPS down to 100IOPS, this causes the target to disconnect
 from hypervisors and general mayhem ensues for the VMs.\

 During this period where performance degrades, there are no other
 messages coming into dmesg.

 Where should we begin to debug this? Could this be a symptom of not
 enough RAM? We have flashed the LSI cards to the latest firmware with no
 change in performance.

 Thanks in advance!
 ___
 OmniOS-discuss mailing list
 OmniOS-discuss@lists.omniti.com
 http://lists.omniti.com/mailman/listinfo/omnios-discuss



___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss