I have no idea what the problem is, but it is worth noting that last
time I checked, Oracle's storage arrays were running Solaris and COMSTAR.
Mike
On Mon, 2013-06-10 at 20:36 -0400, Heinrich van Riel wrote:
Spoke too soon; it died again.
Giving up. Just posting the result in case someone else runs into the same problem.
I don't think they just throw a vanilla copy of the OS on vanilla hardware
for that. Like all storage providers they will have a set of specifics
around the drivers/OS and firmware, down to the disk level/model in most cases.
We don't have access to that tested interoperability matrix, and I am sure
Just want to provide an update here.
Installed Solaris 11.1 and reconfigured everything. Went back to the Emulex card
since it is dual port, for connecting to both switches. Same problem; well,
the link does not fail, but it is writing at 20k/s.
I am really not sure what to do anymore other than to accept
Switched to the QLogic adapter using Solaris 11.1. Problem resolved, well,
for now. Not as fast as OI with the Emulex adapter; perhaps it is the older
pool/fs version, since I want to keep my options open for now. I am getting
around 200MB/s when cloning. At least backups can run for now. Getting a
Changing max-xfer-size causes the link to stay up and no problems are
reported from stmf.
# Memory_model      max-xfer-size
#
# Small             131072 - 339968
# Medium            339969 - 688128
# Large             688129
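To be concrete about what changing max-xfer-size means here: the table above comes from the emlxs driver configuration, and the knob is set in emlxs.conf. A rough sketch only; the value is just an example picked from the ranges above, and the path is the usual location rather than necessarily the one on this box:

# /kernel/drv/emlxs.conf  (usual location; adjust for your install)
# Cap the transfer size so the driver uses the Medium memory model.
max-xfer-size=339968;

A reboot is the safe way to make sure the already-attached ports pick up the new value.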
I took a look at every server that I knew I could power down or that is
slated for removal in the future, and I found a QLogic adapter not in use.
HBA Port WWN: 211b3280b
Port Mode: Target
Port ID: 12000
OS Device Name: Not Applicable
Manufacturer: QLogic Corp.
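In case anyone needs it: a QLogic port normally ends up in Target mode like that by rebinding it from the initiator driver (qlc) to the COMSTAR target driver (qlt). Roughly the following; the pciex alias is only an example and depends on the exact card, so check prtconf -pv for the real one:

update_drv -d -i '"pciex1077,2432"' qlc
update_drv -a -i '"pciex1077,2432"' qlt

Then, after a reboot, confirm the port mode and that stmf sees the target:

fcinfo hba-port | grep -i 'port mode'
stmfadm list-target -v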
On 2013-06-07 14:09, Edward Ned Harvey (openindiana) wrote:
From: Heinrich van Riel [mailto:heinrich.vanr...@gmail.com]
I will post my findings, but it might take some time to fix the network; in the
meantime they will have to deal with 1Gbps for the storage. The request is
to run ~90 VMs on 8 servers connected.
From: Jim Klimov [mailto:jimkli...@cos.ru]
With 90 VMs on 8 servers, being served ZFS iSCSI storage by 4x 1Gb
Ethernet in LACP, you're really not going to care about any one VM being
able to go above 1Gbit. Because it's going to be so busy all the time,
that the 4 LACP bonded ports
Thank you for all the information. Ordered the SAS SSD.
I somewhat got tired of iSCSI and the networking stuff around it and went
to good ol' FC. Some hypervisors will still use iSCSI.
Speed is OK.
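On the SSD: assuming it is meant as a separate log (ZIL) device for the pool, which is my read of it, adding it is a one-liner. The pool and disk names below are placeholders:

zpool add tank log c0t5000C500ABCD1234d0
zpool status tank

The second command should show the SSD under its own "logs" section.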
Comment below
On 2013-06-07 20:42, Heinrich van Riel wrote:
One sec apart, cloning a 150GB VM from a datastore on EMC to OI.
alloc   free   read  write   read  write
-----  -----  -----  -----  -----  -----
 309G  54.2T     81     48   452K  1.34M
 309G  54.2T      0  8.17K      0   258M
 310G  54.2T      0  16.3K      0   510M
 310G  54.2T      0      0      0      0
 310G
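Those columns look like the standard one-second zpool iostat output: capacity alloc/free, then read/write operations, then read/write bandwidth. Assuming a pool called tank (placeholder, the name is not in the excerpt), the command would be:

zpool iostat tank 1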
In the debug info I see thousands of the following events:
FROM STMF:0149225: abort_task_offline called for LPORT: lport abort timed out
FROM STMF:0149225: abort_task_offline called for LPORT: lport abort timed out
FROM STMF:0149225: abort_task_offline called for LPORT: lport abort timed out
New card, different PCIe slot (removed the other one), different FC switch
(same model with same code), older HBA firmware (2.72a2) = same result.
On the setting changes: when it boots it complains that this option does
not exist: szfs_txg_synctime.
The changes still allowed for a constant write,
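About that boot complaint: ZFS tunables in /etc/system take the set zfs:<name>=<value> form, and naming a variable the running kernel does not export gives exactly that "not defined" warning at boot. Presumably the intended name was zfs_txg_synctime, which only older builds had; newer builds renamed it to zfs_txg_synctime_ms. A sketch only, with the common default value:

* /etc/system: ZFS module tunables use the zfs: prefix
set zfs:zfs_txg_synctime_ms = 1000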
On 05/06/2013 23:52, Heinrich van Riel wrote:
Any pointers around iSCSI performance focused on read speed? Did not find
much.
I have 2 x rz2 of 10x 3TB NL-SAS each in the pool. The OI server has 4
interfaces configured to the switch in LACP, mtu=9000. The switch (jumbo
enabled) shows all
80 to 100MB/s is very low, too low. How big are the files? Due to the iSCSI
caching/compressing mechanism, speeds of 200MB/s are reachable, even over 100Mb
lines.
But I saw this week on our OpenNAS server that the ZFS iSCSI dropped to 300Kb/s
when I tried to save 8 VMs of 500GB in total. The
Any pointers around iSCSI performance focused on read speed? Did not find
much.
I have 2 x rz2 of 10x 3TB NL-SAS each in the pool. The OI server has 4
interfaces configured to the switch in LACP, mtu=9000. The switch (jumbo
enabled) shows all interfaces are active in the port channel. How can I
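For checking the aggregation side of a setup like that, the usual OI commands are dladm based; aggr0 below is a placeholder for the actual aggregation name:

dladm show-aggr -L
dladm show-aggr -x
dladm show-linkprop -p mtu aggr0

show-aggr -L shows the LACP state per port, -x the extended per-port status, and the linkprop check confirms the 9000 MTU actually took effect on the aggregation.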
On Thursday, June 06, 2013 06:52 AM, Heinrich van Riel wrote:
Any pointers around iSCSI performance focused on read speed? Did not find
much.
I have 2 x rz2 of 10x 3TB NL-SAS each in the pool. The OI server has 4
interfaces configured to the switch in LACP, mtu=9000. The switch (jumbo
enabled)
On 2013-06-06 00:52, Heinrich van Riel wrote:
Any pointers around iSCSI performance focused on read speed? Did not find
much.
I have 2 x rz2 of 10x 3TB NL-SAS each in the pool. The OI server has 4
interfaces configured to the switch in LACP, mtu=9000. The switch (jumbo
enabled) shows all