c...@innovates.com
Sent: Wednesday, October 14, 2015 2:44 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] ZIL TXG commits happen very frequently - why?
It all has to do with the write throttle and buffers filling. Here's a great
blog post on how it
Hi all.
Wondering if anyone could shed some light on why my ZFS pool would perform TXG
commits up to 5 times per second. It's set to the default 5-second interval, and
occasionally it does wait 5 seconds between commits, but only when nearly idle.
I'm not sure if this impacts my performance but
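The early-commit behavior asked about above can be sketched numerically: a txg is synced when either the timeout expires or enough dirty data has accumulated, whichever comes first. The threshold and write-rate figures below are made-up assumptions for illustration only, not measured values or real tunable defaults:

```shell
# Sketch (assumed numbers): effective txg commit interval is roughly
# min(zfs_txg_timeout, dirty-data threshold / incoming write rate),
# because a txg is forced out early once enough dirty data accumulates.
txg_timeout=5                          # seconds (the default interval)
dirty_threshold=$((64 * 1024 * 1024))  # bytes that trigger an early sync (assumption)
write_rate=$((320 * 1024 * 1024))      # incoming bytes/sec under load (assumption)

result=$(awk -v t="$txg_timeout" -v d="$dirty_threshold" -v r="$write_rate" 'BEGIN {
  fill = d / r                         # seconds to fill the dirty-data threshold
  interval = (fill < t) ? fill : t
  printf "commit every %.1f s (~%.0f commits per second)\n", interval, 1 / interval
}')
echo "$result"
```

With these example numbers the threshold fills in 0.2 s, giving roughly 5 commits per second under load, while at near-idle the 5-second timer fires first — which matches the behavior described.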
Same issue here around two months ago when an L2ARC device failed… failmode was
the default and the device was actually an mSATA SSD mounted in a PCI-E mSATA card:
http://www.addonics.com/products/ad4mspx2.php and the disk was one of four of
these
root@zfs10:/root# uname -a
SunOS zfs10 5.11 omnios-10b9c79 i86pc i386 i86pc
Any idea how I can troubleshoot further?
br,
Rune
From: Dan McDonald dan...@omniti.com
Sent: Monday, April 20, 2015 3:58 AM
To: Rune Tipsmark
Cc: omnios-discuss; Dan McDonald
hi guys,
my omnios zfs server crashed today and I got a complete core dump and I was
wondering if I am on the right track...
here is what I did so far...
root@zfs10:/root# fmdump -Vp -u
775e0fc1-dcd2-4cb2-b800-88a1b9910f94
TIME UUID
about it the more I lean towards SM having an issue... and
Dell uses essentially SM so same same.
br,
Rune
From: Johan Kragsterman johan.kragster...@capvert.se
Sent: Saturday, March 7, 2015 4:24 PM
To: Rune Tipsmark
Cc: 'Nate Smith'; 'Richard Elling
No idea, to be honest. Even if there is, it's scary that it can cause these kinds of
problems…
Br,
Rune
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Nate Smith
Sent: Friday, March 06, 2015 8:57 AM
To: 'Richard Elling'
Cc: omnios-discuss@lists.omniti.com
Subject:
-Original Message-
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Thursday, March 05, 2015 12:12 PM
To: Rune Tipsmark
Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com
Subject: Ang: RE: RE: Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?
-Rune Tipsmark
To: Rune Tipsmark
Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com
Subject: Ang: RE: Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?
Hi!
-Rune Tipsmark r...@steait.net wrote: -
To: 'Johan Kragsterman' johan.kragster...@capvert.se
From: Rune Tipsmark r...@steait.net
Date: 2015-03
Sent: Thursday, March 05, 2015 8:10 AM
To: Rune Tipsmark; omnios-discuss@lists.omniti.com
Subject: RE: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?
Do you see the same problem with Windows and iSCSI as an initiator? I wish
there was a way to turn up debugging to figure this out.
From: Rune Tipsmark
Same problem here… have noticed I can cause this easily by using Windows as
initiator… I cannot cause this using VMware as initiator…
No idea how to fix, but a big problem.
Br,
Rune
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Nate Smith
Sent: Thursday,
Please see below
-Original Message-
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Thursday, March 05, 2015 9:00 AM
To: Rune Tipsmark
Cc: 'Nate Smith'; omnios-discuss@lists.omniti.com
Subject: Ang: Re: [OmniOS-discuss] QLE2652 I/O Disconnect. Heat Sinks?
Hi
hi all,
I found an entry about zil_slog_limit here:
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII
it basically explains how writes larger than 1 MB by default hit the main pool
rather than my slog device - I could not find much further information nor the
equivalent
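For reference, on illumos builds of that era the cutoff described above was the zil_slog_limit kernel tunable (since removed from ZFS), which could in principle be raised in /etc/system. This is only a sketch; the 8 MB value is an arbitrary example, not a recommendation, and a reboot is needed for /etc/system changes to take effect:

```
* /etc/system sketch - assumes your kernel still exposes zil_slog_limit
* (later ZFS code dropped it). Raises the slog cutoff from the 1 MB
* default to 8 MB so larger synchronous writes still go to the slog.
set zfs:zil_slog_limit=0x800000
```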
From: Richard Elling richard.ell...@richardelling.com
Sent: Thursday, February 19, 2015 1:27 AM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] ZFS Slog - force all writes to go to Slog
On Feb 18, 2015, at 12:04 PM, Rune
hi all,
I got some major problems... when using Windows and Fibre Channel I am able to
kill my ZFS box totally for at least 15 minutes... it simply drops all
connections to all hosts connected via FC. This happens under load, for example
doing backups writing to the ZFS, running IO Meter
hi all, I am just writing some scripts to gather performance data from
iostat... or at least trying... I would like to completely skip iostat's first
since-boot output and get right to the interval I specified, with data current
for that interval. Is this possible at all?
last $varInterval seconds;
echo 0 disk_latency_${tokens[$i]} ms=${tokens[$i-3]} ${tokens[$i-3]} ms response time average last $varInterval seconds
done
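One way to skip that first report (a sketch, since iostat itself always emits the since-boot summary first): filter the stream and start printing at the second report header. The sample text below stands in for real `iostat -xn` output, and the exact header wording is an assumption that may differ between builds:

```shell
# Drop iostat's first (since-boot) report: on illumos each report begins
# with an "extended device statistics" header, so print only from the
# second header onward. Sample input stands in for `iostat -xn 5` output.
skip_first='/extended device statistics/ { n++ } n > 1'

sample='                    extended device statistics
    r/s    w/s   device
  900.0  400.0   c14d0
                    extended device statistics
    r/s    w/s   device
    7.0    3.0   c14d0'

filtered=$(printf '%s\n' "$sample" | awk "$skip_first")
printf '%s\n' "$filtered"
```

Live use would then be something like `iostat -xn $varInterval | awk '/extended device statistics/ { n++ } n > 1'` feeding the loop above.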
From: OmniOS-discuss omnios-discuss-boun...@lists.omniti.com on behalf of
Rune Tipsmark r...@steait.net
Sent
I would be able to help test whether it's stable in my environment as well. I can't
program though.
br,
Rune
From: OmniOS-discuss omnios-discuss-boun...@lists.omniti.com on behalf of W
Verb wver...@gmail.com
Sent: Tuesday, January 20, 2015 3:59 AM
To:
hi all,
just in case there are other people out there using their ZFS box against
vSphere 5.1 or later... I found my storage vMotions were slow... really slow...
not much info available and so after a while of trial and error I found a nice
combo that works very well in terms of performance,
I found out the feature is already enabled; I guess destroying very large
snapshots just takes a very long time regardless...
br,
Rune
From: OmniOS-discuss omnios-discuss-boun...@lists.omniti.com on behalf of
Rune Tipsmark r...@steait.net
Sent: Monday
,
Rune
From: Dan McDonald dan...@omniti.com
Sent: Monday, December 22, 2014 8:07 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] mount/create volume lu from snapshot
On Dec 22, 2014, at 1:59 PM, Rune Tipsmark r
where do you check that?
br,
Rune
From: Mark mark0...@gmail.com
Sent: Monday, December 15, 2014 7:19 AM
To: Rune Tipsmark; omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] Fibre Target problems
On 15/12/2014 4:44 a.m., Rune Tipsmark wrote
hi all,
got a new system I was intending on using as backup repository. Whenever dedup
is enabled it dies after anywhere between 5 and 30 minutes. I need to reboot
OmniOS to get it back online.
the files being copied onto the ZFS vols are rather large, about 2 TB each...
if I copy smaller
hi all,
All my vSphere (ESXi5.1) hosts experience a big spike in latency every hour or
so.
I tested on Infiniband iSER and SRP and also 4Gbit FC and 8GBit FC. All exhibit
the same behavior, so I don't think it's the connection that is causing this.
When I modify the arc_shrink_shift 10 (192GB
Infiniband as well... I am leaning towards something with
the SuperMicro hardware but can't really pinpoint it.
br,
Rune
From: Dan McDonald dan...@omniti.com
Sent: Thursday, December 11, 2014 11:39 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject
still same... output can be seen here:
http://i.imgur.com/BuwaGGn.png
From: Dan McDonald dan...@omniti.com
Sent: Thursday, December 11, 2014 11:39 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] hangs on reboot
Nothing
...@acm.org
Sent: Thursday, December 11, 2014 11:32 PM
To: Rune Tipsmark; omnios-discuss@lists.omniti.com
Subject: RE: [OmniOS-discuss] hangs on reboot
Rune Tipsmark
Sent: Thursday, December 11, 2014 2:26 PM
I've got a bunch (3) of OmniOS installations on SuperMicro hardware and all 3
have issues
Hi guys,
Does anyone know if Active/Active and Round Robin is supported from vSphere
towards OmniOS ZFS on Fiber Channel?
Br
Rune
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
I moved one of my PCI-E IOdrives and the disks changed from c14d0 and c15d0 to
c16d0 and c17d0
How do I change it back so I can get my pool back online?
32. c16d0 Unknown-Unknown-0001-298.02GB
/pci@79,0/pci8086,3c02@1/pci10b5,8616@0/pci10b5,8616@5/pci103c,178e@0
33. c17d0
AM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] need to change c17d0 to c15d0
Put the drive/hba back and then export your pool (zpool export pool-name). Then
move the drive/hba and import the pool (zpool import pool-name).
If putting the drive/hba back isn't
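The procedure above as a dry-run sketch — the pool name "pool02" is just an example, and the run() wrapper only echoes the commands so nothing here touches a real pool; swap its body for "$@" to execute for real:

```shell
# Dry-run of the export/move/import sequence described above. ZFS identifies
# pool members by their on-disk labels, not by cNdN paths, so exporting and
# re-importing after moving the drive/HBA re-resolves the new device names.
pool=pool02
run() { echo "+ $*"; }    # prints instead of executing; replace body with "$@" to run for real

run zpool export "$pool"
# ...physically move the drive/HBA here...
run zpool import "$pool"
```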
0 cannot open
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Wednesday, November 19, 2014 7:00 AM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] need to change c17d0 to c15d0
On Wed, 19 Nov 2014, Rune
zpool destroy pool02 and then zpool import -f pool02 worked.
Br,
Rune
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Wednesday, November 19, 2014 7:24 AM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: RE: [OmniOS-discuss] need
Well that sucks... I guess one more reason to move to NVDIMMs to replace slow
SLC cards.
Br,
Rune
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Friday, November 14, 2014 6:48 AM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re
,
Rune
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Rune Tipsmark
Sent: Friday, November 14, 2014 9:47 AM
To: Bob Friesenhahn
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] slog limits write speed more than
stripes and get the speed I
want.
Copying ~60GB from one LUN to another on same ZFS box.
Sync=Always:
[inline screenshot: image001.png]
Sync=Disabled:
[inline screenshot: image002.png]
-Original Message-
From: Rune Tipsmark
Sent: Friday, November 14, 2014 11:53 AM
To: Rune
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] infiniband
On Mon, 10 Nov 2014 04:38:23 +
Rune Tipsmark r...@steait.net wrote:
What network throughput were you looking at before the tweaking?
Raised my performance from 5.2 Gbps to 7.9 Gbps (about a 50% increase)
--
Regards
http://www.fusionio.com/products/iodrive
160GB SLC
-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Wednesday, November 12, 2014 3:17 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] slog limits write speed more
-boun...@lists.omniti.com] On Behalf
Of Michael Rasmussen
Sent: Wednesday, November 12, 2014 3:48 PM
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] infiniband
How exactly do you configure this?
Is a switch required?
On Wed, 12 Nov 2014 23:19:24 +
Rune Tipsmark r...@steait.net
Subject: Re: [OmniOS-discuss] infiniband
On Thu, 13 Nov 2014 00:22:40 +
Rune Tipsmark r...@steait.net wrote:
ipadm create-addr -T static -a 10.98.0.10 p.ibp0/ipv4
ipadm create-addr -T static -a 10.99.0.10 p.ibp1/ipv4
ipadm create-addr -T static -a 10.98.0.12 p.ibp2/ipv4
unsupported options?
Br,
Rune
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Rune Tipsmark
Sent: Friday, October 10, 2014 1:58 PM
To: Richard Elling
Cc: omnios-discuss
Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing
-discuss-boun...@lists.omniti.com] On Behalf
Of Michael Rasmussen
Sent: Monday, November 10, 2014 3:47 PM
To: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] No space left on device - upgrade failed
On Mon, 10 Nov 2014 23:32:14 +
Rune Tipsmark r...@steait.net wrote:
root@zfs00
What network throughput were you looking at before the tweaking?
Br,
Rune
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Michael Rasmussen
Sent: Sunday, November 09, 2014 5:21 PM
To: omnios-discuss@lists.omniti.com
Subject: Re:
That was a secondary thought, maybe worth testing one day.
Primarily I was looking at a way of speeding up zfs send-recv.
Guess it's a no go on a single HCA...
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Monday, November 03, 2014 12:41 AM
To: Rune Tipsmark
Cc: omnios
http://www.ssec.wisc.edu/~scottn/Lustre_ZFS_notes/lustre_zfs_srp_mirror.html
Br,
Rune
From: David Bomba [mailto:turbo...@gmail.com]
Sent: Saturday, November 01, 2014 6:01 PM
To: Rune Tipsmark
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled
):
-Original Message-
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Sunday, November 02, 2014 9:56 AM
To: Rune Tipsmark
Cc: David Bomba; omnios-discuss@lists.omniti.com
Subject: Ang: Re: [OmniOS-discuss] zfs send via SRP or other RDMA enabled
protocol
-OmniOS-discuss omnios
I know, but how do I initiate a session from ZFS10?
Br,
Rune
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Sunday, November 02, 2014 10:33 AM
To: Rune Tipsmark
Cc: David Bomba; omnios-discuss@lists.omniti.com
Subject: Ang: RE: Re: [OmniOS-discuss] zfs send via SRP or other
it at the same time at say 1.50 ratio, will the pool show 100 MB/sec and the
client write 75 MB/sec actual?
Br,
Rune
-Original Message-
From: Richard Elling [mailto:richard.ell...@richardelling.com]
Sent: Sunday, November 02, 2014 6:07 PM
To: Rune Tipsmark
Cc: Eric Sproul; omnios-discuss
ConnectX-2 and drivers are loaded; both OmniOS servers have LUNs I can access
from both ESX and Windows... it's just the connection between them that I can't
figure out.
Br,
Rune
From: Johan Kragsterman [mailto:johan.kragster...@capvert.se]
Sent: Sunday, November 02, 2014 10:49 PM
To: Rune Tipsmark
Cc
Hi all,
Is it possible to do zfs send/recv via SRP or some other RDMA-enabled protocol?
IPoIB is really slow, about 50 MB/sec between two boxes; no disks are more than
10-15% busy.
If not, is there a way I can aggregate say 8 or 16 IPoIB partitions and push
throughput to a more reasonable
Hi all,
Hope someone can help me get this pool running as it should. I am seeing
something like 200-300 MB/sec max, which is much, much less than I want to see...
11 mirrored vdevs... 2 spares and 2 slog devices, 192 GB RAM in the host...
Why is this pool showing near 100% busy when the underlying
]
Sent: Friday, October 31, 2014 9:03 AM
To: Eric Sproul
Cc: Rune Tipsmark; omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10%
On Oct 31, 2014, at 7:14 AM, Eric Sproul eric.spr...@circonus.com wrote:
On Fri, Oct 31, 2014 at 2:33 AM, Rune Tipsmark r
-discuss-boun...@lists.omniti.com] On Behalf
Of Rune Tipsmark
Sent: Friday, October 31, 2014 12:38 PM
To: Richard Elling; Eric Sproul
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] zfs pool 100% busy, disks less than 10%
Ok, makes sense.
What other kind of indicators can I look
]
Sent: Friday, October 10, 2014 10:01 AM
To: Rune Tipsmark
Cc: Dan McDonald; omnios-discuss
Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all
files
On Oct 9, 2014, at 4:58 PM, Rune Tipsmark r...@steait.net wrote:
Just updated to latest version r151012
Still same... I
On OmniOS v11 r151010
-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Thursday, October 09, 2014 11:11 AM
To: Rune Tipsmark
Cc: Richard Elling; Filip Marvan; omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing
So if I just upgrade to latest it should be supported?
Rune
-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Thursday, October 09, 2014 11:37 AM
To: Rune Tipsmark
Cc: Richard Elling; Filip Marvan; omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] ZFS
Is there a command I can run to check?
Rune
-Original Message-
From: Dan McDonald [mailto:dan...@omniti.com]
Sent: Thursday, October 09, 2014 11:51 AM
To: Rune Tipsmark
Cc: omnios-discuss
Subject: Re: [OmniOS-discuss] ZFS pool allocation remains after removing all
files
On Oct 9
vdev_mirror_shift = 0x15
zfs_vdev_aggregation_limit = 0x2
Rune
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On Behalf
Of Rune Tipsmark
Sent: Thursday, October 09, 2014 3:33 PM
To: Dan McDonald
Cc: omnios-discuss
Subject: Re: [OmniOS-discuss] ZFS pool
the same, easy to reproduce.
Googled forever to find anything, but nothing.
Does anyone have any idea? I don't really want to abandon ZFS just yet.
Best regards,
Rune Tipsmark