Seems like this issue only occurs when MSI-X interrupts are enabled
for the BCM5709 chips, or am I reading it wrong?
If I type 'echo ::interrupts | mdb -k' and filter for the
network-related bits, I get the following output:
IRQ Vect IPL Bus Trg Type CPU Share APIC/INT# ISR(s)
36
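A minimal way to filter that output for the Broadcom NIC, assuming the BCM5709 is attached via the bnx driver (an assumption; check the ISR(s) column for the actual driver name), would be:

# Show only interrupt-table rows handled by the bnx (Broadcom NetXtreme II) driver
echo ::interrupts | mdb -k | grep bnx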
On Tue, Apr 06, 2010 at 05:22:25PM -0700, Carson Gaspar wrote:
I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to
California. That should be more than large enough for my ZIL @home,
based on zilstat.
Transcend sells an 8 GByte SLC SSD for about 70 EUR. The specs
are not
Hi list,
If you're running Solaris proper, you'd better mirror your
ZIL log device.
...
I plan to get to test this as well; it won't be until
late next week though.
Running OSOL nv130. Powered off the machine, removed the F20, and powered back on.
Machine boots OK and comes up normally with the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jeroen Roodhart
If you're running Solaris proper, you'd better mirror your
ZIL log device.
...
I plan to get to test this as well; it won't be until
late next week though.
Running
On 7 Apr 2010, at 14.28, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jeroen Roodhart
If you're running Solaris proper, you'd better mirror your
ZIL log device.
...
I plan to get to test this as well; it won't
On 07/04/2010 13:58, Ragnar Sundblad wrote:
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
For a file server, mail server, etc., where things are stored
and supposed to be available later, you
On Wed, 7 Apr 2010, Ragnar Sundblad wrote:
So the recommendation for zpool <19 would be: *strongly* recommended.
Mirror your log device if you care about using your pool.
And the recommendation for zpool >=19 would be ... don't mirror your log
device. If you have more than one, just add them
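For reference, the two configurations being contrasted look roughly like this (pool and device names are illustrative):

# Mirrored slog: the pool survives the loss of one log device
zpool add tank log mirror c1t2d0 c1t3d0
# Two independent (striped) slogs: more log bandwidth, no redundancy
zpool add tank log c1t2d0 c1t3d0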
On 07/04/2010 15:35, Bob Friesenhahn wrote:
On Wed, 7 Apr 2010, Ragnar Sundblad wrote:
So the recommendation for zpool <19 would be: *strongly* recommended.
Mirror your log device if you care about using your pool.
And the recommendation for zpool >=19 would be ... don't mirror your
log
On Wed, 7 Apr 2010, Robert Milkowski wrote:
it is only read at boot if there is uncommitted data on it - during normal
reboots zfs won't read data from the slog.
How does zfs know if there is uncommitted data on the slog device
without reading it? The minimal read would be quite small, but it
I finally decided to get rid of my Windows XP partition as I rarely used it
except to fire it up to install OS updates and virus signatures. I had some
trouble locating information on how to do this so I thought I'd document it
here.
My system is Toshiba Tecra M9. It had four partitions on
On 04/07/10 09:19, Bob Friesenhahn wrote:
On Wed, 7 Apr 2010, Robert Milkowski wrote:
it is only read at boot if there is uncommitted data on it - during
normal reboots zfs won't read data from the slog.
How does zfs know if there is uncommitted data on the slog device
without reading it? The
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
If you have a system crash, *and* a failed log device at the same time, this
is an important
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
It is also worth pointing out that in normal operation the slog is
essentially a write-only device which is only read at boot time. The
writes are assumed to work if the
On 07.04.2010 18:05, Ron Marshall wrote:
I finally decided to get rid of my Windows XP partition as I rarely used it
except to fire it up to install OS updates and virus signatures. I had some
trouble locating information on how to do this so I thought I'd document it
here.
My system is
On 04/07/10 10:18, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
It is also worth pointing out that in normal operation the slog is
essentially a write-only device which is only read at boot time.
On Wed, 7 Apr 2010, Neil Perrin wrote:
There have previously been suggestions to read slogs periodically. I
don't know if there's a CR raised for this though.
Roch wrote up CR 6938883, "Need to exercise read from slog dynamically".
Regards,
markm
On Wed, 7 Apr 2010, Edward Ned Harvey wrote:
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
If you have a system crash, *and* a failed log device
On Wed, 7 Apr 2010, Edward Ned Harvey wrote:
BTW, does the system *ever* read from the log device during normal
operation? Such as perhaps during a scrub? It really would be nice to
detect, in advance, the failure of log devices that claim to write
correctly but are really
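A quick way to at least see *reported* log-device errors (this won't catch the silent-failure case raised here; the pool name is illustrative) is the standard status check:

# The "logs" section of the output lists slog devices and their error counters
zpool status -v tank
# Print only pools that are reporting problems
zpool status -x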
I have been searching this forum and just about every ZFS document I can find
trying to find the answer to my questions. But I believe the answer I am
looking for is not going to be documented and is probably best learned from
experience.
This is my first time playing around with OpenSolaris
On Apr 7, 2010, at 10:19 AM, Bob Friesenhahn wrote:
On Wed, 7 Apr 2010, Edward Ned Harvey wrote:
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10 10/09
release.
See the output below.
Thanks,
Cindy
# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/test@now
# zfs send -R tank/test@now | zfs
On Wed, 7 Apr 2010, Jason S wrote:
To keep the pool size at 12TB I would have to give up my extra
parity drive going to this 2-array setup, and it is concerning as I
have no room for hot spares in this system. So in my mind I am left
with only one other choice, and this is going to 2 x RAIDZ2
On Wed, 2010-04-07 at 10:40 -0700, Jason S wrote:
I have been searching this forum and just about every ZFS document I can find
trying to find the answer to my questions. But I believe the answer I am
looking for is not going to be documented and is probably best learned from
experience.
jr == Jeroen Roodhart j.r.roodh...@uva.nl writes:
jr Running OSOL nv130. Powered off the machine, removed the F20, and
jr powered back on. Machine boots OK and comes up normally with
jr the following message in 'zpool status':
yeah, but try it again and this time put rpool on the F20 as
On Wed, 7 Apr 2010, Erik Trimble wrote:
One thing Richard or Bob might be able to answer better is the tradeoff
between getting a cheap/small SSD for L2ARC and buying more RAM. That
is, I don't have a good feel for whether (for your normal usage case)
it would be better to get 8 GB more RAM,
Thank you for the replies, guys!
I was actually already planning to get another 4 gigs of RAM for the box right
away anyway, but thank you for mentioning it! As there appears to be a couple of
ways to skin the cat here, I think I am going to try both a 14-spindle RAIDZ2
and 2 x 7 RAIDZ2
On 7 apr 2010, at 18.13, Edward Ned Harvey wrote:
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Rather: ... >=19 would be ... if you don't mind losing data written in
the ~30 seconds before the crash, you don't have to mirror your log
device.
If you have a system crash, *and* a failed
Hello,
More for my own edification than to help Jason (sorry Jason!) I would like to
clarify something. If read performance is paramount, am I correct in thinking
RAIDZ is not the best way to go? Would not the ZFS equivalent of RAID 10
(striped mirror sets) offer better read performance? In
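For the archive, the ZFS equivalent of RAID 10 is simply a pool built from mirror vdevs; a minimal sketch with illustrative device names:

# Striped mirrors ("RAID 10"): reads can be serviced by either side of each mirror
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0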
GreenBytes (USA) sells OpenSolaris based storage appliances
Web site: www.getgreenbytes.com
On 04/ 7/10 03:09 PM, Jason S wrote:
I was actually already planning to get another 4 gigs of RAM for the
box right away anyway, but thank you for mentioning it! As there
appears to be a couple of ways to skin the cat here, I think I am going
to try both a 14-spindle RAIDZ2 and 2 x 7 RAIDZ2
On Wed, 7 Apr 2010, Chris Dunbar wrote:
More for my own edification than to help Jason (sorry Jason!) I
would like to clarify something. If read performance is paramount,
am I correct in thinking RAIDZ is not the best way to go? Would not
the ZFS equivalent of RAID 10 (striped mirror sets)
On Wed, Apr 7, 2010 at 12:09 PM, Jason S j.sin...@shaw.ca wrote:
I was actually already planning to get another 4 gigs of RAM for the box
right away anyway, but thank you for mentioning it! As there appears to be a
couple of ways to skin the cat here, I think I am going to try both a
14-spindle
On Wed, Apr 7, 2010 at 12:29 PM, Frank Middleton
f.middle...@apogeect.comwrote:
On 04/ 7/10 03:09 PM, Jason S wrote:
I was actually already planning to get another 4 gigs of RAM for the
box right away anyway, but thank you for mentioning it! As there
appears to be a couple of ways to skin the
Ahh,
Thank you for the reply, Bob; that is the info I was after. It looks like I will
be going with the 2 x 7 RAIDZ2 option.
And just to clarify: as far as expanding this pool in the future, my only option
is to add another 7-spindle RAIDZ2 array, correct?
Thanks for all the help, guys!
On Wed, Apr 7, 2010 at 2:20 PM, Jeremy Archer j4rc...@gmail.com wrote:
GreenBytes (USA) sells OpenSolaris based storage appliances
Web site: www.getgreenbytes.com
Unless something has changed recently, they were using their own
I am booting from a single 74 GB WD Raptor attached to the motherboard's onboard
SATA port.
On Wed, Apr 7 at 12:41, Jason S wrote:
And just to clarify as far as expanding this pool in the future my
only option is to add another 7 spindle RaidZ2 array correct?
That is correct, unless you want to use the -f option to force-allow
an asymmetric expansion of your pool.
--eric
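A sketch of both expansion paths mentioned above (pool and device names are illustrative):

# Matching expansion: add a second 7-disk raidz2 vdev
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
# Asymmetric expansion (different vdev type or width) requires -f to force
zpool add -f tank mirror c3t0d0 c3t1d0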
On Wed, 2010-04-07 at 12:41 -0700, Jason S wrote:
Ahh,
Thank you for the reply, Bob; that is the info I was after. It looks like I
will be going with the 2 x 7 RAIDZ2 option.
And just to clarify: as far as expanding this pool in the future, my only
option is to add another 7-spindle
I worked around the problem by first creating a filesystem of the same name
with compression=gzip on the target server. Like this:
zfs create sas/archive
zfs set compression=gzip sas/archive
Then I used zfs receive with the -F option:
zfs send -vR promise1/archive@daily.1 | zfs receive -F
Freddie,
Now you have brought up another question :) I had always assumed that I would
just use OpenSolaris for this file server build, as I had not actually done
any research regarding other operating systems that support ZFS. Does anyone
have any advice as to whether I should be
The receive side is running build 111b (2009.06), so I'm not sure if your
advice actually applies to my situation.
Daniel Bakken
On Tue, Apr 6, 2010 at 10:57 PM, Tom Erickson thomas.erick...@oracle.comwrote:
After build 128, locally set properties override received properties, and
this
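A quick way to see where a property's current value comes from (the SOURCE column distinguishes local, inherited, and default values; the dataset name is illustrative):

# Check whether compression was set locally or arrived some other way
zfs get -o name,property,value,source compression sas/archive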
I just bought a new set of disks, and want to move my primary data
store over to the new disks. I created a new pool fine, and now I'm
trying to use zfs send -R | zfs receive to transfer the data. Here's
the error I got:
$ pfexec zfs send -Rpv h...@next | pfexec zfs receive -duvF temp
sending
On Wed, Apr 7, 2010 at 1:22 PM, Jason S j.sin...@shaw.ca wrote:
now you have brought up another question :) I had always assumed that I
would just use OpenSolaris for this file server build, as I had not
actually done any research regarding other operating systems that support
ZFS. Does
On Wed, 7 Apr 2010, Jason S wrote:
systems that support ZFS. Does anyone have any advice as to whether I
should be considering FreeBSD instead of OpenSolaris? Both
operating systems are somewhat foreign to me as I come from the
FreeBSD zfs does clearly work, although it is an older version
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested that I look into
www.nexenta.org and I must say it is quite interesting. Someone correct me if I
am wrong, but it looks like it is OpenSolaris-based and has basically
On Wed, Apr 7, 2010 at 1:32 PM, Will Murnane will.murn...@gmail.com wrote:
This process took about 12 hours to do, so it's frustrating that
(apparently) snapshots disappearing causes the replication to fail.
Perhaps some sort of locking should be implemented to prevent
snapshots that will be
On Wed, Apr 7, 2010 at 17:51, Brandon High bh...@freaks.com wrote:
On Wed, Apr 7, 2010 at 1:32 PM, Will Murnane will.murn...@gmail.com wrote:
This process took about 12 hours to do, so it's frustrating that
(apparently) snapshots disappearing causes the replication to fail.
Perhaps some sort
On Wednesday, April 7, 2010, Jason S j.sin...@shaw.ca wrote:
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested that I look into
www.nexenta.org and I must say it is quite interesting. Someone correct me if
I am
On Apr 7, 2010, at 16:47, Bob Friesenhahn wrote:
Solaris 10's Live Upgrade (and the OpenSolaris equivalent) is quite
valuable in that it allows you to upgrade the OS without more than a
few minutes of down-time and with a quick fall-back if things don't
work as expected.
It is more
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname = promise1/archive@daily.1
nvlist
On Apr 7, 2010, at 3:24 PM, Tim Cook wrote:
On Wednesday, April 7, 2010, Jason S j.sin...@shaw.ca wrote:
Since I already have OpenSolaris installed on the box, I probably won't jump
over to FreeBSD. However, someone has suggested that I look into
www.nexenta.org and I must say it is quite
On Wed, 7 Apr 2010, David Magda wrote:
It is more straightforward to update a FreeBSD install from source code
because that is the way it is normally delivered. Sometimes this is useful
in order to incorporate a fix as soon as possible without needing to wait
for someone to produce
On Wed, Apr 7, 2010 at 4:27 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 7 Apr 2010, David Magda wrote:
It is more straightforward to update a FreeBSD install from source code
because that is the way it is normally delivered. Sometimes this is useful
in order to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chris Dunbar
like to clarify something. If read performance is paramount, am I
correct in thinking RAIDZ is not the best way to go? Would not the ZFS
equivalent of RAID 10 (striped mirror
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to look
into the multi-platform pkgsrc for third-party open source software:
http://www.pkgsrc.org/
We have found the problem. The mountpoint property on the sender was at one
time changed from the default, then later changed back to defaults using zfs
set instead of zfs inherit. Therefore, zfs send included these local
non-default properties in the stream, even though the local properties are
Daniel Bakken wrote:
When I send a filesystem with compression=gzip to another server with
compression=on, compression=gzip is not set on the received filesystem.
I am using:
zfs send -R promise1/archive@daily.1 | zfs receive -vd sas
The zfs manpage says regarding the -R flag: When received,
Daniel Bakken wrote:
The receive side is running build 111b (2009.06), so I'm not sure if
your advice actually applies to my situation.
The advice regarding received vs local properties definitely does not
apply. You could still confirm the presence of the compression property
in the send
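One way to confirm that without actually receiving the stream (a sketch; zstreamdump reads the stream on stdin):

# Dump the properties embedded in the replication stream and look for compression
zfs send -R promise1/archive@daily.1 | zstreamdump -v | grep -i compress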
On Apr 7, 2010, at 5:06 PM, Will Murnane wrote:
This is on b134:
$ pfexec pkg image-update
No updates available for this image.
There is a zfs hold command available, but checking for holds on the
snapshot I'm trying to send (I started it again, to see if disabling
automatic snapshots
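For completeness, checking a snapshot for user holds looks like this (the snapshot name is illustrative):

# List holds placed with "zfs hold"; a held snapshot cannot be destroyed
zfs holds tank/fs@snap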
Daniel Bakken wrote:
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname =
Daniel Bakken wrote:
We have found the problem. The mountpoint property on the sender was at
one time changed from the default, then later changed back to defaults
using zfs set instead of zfs inherit. Therefore, zfs send included these
local non-default properties in the stream, even though
On Apr 7, 2010, at 19:58, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to
look
into the multi-platform pkgsrc for third-party open
On Wed, Apr 7, 2010 at 5:59 PM, Richard Elling richard.ell...@gmail.comwrote:
On Apr 7, 2010, at 3:24 PM, Tim Cook wrote:
On Wednesday, April 7, 2010, Jason S j.sin...@shaw.ca wrote:
Since I already have OpenSolaris installed on the box, I probably won't
jump over to FreeBSD. However
On Wed, Apr 7, 2010 at 4:58 PM, Edward Ned Harvey solar...@nedharvey.comwrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to look
into the multi-platform
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones. You will run out of space
eventually regardless; this way you can replace 7 at a time, not 14 at
a time. With luck, each replacement will last you long enough that
the next replacement will
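The grow-in-place path sketched above looks roughly like this (assuming a build with the autoexpand pool property; names are illustrative):

# Allow vdevs to grow once every disk in them has been replaced with a larger one
zpool set autoexpand=on tank
# Replace one disk at a time, letting each resilver finish before the next
zpool replace tank c2t0d0 c4t0d0
zpool status tank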