Hi.
2010/9/19 R.G. Keen k...@geofex.com
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM. And those cheapo last-gen hardware boxes quite often
Mattias Pantzare wrote:
On Wed, Sep 22, 2010 at 20:15, Markus Kovero markus.kov...@nebula.fi wrote:
Such a configuration was known to cause deadlocks. Even if it works now (which I
don't expect to be the case) it will cause your data to be cached twice. The CPU
utilization will also be
Erik Trimble wrote:
On 9/22/2010 11:15 AM, Markus Kovero wrote:
Such a configuration was known to cause deadlocks. Even if it works
now (which I don't expect to be the case) it will cause your data to
be cached twice. The CPU utilization will also be much higher, etc.
All in all I strongly
Isn't this a matter of not keeping enough free memory as a workspace? By
free memory, I am referring to unallocated memory and also recoverable main
memory used for shrinkable read caches (shrinkable by discarding cached
data). If the system keeps enough free and recoverable memory
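On the double-caching point specifically, there is at least a partial
mitigation. A sketch only; rpool/innervol is a hypothetical name for the zvol
backing the inner pool:

  # Keep the outer pool's ARC from caching the zvol's data blocks, so
  # the data is cached only once, by the inner pool:
  zfs set primarycache=metadata rpool/innervol
  zfs set secondarycache=none rpool/innervol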
What is an example of where a checksummed outside pool would not be able
to protect a non-checksummed inside pool? Would an intermittent
RAM/motherboard/CPU failure that only corrupted the inner pool's block
before it was passed to the outer pool (and did not corrupt the outer
pool's
On 09/23/10 06:33 PM, Alexander Skwar wrote:
Hi.
2010/9/19 R.G. Keen k...@geofex.com
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM.
On 09/23/10 05:00 PM, Carl Brewer wrote:
G'day,
My OpenSolaris (b134) box is low on space and has a ZFS mirror for root:
uname -a
SunOS wattage 5.11 snv_134 i86pc i386 i86pc
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  696G   639G  56.7G  91%  1.09x  ONLINE  -
It's currently a pair of 750GB drives. In my bag I have a
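Assuming the plan is to swap in larger drives, the usual approach is to grow
the mirror in place, one disk at a time. A sketch; the device names are
hypothetical, and root pool disks need an SMI label with the data in slice 0:

  # Attach the first new disk as a third mirror member, let it resilver:
  zpool attach rpool c0t0d0s0 c0t2d0s0
  zpool status rpool                 # wait for the resilver to finish
  # Make the new disk bootable (x86/GRUB):
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  # Drop one old disk, then repeat for the second new disk:
  zpool detach rpool c0t1d0s0
  # The pool only grows once every member is larger and autoexpand is on:
  zpool set autoexpand=on rpool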
Markus Kovero wrote:
What is an example of where a checksummed outside pool would not be able
to protect a non-checksummed inside pool? Would an intermittent
RAM/motherboard/CPU failure that only corrupted the inner pool's block
before it was passed to the outer pool (and did not corrupt the
On Thu, Sep 23, 2010 at 08:48, Haudy Kazemi kaze0...@umn.edu wrote:
Mattias Pantzare wrote:
ZFS needs free memory for writes. If you fill your memory with dirty
data zfs has to flush that data to disk. If that disk is a virtual
disk in zfs on the same computer those writes need more memory
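For anyone trying to picture the configuration in question, it is roughly
this (names hypothetical):

  # A zvol carved out of the outer pool...
  zfs create -V 50G rpool/backingvol
  # ...used as the sole vdev of a second pool on the same host:
  zpool create inner /dev/zvol/dsk/rpool/backingvol
  # Flushing inner's dirty data generates writes to rpool, and those
  # writes themselves need memory -- the loop that can deadlock.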
I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
Note that this is no different from using another OS; the difference is
that ZFS will complain when memory errors lead to disk corruption; without ZFS
you would still have the memory corruption, but you wouldn't know.
Is it helpful not
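The complaining happens via checksum errors, so something like the following
will surface them (standard commands; only the pool name is assumed):

  zpool scrub rpool         # re-read and verify every block's checksum
  zpool status -v rpool     # CKSUM counters plus any damaged file names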
On 23-9-2010 10:25, casper@sun.com wrote:
I'm using ZFS on a system w/o ECC; it works (it's an Atom 230).
I've been using ZFS on a non-ECC machine for years now without any issues.
Never had errors. Plus, like others said, other OSes have the same
problems and also run quite well. If not,
Ok, that doesn't seem to have worked so well ...
I took one of the drives offline, rebooted and it just hangs at the splash
screen after prompting for which BE to boot into.
It gets to
hostname: blah
and just sits there.
Um ...
I read some doco that says:
The boot process can be slow if
It is responding to pings, btw, so *something's* running. Not ssh, though.
Swapping the boot order in the PC's BIOS doesn't help.
Ok, that doesn't seem to have worked so well ...
I took one of the drives offline, rebooted and it just hangs at the
splash screen after prompting for which BE to boot into.
It gets to
hostname: blah
and just sits there.
When you say offline, did you:
- remove the drive
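The distinction matters, since the commands behave quite differently
(sketch; device name hypothetical):

  zpool offline rpool c0t1d0s0  # marked offline, pool still expects it back
  zpool online rpool c0t1d0s0   # the reverse
  zpool detach rpool c0t1d0s0   # permanently split the disk out of the mirror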
On 23/09/2010 11:06 PM, casper@sun.com wrote:
Ok, that doesn't seem to have worked so well ...
I took one of the drives offline, rebooted and it just hangs at the
splash screen after prompting for which BE to boot into.
It gets to
hostname: blah
and just sits there.
When you say
On 09/23/10 03:01, Ian Collins wrote:
So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it safe enough now do use ZFS on
non-ECC-RAM systems (if backups are around)?
It's as safe as running any other OS.
The big difference is ZFS will tell
On Wed, Sep 22, 2010 at 8:13 PM, Richard Elling rich...@nexenta.com wrote:
On Sep 22, 2010, at 1:46 PM, LIC mesh wrote:
Something else is probably causing the slow I/O. What is the output of
iostat -en ? The best answer is all balls (balls == zeros)
Found a number of LUNs with errors
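For reference, the check looks like this:

  iostat -en
  # -e adds the error summary, -n uses descriptive device names.
  # The s/w (soft), h/w (hard), trn (transport) and tot columns
  # should all be zero on a healthy LUN.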
On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote:
We're using ZFS via iSCSI on a S10U8 system. As the ZFS Best
Practices Guide http://j.mp/zfs-bp states, it's advisable to use
redundancy (i.e. RAIDZ, mirroring, or whatnot), even if the underlying
storage does its own RAID thing.
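Concretely, that means something like the following (LUN names hypothetical):

  # Mirror two iSCSI LUNs so ZFS can repair, not just detect, bad blocks:
  zpool create tank mirror c2t1d0 c3t1d0
  # A weaker alternative on a single LUN: two copies of every block,
  # which survives localized corruption but not loss of the whole LUN:
  zfs set copies=2 tank/data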
@ kebabber:
There was a guy doing that: Windows as host and
OpenSolaris as guest with raw access to his disks. He
lost his 12 TB of data. It turned out that VirtualBox
doesn't honor the write flush flag (or something
similar).
That story is in the link I provided, and as has been pointed out
On 23-9-2010 16:34, Frank Middleton wrote:
For home use, used Suns are available at ridiculously low prices and
they seem to be much better engineered than your typical PC. Memory
failures are much more likely than winning the pick 6 lotto...
And which Sun systems are you thinking of
On Sep 23, 2010, at 9:08 AM, Dick Hoogendijk wrote:
On 23-9-2010 16:34, Frank Middleton wrote:
For home use, used Suns are available at ridiculously low prices and
they seem to be much better engineered than your typical PC. Memory
failures are much more
On Thu, September 23, 2010 01:33, Alexander Skwar wrote:
Hi.
2010/9/19 R.G. Keen k...@geofex.com
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have
I should clarify. I was addressing just the issue of
virtualizing, not the complete set of things to do
to prevent data loss.
2010/9/19 R.G. Keen k...@geofex.com
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true
statement?
Yes,
[I'm deleting the whole thread, since this is a rehash of several
discussions on this list previously - check out the archives, and search
for ECC RAM]
These days, for a home server, you really have only one choice to make:
how much do I care about the power this thing uses?
If you are
On Thu, Sep 23, 2010 at 06:58:29AM +, Markus Kovero wrote:
What is an example of where a checksummed outside pool would not be able
to protect a non-checksummed inside pool? Would an intermittent
RAM/motherboard/CPU failure that only corrupted the inner pool's block
before it was
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
| If you don't really care about ultra-low-power, then there's absolutely
| no excuse not to buy a USED server-class machine which is 1- or 2-
| generations back. They're dirt cheap, readily available,
| [snip]
Anyone have
So, I'm still having problems with intermittent hangs on write with my ZFS
pool. Details from my original post are below. Since posting that, I've gone
back and forth with a number of you, and gotten a lot of useful advice, but I'm
still trying to get to the root of the problem so I can
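One thing worth capturing during a hang is per-device service times, e.g.:

  iostat -xn 1
  # A device whose actv stays pinned while asvc_t climbs (or whose
  # counters freeze entirely) is a good suspect for stalling the pool.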
Hi!
2010/9/23 Gary Mills mi...@cc.umanitoba.ca
On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote:
We're using ZFS via iSCSI on a S10U8 system. As the ZFS Best
Practices Guide http://j.mp/zfs-bp states, it's advisable to use
redundancy (i.e. RAIDZ, mirroring, or whatnot),
Folks,
I am a bit confused on the dedup relationship between the filesystem and its
pool.
The dedup property is set on a filesystem, not on the pool.
However, the dedup ratio is reported on the pool and not on the filesystem.
Why is it this way?
Thank you in advance for your help.
Regards,
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a swap
zvol, is giving me a headache...
On
On 09/23/10 15:36, Peter Taps wrote:
I am a bit confused on the dedup relationship between the filesystem and its
pool.
The dedup property is set on a filesystem, not on the pool.
Dedup is a pool-wide concept; blocks from multiple filesystems
may be deduplicated.
However, the dedup ratio is
I believe it goes something like this -
ZFS filesystems with dedupe turned on can be thought of as hippie/socialist
filesystems, wanting to share, etc. Filesystems with dedupe turned off are
a grey Randian landscape where sharing blocks between files is seen as a
weakness/defect. They all
On 2010-Sep-24 00:58:47 +0800, R.G. Keen k...@geofex.com wrote:
That may not be the best of all possible things to do
on a number of levels. But for me, the likelihood of
making a setup or operating mistake on a virtual machine
server far outweighs the hardware cost to put
another
Hi Peter,
Dedupe is pool-wide. File systems can opt in or out of dedupe. So if multiple
file systems are set to dedupe, then they all benefit from using the same pool
of deduped blocks. In this way, if two files share some of the same blocks,
even if they are in different file systems, they
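In command terms (pool and filesystem names hypothetical):

  zfs set dedup=on tank/fs1     # opt a filesystem in
  zfs set dedup=on tank/fs2
  zpool list tank               # the DEDUP column is the pool-wide ratio
  zpool get dedupratio tank     # same number as a pool property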
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to the new
regime, there was a period of time where the distributions were not being
delivered. If I were in your shoes, I would upgrade to OpenIndiana b147 which
has 26 weeks of maturity and bug fixes over b134.
On 09/23/10 04:40 PM, Frank Middleton wrote:
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages), let alone a
On Sep 23, 2010, at 3:40 PM, Frank Middleton wrote:
Bumping this because no one responded. Could this be because
it's such a stupid question no one wants to stoop to answering it,
or because no one knows the answer? Trying to picture, say, what
could happen in /var (say /var/adm/messages),
Timing is everything. Lori's is the authoritative answer and it makes sense, due
to the limitations at boot. Thanks Lori! :-)
-- richard
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Peter Taps
The dedup property is set on a filesystem, not on the pool.
However, the dedup ratio is reported on the pool and not on the
filesystem.
As with most other ZFS concepts, the
Have you tried setting zfs_recover and aok in /etc/system, or setting
them with mdb?
Read how to set them via /etc/system:
http://opensolaris.org/jive/thread.jspa?threadID=114906
mdb debugger
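For completeness, the two routes look roughly like this (use with care;
these are recovery tunables, not everyday settings):

  # In /etc/system, effective after a reboot:
  set zfs:zfs_recover = 1
  set aok = 1
  # Or on the live kernel via mdb:
  echo "zfs_recover/W 1" | mdb -kw
  echo "aok/W 1" | mdb -kw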
On 2010-Sep-24 00:58:47 +0800, R.G. Keen
k...@geofex.com wrote:
But for me, the likelihood of
making a setup or operating mistake on a virtual machine
server far outweighs the hardware cost to put
another physical machine on the ground.
The downsides are generally that it'll