Hi, I'm asking for opinions here: are there any possible disasters or
performance issues related to the setup described below?
The point is to create a large pool and smaller pools within it, where you can
easily monitor IOPS and bandwidth usage without using DTrace or similar
techniques.
1. Create pool
#
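Presumably the setup looks something like the sketch below (pool and device
names are invented here, and note the replies later in this thread that warn
against building pools on local zvols):

  # zpool create bigpool mirror c1t0d0 c1t1d0
  # zfs create -V 500G bigpool/vol1
  # zpool create smallpool1 /dev/zvol/dsk/bigpool/vol1
  # zpool iostat -v smallpool1 5

The last command reports per-pool IOPS and bandwidth every 5 seconds, which is
the monitoring the setup is after, without DTrace.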
Now this is a testament to the power of ZFS. Only ZFS is sensitive enough to
detect and report these errors to you. Had you run another filesystem, you
would never have gotten a notice that your data was slowly being corrupted by
some faulty hardware.
:o)
There was a guy doing that: Windows as host and OpenSolaris as guest with raw
access to his disks. He lost 12 TB of data. It turned out that VirtualBox
doesn't honor the write flush flag (or something similar).
In other words, I would never ever do that. Your data is safer with Windows
only and
Did you see this thread?
http://opensolaris.org/jive/thread.jspa?messageID=500659#500659
He had problems with ZFS. It turned out to be faulty RAM. ZFS is so sensitive
it detects and reports problems to you. No other filesystem does that, so you
think ZFS is problematic and switch. But the other
On Wed, Sep 22, 2010 at 07:14:43AM -0700, Orvar Korvar wrote:
There was a guy doing that: Windows as host and OpenSolaris as guest
with raw access to his disks. He lost his 12 TB data. It turned out
that VirtualBox doesn't honor the write flush flag (or something
similar).
VirtualBox has an
On Wed, Sep 22, 2010 at 02:06:27PM, Markus Kovero wrote:
Hi, I'm asking for opinions here: are there any possible disasters or
performance issues related to the setup described below?
The point is to create a large pool and smaller pools within it, where you can
easily monitor IOPS and bandwidth
Hi all, I have a ZFS question related to COW and scope.
If user A is reading a file while user B is writing to the same file,
when do the changes introduced by user B become visible to everyone?
Is there a block level scope, or file level, or something else?
Thanks!
Folks,
While going through zpool source code, I see a configuration option called
l2cache. What is this option for? It doesn't seem to be documented.
Thank you in advance for your help.
Regards,
Peter
Such a configuration was known to cause deadlocks. Even if it works now (which
I don't expect to be the case), it will cause your data to be cached twice.
The CPU utilization will also be much higher, etc.
All in all, I strongly recommend against such a setup.
--
Pawel Jakub Dawidek
On 09/22/10 11:22, Moazam Raja wrote:
Hi all, I have a ZFS question related to COW and scope.
If user A is reading a file while user B is writing to the same file,
when do the changes introduced by user B become visible to everyone?
Is there a block level scope, or file level, or something
On 09/22/10 11:23, Peter Taps wrote:
Folks,
While going through zpool source code, I see a configuration option called
l2cache. What is this option for? It doesn't seem to be documented.
Thank you in advance for your help.
Regards,
Peter
man zpool, under the Cache Devices section.
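For reference, at the command level the user-visible vdev type is plain
"cache"; attaching one looks like this (device name below is hypothetical):

  # zpool add tank cache c4t0d0
  # zpool iostat -v tank

The -v output lists the cache device in its own section, so its I/O can be
watched separately from the main vdevs.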
On 9/22/2010 11:15 AM, Markus Kovero wrote:
Such a configuration was known to cause deadlocks. Even if it works now (which I
don't expect to be the case), it will cause your data to be cached twice. The CPU
utilization will also be much higher, etc.
All in all, I strongly recommend against such
On Wed, Sep 22, 2010 at 12:30:58PM -0600, Neil Perrin wrote:
On 09/22/10 11:22, Moazam Raja wrote:
Hi all, I have a ZFS question related to COW and scope.
If user A is reading a file while user B is writing to the same file,
when do the changes introduced by user B become visible to
Actually, the mechanics of local pools inside pools are significantly
different from using remote volumes (potentially exported ZFS volumes)
to build a local pool from.
I don't see how; I'm referring to the method where hostA shares a local iSCSI
volume to hostB, where the volume is being mirrored.
Neil,
Thank you for your help.
However, I don't see anything about l2cache under the Cache Devices section of
the man page.
To be clear, there are two different vdev types defined in the zfs source code -
cache and l2cache. I am familiar with cache devices. I am curious about
l2cache devices.
Regards,
Peter
On Wed, Sep 22, 2010 at 20:15, Markus Kovero markus.kov...@nebula.fi wrote:
Such a configuration was known to cause deadlocks. Even if it works now (which
I don't expect to be the case), it will cause your data to be cached twice.
The CPU utilization will also be much higher, etc.
All in all
On 9/22/10 1:40 PM, Peter Taps wrote:
Neil,
Thank you for your help.
However, I don't see anything about l2cache under the Cache Devices section of the man page.
To be clear, there are two different vdev types defined in zfs source code - cache and l2cache.
I am familiar with cache devices. I am curious
Folks,
Here is the list of ZFS enhancements as mentioned for the latest Solaris 10
update:
* ZFS device replacement enhancements - namely autoexpand
* some changes to the zpool list command
* Holding ZFS snapshots
* Triple parity RAID-Z (raidz3)
* The logbias property
*
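As a rough illustration of a few of the named features (pool, disk, and
dataset names below are invented):

  # zpool set autoexpand=on tank
  # zpool create tank2 raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
  # zfs set logbias=throughput tank/db
  # zfs hold mytag tank/fs@snap

autoexpand grows a pool automatically when its disks are replaced with larger
ones, raidz3 tolerates three disk failures per vdev, logbias=throughput steers
synchronous writes away from a dedicated log device, and a held snapshot
cannot be destroyed until the hold is released.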
What options are there to turn off or reduce the priority of a resilver?
This is on a 400TB iSCSI-based zpool (8 LUNs per raidz2 vdev, 4 LUNs per
shelf, 6 drives per LUN - 16 shelves total) - my client has gotten to the
point that they just want to get their data off, but this resilver won't
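One approach that I believe applies to builds with the rewritten
scrub/resilver code is throttling resilver I/O through kernel tunables with
mdb; whether these tunables exist on a given build is an assumption to verify
before relying on them:

  # echo "zfs_resilver_delay/W0t5" | mdb -kw
  # echo "zfs_resilver_min_time_ms/W0t1000" | mdb -kw

The first adds more idle ticks between resilver I/Os when the pool has other
work to do; the second shrinks the slice of each transaction group spent
resilvering.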
Did you see this thread?
http://opensolaris.org/jive/thread.jspa?messageID=500659#500659
On that link I get a Tomcat HTTP error 500:
The server encountered an internal error () that prevented it from fulfilling
this request.
He had problems with ZFS. It turned out to be faulty RAM. ZFS
If you write to a zvol on a different host (via iSCSI) those writes
use memory in a different memory pool (on the other computer). No
deadlock.
I would expect that in a usual configuration one side of a mirrored
iSCSI-based pool would be on the same host as its underlying zvol's
pool.
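To make that concrete, here is a rough COMSTAR-based sketch of the cross-host
setup (hostnames, device names, and sizes are invented; the placeholders in
angle brackets come from the actual command output):

  hostA# zfs create -V 1T tank/vol0
  hostA# sbdadm create-lu /dev/zvol/rdsk/tank/vol0
  hostA# stmfadm add-view <guid-printed-by-sbdadm>
  hostA# itadm create-target

  hostB# iscsiadm add discovery-address <hostA-ip>
  hostB# iscsiadm modify discovery --sendtargets enable
  hostB# zpool create mpool mirror c3t0d0 <iscsi-lun-device>

Here mpool's remote half consumes memory on hostA, not hostB, which is why the
write path doesn't recurse into the same memory pool.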
On Sep 23, 2010, at 1:11 AM, Stephan Ferraro wrote:
He had problems with ZFS. It turned out to be faulty
RAM. ZFS is so sensitive it detects and reports
problems to you. No other filesystem does that, so
you think ZFS is problematic and switch. But the
other filesystems are slowly
On Sep 22, 2010, at 1:43 PM, Peter Taps wrote:
Folks,
Here is the list of ZFS enhancements as mentioned for the latest Solaris 10
update:
* ZFS device replacement enhancements - namely autoexpand
* some changes to the zpool list command
* Holding ZFS snapshots
* Triple
On Sep 22, 2010, at 1:46 PM, LIC mesh wrote:
What options are there to turn off or reduce the priority of a resilver?
This is on a 400TB iSCSI-based zpool (8 LUNs per raidz2 vdev, 4 LUNs per
shelf, 6 drives per LUN - 16 shelves total) - my client has gotten to the
point that they just
Back by popular demand! The USENIX LISA conference will be hosting a
full day ZFS Tutorial on Monday, November 8, 2010. A lot has changed since
last year's LISA conference and the new, up-to-date tutorial will surely be a
session of extreme gratification.
The conference home page is
On 09/22/10 13:40, Peter Taps wrote:
Neil,
Thank you for your help.
However, I don't see anything about l2cache under Cache devices man pages.
To be clear, there are two different vdev types defined in zfs source code - cache and l2cache.
I am familiar with cache devices. I am curious about
G'day,
My OpenSolaris (b134) box is low on space and has a ZFS mirror for root:
uname -a
SunOS wattage 5.11 snv_134 i86pc i386 i86pc
rpool 696G 639G 56.7G 91% 1.09x ONLINE -
It's currently a pair of 750GB drives. In my bag I have a pair of brand
spanking new 2TB Seagates that
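For what it's worth, the usual procedure for moving a mirrored root pool to
bigger disks (device names below are guesses, and the root pool needs
SMI-labeled slices) is to attach each new disk, let it resilver, make it
bootable, and detach the old one:

  # zpool attach rpool c0t0d0s0 c0t2d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  # zpool detach rpool c0t0d0s0
  # zpool set autoexpand=on rpool

Repeat the attach/resilver/detach for the second drive; with autoexpand on,
rpool picks up the extra capacity once both 2TB disks are in place.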