I would like to go back to my question for a second:
I checked with my Nexsan supplier and they confirmed that access to
every single disk in SATABeast is not possible. The smallest entities
I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll
lose too much disk space
Hi,
there was recently a bug reported against EXT4 that gets triggered by
KDE: https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781
Now I'd like to verify that my understanding of ZFS behavior and
implementations is correct, and that ZFS is unaffected by this kind of
issue. Maybe
The underlying problem with ext4 is that some KDE executables do
something like this:
1a) open and read data from file x, close file x
1b) open and truncate file x
1c) write data to file x
1d) close file x
or
2a) open and read data from file x, close file x
2b) open and truncate file x.new
2c)
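For illustration, the crash-safe variant of pattern (2) - write the new contents to x.new, fsync, then rename over x - can be sketched in C. This is a minimal sketch, not KDE's actual code; the `rewrite_file` helper and its names are hypothetical:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Crash-safe rewrite: write new contents to "x.new", fsync it, then
 * rename() over "x".  rename() is atomic within a filesystem, so a
 * crash leaves either the old or the new contents of x - never the
 * zero-length file the unsafe truncate-in-place pattern can produce. */
int rewrite_file(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.new", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (close(fd) != 0 || rename(tmp, path) != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}
```

The fsync before the rename is the step the affected applications skipped; without it, delayed allocation can commit the rename before the data blocks reach disk.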
Thomas Maier-Komor tho...@maier-komor.de wrote:
there was recently a bug reported against EXT4 that gets triggered by
KDE: https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781
Now I'd like to verify that my understanding of ZFS behavior and
implementations is correct, and ZFS is
I have a H8DM8-2 motherboard with a pair of AOC-SAT2-MV8 SATA
controller cards in a 16-disk Supermicro chassis.
I'm running OpenSolaris 2008.11, and the machine performs very well
unless I start to copy a large amount of data to the ZFS (software
raid) array that's on the Supermicro SATA
Hello,
I want to set up an OpenSolaris centralized storage server, using
ZFS as the underlying FS, on RAID 10 SATA disks.
I will export the storage blocks using iSCSI to RHEL 5 (fewer than 10
clients, and I will format the partition as EXT3)
I want to ask...
1. Is this setup suitable for
howard chen wrote:
Hello,
I want to set up an OpenSolaris centralized storage server, using
ZFS as the underlying FS, on RAID 10 SATA disks.
I will export the storage blocks using iSCSI to RHEL 5 (fewer than 10
clients, and I will format the partition as EXT3)
I want to ask...
1. Is this
The copy operation will make all the disks start seeking at the same time and
will make your CPU activity jump to a significant percentage to compute the
ZFS checksum and RAIDZ parity. I think you could be overloading your PSU
because of the sudden increase in power consumption...
However if
I'm working on testing this some more by doing a savecore -L right
after I start the copy.
BTW, I'm copying to a raidz2 of only 5 disks, not 16 (the chassis
supports 16, but isn't fully populated).
So far as I know, there is no spinup happening - these are not RAID
controllers, just dumb SATA
Okay, so for some reason my OpenSolaris 106 installation quit booting
properly (a completely separate problem). At this point I was still able
to boot into safe mode, but not into a normal desktop session.
First a disk failed, I replaced this disk with another. The array
completely resilvered. I was
Lars-Gunnar Persson wrote:
I would like to go back to my question for a second:
I checked with my Nexsan supplier and they confirmed that access to
every single disk in SATABeast is not possible. The smallest entities
I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll
Hello,
On Wed, Mar 11, 2009 at 10:20 PM, Darren J Moffat
darr...@opensolaris.org wrote:
1. Is this setup suitable for mission critical use now?
Yes, why wouldn't it be?
Because I just wonder why some other people are using zfs/fuse on Linux, e.g.
howard chen wrote:
Hello,
On Wed, Mar 11, 2009 at 10:20 PM, Darren J Moffat
darr...@opensolaris.org wrote:
1. Is this setup suitable for mission critical use now?
Yes, why wouldn't it be?
Because I just wonder why some other people are using zfs/fuse on Linux, e.g.
I blogged this a while ago:
http://blog.clockworm.com/2007/10/connecting-linux-centos-5-to-solaris.html
On Wed, Mar 11, 2009 at 1:02 PM, howard chen howac...@gmail.com wrote:
Hello,
On Wed, Mar 11, 2009 at 10:20 PM, Darren J Moffat
darr...@opensolaris.org wrote:
1. Is this setup suitable
Something is not right in the IO space. The messages talk about
vendor ID = 11AB
0x11AB  Marvell Semiconductor
0x1030  TMC Research (short name: TMC)
Does fmdump -eV give any clue when the box comes back up?
..Remco
Blake wrote:
I'm attaching a screenshot of the console just
fmdump is not helping much:
r...@host:~# fmdump -eV
TIME CLASS
fmdump: /var/fm/fmd/errlog is empty
Comparing that screenshot to the output of cfgadm is interesting -
it looks like the controller(s):
r...@host:~# cfgadm -v
Ap_Id Receptacle
I think that TMC Research is the company that designed the
Supermicro-branded controller card that has the Marvell SATA
controller chip on it. Googling around I see connections between
Supermicro and TMC.
This is the card:
http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm
Could the problem be related to this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6793353
I'm testing setting the maximum payload size as a workaround, as noted
in the bug notes.
On Wed, Mar 11, 2009 at 3:14 PM, Blake blake.ir...@gmail.com wrote:
I think that TMC Research
Looks worth a go. Otherwise: if the boot disk is also on that
controller, it may be too hosed to write anything to the boot disk,
hence FMA doesn't see any issue when it comes back up. Possible further
actions:
- Upgrade FW of controller to highest or known working level
- Upgrade driver or OS
Any chance this could be the motherboard? I suspect the controller.
The boot disks are on the built-in nVidia controller.
On Wed, Mar 11, 2009 at 3:41 PM, Remco Lengers re...@lengers.com wrote:
- Upgrade FW of controller to highest or known working level
I think I have the latest controller
Hi All,
I'm new to ZFS, so I hope this isn't too basic a question. I have a host where
I set up ZFS. The Oracle DBAs did their thing and I now have a number of ZFS
datasets with their respective clones and snapshots on serverA. I want to
export some of the clones to serverB. Do I need to
I'm not 100% sure what your question here is, but let me give you a
(hopefully) complete answer:
(1) ZFS is NOT a clustered file system, in the sense that it is NOT
possible for two hosts to have the same LUN mounted at the same time,
even if both are hooked to a SAN and can normally see that
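A sketch of the export/import dance this implies; the pool name `tank` is an assumption, not from the original thread:

```shell
# Move a pool between two hosts that can both see the same LUNs.
# The pool must never be imported on both hosts at once.

# On hostA: quiesce users of the pool, then export it.
zpool export tank

# On hostB: scan attached devices for importable pools ...
zpool import

# ... and import the one we want.  Use -f only if hostA crashed
# without exporting and you are certain it no longer has the pool open.
zpool import tank
```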
Hi Eric,
Thanks for the quick response. Then on hostB, the new LUN will need the same
amount of disk space for the pool, as on hostA, if I'm understanding you
correctly. Correct? Thanks!
- Original Message
From: Erik Trimble erik.trim...@sun.com
To: Grant Lowe
Erik Trimble wrote:
I'm not 100% sure what your question here is, but let me give you a
(hopefully) complete answer:
(1) ZFS is NOT a clustered file system, in the sense that it is NOT
possible for two hosts to have the same LUN mounted at the same time,
even if both are hooked to a SAN and can
Blake wrote:
I'm attaching a screenshot of the console just before reboot. The
dump doesn't seem to be working, or savecore isn't working.
On Wed, Mar 11, 2009 at 11:33 AM, Blake blake.ir...@gmail.com wrote:
I'm working on testing this some more by doing a savecore -L right
after I start
On Wed, 2009-03-11 at 13:50 -0700, Grant Lowe wrote:
Hi Eric,
Thanks for the quick response. Then on hostB, the new LUN will need the same
amount of disk space for the pool, as on hostA, if I'm understanding you
correctly. Correct? Thanks!
I'm assuming you're referring to my second
I guess I didn't make it clear that I had already tried using savecore
to retrieve the core from the dump device.
I added a larger zvol for dump, to make sure that I wasn't running out
of space on the dump device:
r...@host:~# dumpadm
Dump content: kernel pages
Dump device:
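For reference, one way to set up such a dedicated dump zvol; the name `rpool/bigdump` and the 4G size are assumptions, not from the original post:

```shell
# Create a larger zvol and point the dump device at it.
zfs create -V 4G rpool/bigdump
dumpadm -d /dev/zvol/dsk/rpool/bigdump

# Force a live crash dump of the running system, then retrieve it.
savecore -L
```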
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HCL sometimes.
I am trying to locate chipset info but having a hard time...
I'm doing a little testing and I hit a strange point. Here is a zvol (clone)
pool1/volclone  type         volume            -
pool1/volclone  origin       pool1/v...@diff1  -
pool1/volclone  reservation  none              default
pool1/volclone  volsize
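For context, that listing is the kind of output `zfs get` produces for a clone. A minimal sketch of how such a clone comes to exist; the volume name and size are assumptions:

```shell
# Illustrative recreation of the setup above.
zfs create -V 1G pool1/vol                  # the original zvol
zfs snapshot pool1/vol@diff1                # point-in-time snapshot
zfs clone pool1/vol@diff1 pool1/volclone    # writable clone of the snapshot

# Inspect the clone's properties, as in the listing above.
zfs get type,origin,reservation,volsize pool1/volclone
```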
Looks like I may have slightly answered my question.
I found that it uses the server board S5000PSL, which is in the HCL:
http://www.sun.com/bigadmin/hcl/data/systems/details/2944.html
It's not listed in the OpenSolaris one, only the Solaris one. I would
think something this old (dated 2007)
Hi Eric,
Thanks. That scenario makes sense. I have a better idea of how to set things up
now. It's a three-step process, which I didn't realize.
grant
- Original Message
From: Erik Trimble erik.trim...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent:
Hey Richard,
That explanation helps clarify things. I have another question, but maybe it'll
be a new topic. Basically I would like to export some stuff off a VxFS file
system on a different host, and import it to ZFS. Is there a way to do that?
grant
- Original Message
From: Richard
On Wed, 11 Mar 2009 17:55:23 -0700
mike mike...@gmail.com wrote:
Looks like I may have slightly answered my question.
I found that it uses the server board S5000PSL, which is in the HCL:
http://www.sun.com/bigadmin/hcl/data/systems/details/2944.html
It's not listed in the OpenSolaris one,
On Mar 11, 2009, at 20:14, mike wrote:
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HCL sometimes.
I am trying to locate chipset info but having a hard time...
Doesn't it require Java and X11?
On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca wrote:
On Mar 11, 2009, at 20:14, mike wrote:
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
On Mar 11, 2009, at 21:59, mike wrote:
On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca
wrote:
If you know someone who already has the hardware, you can ask them
to run
the Sun Device Detection Tool:
http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp
It runs
In the style of a discussion over a beverage: after having far too
little sleep, I recently pondered a design for implementing user quotas
on ZFS.
It is probably nothing new, but I would be curious what you experts
think of the feasibility of
If you're having issues with a disk controller or disk I/O driver, it's highly
likely that a savecore to disk after the panic will fail. I'm not sure how to
work around this; maybe a dedicated dump device on a controller that uses a
different driver than the one you're having issues with?
Brian H. Nelson wrote:
I'm doing a little testing and I hit a strange point. Here is a zvol
(clone)
pool1/volclone  type         volume            -
pool1/volclone  origin       pool1/v...@diff1  -
pool1/volclone  reservation  none              default
My dump device is already on a different controller - the motherboard's
built-in nVidia SATA controller.
The raidz2 vdev is the one I'm having trouble with (copying the same
files to the mirrored rpool on the nVidia controller works nicely). I
do notice that, when using cp to copy the files to the
Hm -
Crashes, or hangs? Moreover - how do you know a CPU is pegged?
Seems like we could do a little more discovery on what the actual
problem here is, as I can read it about 4 different ways.
By this last piece of information, I'm guessing the system does not
crash, but goes really really