If you disable the ZIL, the filesystem still stays correct in RAM, and the only way you lose any data such as you've described is to have an ungraceful power down or reboot.
The advice I would give is: Do zfs autosnapshots frequently (say ... every
5 minutes, keeping the most recent 2 hours of
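The frequent-autosnapshot advice above can be sketched as a small cron-driven script. This is a minimal sketch under assumptions: the dataset name `tank/data` is hypothetical, and the cron entry is presumed to run it every 5 minutes, keeping 24 snapshots (2 hours' worth).

```shell
#!/bin/sh
# Sketch of the frequent-autosnapshot idea, assuming a hypothetical
# dataset "tank/data" and a cron entry that runs this every 5 minutes.
# Keeps the newest 24 snapshots (2 hours at 5-minute steps).

DATASET=tank/data   # assumption: substitute your own dataset
KEEP=24

snapname() {
    # One name per 5-minute slot, e.g. auto-2010-04-01-1205
    echo "auto-$(date +%Y-%m-%d-%H%M)"
}

if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "${DATASET}@$(snapname)"
    # Prune all but the newest $KEEP auto- snapshots.
    # (head -n -N is GNU; on Solaris, count the lines and use sed instead.)
    zfs list -H -t snapshot -o name -s creation |
        grep "^${DATASET}@auto-" |
        head -n -"$KEEP" |
        while read -r snap; do
            zfs destroy "$snap"
        done
fi
```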
If you see the workload on the wire go through regular patterns of fast/slow response, then there are some additional tricks that can be applied to increase the overall throughput and smooth the jaggies. But that is fodder for another post...
Can you please elaborate on what can be done here as I
Can you elaborate? Just today, we got the replacement drive that has precisely the right version of firmware and everything. Still, when we plugged in that drive and created a simple volume in the StorageTek RAID utility, the new drive is 0.001 GB smaller than the old drive. I'm still
If you have an ungraceful shutdown in the middle of writing stuff, while the
ZIL is disabled, then you have corrupt data. Could be files that are
partially written. Could be wrong permissions or attributes on files.
Could be missing files or directories. Or some other problem.
Some changes
On Mar 31, 2010, at 7:51 AM, Charles Hedrick wrote:
We're getting the notorious "cannot destroy ... dataset already exists" error. I've seen a number of reports of this, but none of the reports seem to get any response. Fortunately this is a backup system, so I can recreate the pool, but it's
On Wed, 31 Mar 2010, Damon Atkins wrote:
Why do we still need /etc/zfs/zpool.cache file???
The cache file contains a list of pools to import, not a list of pools
that exist. If you do a zpool export foo and then reboot, we don't want
foo to be imported after boot completes.
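The export-then-reboot behaviour Mark describes can be seen with a throwaway file-backed pool. This is a sketch, not a recipe: the pool name `foo` and the backing file path are hypothetical, and the whole thing is guarded so it only runs as root on a system that actually has ZFS.

```shell
# Why an exported pool must not come back at boot: "zpool export" removes
# the pool from /etc/zfs/zpool.cache, and boot-time import only considers
# pools listed in the cache. Pool name "foo" is hypothetical.

pool_in_cache() {
    # usage: strings /etc/zfs/zpool.cache | pool_in_cache <poolname>
    grep -qx "$1"
}

if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    mkfile 64m /var/tmp/foo.img              # Solaris; truncate -s 64m on Linux
    zpool create foo /var/tmp/foo.img
    strings /etc/zfs/zpool.cache | pool_in_cache foo && echo "foo cached"
    zpool export foo                         # drops foo from the cache
    strings /etc/zfs/zpool.cache | pool_in_cache foo || echo "foo not cached"
    zpool import -d /var/tmp foo             # explicit import re-adds it
    zpool destroy foo && rm /var/tmp/foo.img
fi
```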
On 03/31/10 03:50 AM, Damon Atkins wrote:
Why do we still need /etc/zfs/zpool.cache file???
(I could understand it was useful when zfs import was slow)
zpool import is now multi-threaded (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844191), hence a lot faster; each disk
On 03/31/10 05:11 PM, Brett wrote:
Hi Folks,
I'm in a shop that's very resistant to change. The management here are looking for major justification of a move away from UFS to ZFS for root file systems. Does anyone know if there are any whitepapers/blogs/discussions extolling the benefits of
On 03/31/10 17:53, Erik Trimble wrote:
Brett wrote:
This approach does not solve the problem. When you do a snapshot, the txg is committed. If you wish to reduce the exposure to loss of sync data and run with ZIL disabled, then you can change the txg commit interval -- however changing the txg commit interval will not eliminate the
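The txg commit interval referred to above is the `zfs_txg_timeout` tunable on OpenSolaris-era systems; the value below is illustrative only, not a recommendation.

```shell
# Persistent: append to /etc/system and reboot (value in seconds):
#   set zfs:zfs_txg_timeout = 5
#
# Live, on a running kernel, via mdb (the 0t prefix means decimal):
#   echo 'zfs_txg_timeout/W 0t5' | mdb -kw
#
# Inspect the current value:
#   echo 'zfs_txg_timeout/D' | mdb -k
```

Note that a shorter interval only bounds the window of async data at risk; it does not make sync writes safe with the ZIL disabled, which is the point being made above.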
On 31/03/2010 16:19, Mark J Musante wrote:
Is that what sync means in Linux?
A sync write is one in which the application blocks until the OS acks that
the write has been committed to disk. An async write is given to the OS,
and the OS is permitted to buffer the write to disk at its own discretion.
Meaning the async write function
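The sync/async distinction described above can be tried from a shell with GNU dd (the `oflag`/`conv` flags below are GNU-specific; Solaris dd lacks them, and the file paths are scratch files):

```shell
# Async: dd returns as soon as the OS has buffered the data in the page
# cache; the disk write happens later, at the OS's discretion.
dd if=/dev/zero of=/tmp/async.dat bs=8k count=128 2>/dev/null

# Sync: oflag=dsync makes every 8k write block until the device acks it,
# like an application writing to a file opened with O_DSYNC.
dd if=/dev/zero of=/tmp/sync.dat bs=8k count=128 oflag=dsync 2>/dev/null

# Middle ground: buffer everything, then issue one fsync barrier at the end.
dd if=/dev/zero of=/tmp/fsync.dat bs=8k count=128 conv=fsync 2>/dev/null
```

On ZFS served over NFS, it is the dsync-style writes whose latency a slog/ZIL device improves.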
Dude, don't be so arrogant. Acting like you know what I'm talking about
better than I do. Face it that you have something to learn here.
You may say that, but then you post this:
Why do you think that a Snapshot has a better quality than the last
snapshot available?
If you rollback to a
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey solar...@nedharvey.com wrote:
A MegaRAID card with write-back cache? It should also be cheaper than the F20.
I haven't posted results yet, but I just finished a few weeks of extensive benchmarking various configurations. I can say this:
It would be nice for Oracle/Sun to produce a separate script which resets system/devices back to an install-like beginning, so if you move an OS disk with the current password file and software from one system to another, it can rebuild the device tree on the new system.
You mean
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey solar...@nedharvey.com wrote:
We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote:
On 01/04/2010 14:49, Ross Walker wrote:
We're talking about the sync for NFS exports in Linux; what do they mean with sync NFS exports?
See section A1 in the FAQ:
http://nfs.sourceforge.net/
I think B4 is the answer to Casper's question:
BEGIN QUOTE
Linux servers (although not
On Wed, March 31, 2010 21:25, Bart Smaalders wrote:
ZFS root will be the supported root filesystem for Solaris Next; we've
been using it for OpenSolaris for a couple of years.
This is already supported:
Starting in the Solaris 10 10/08 release, you can install and boot from a
ZFS root file
On Thu, Apr 1, 2010 at 9:06 AM, David Magda dma...@ee.ryerson.ca wrote:
On Mar 31, 2010, at 7:57 PM, Charles Hedrick wrote:
So that eliminates one of my concerns. However the other one is still an
issue. Presumably Solaris Cluster shouldn't import a pool that's still active
on the other system. We'll be looking more carefully into that.
Older releases of
On Apr 1, 2010, at 12:43 AM, tomwaters wrote:
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat darr...@opensolaris.org wrote:
On Thu, 1 Apr 2010, Edward Ned Harvey wrote:
If I'm wrong about this, please explain.
I am envisioning a database, which issues a small sync write, followed by a
larger async write. Since the sync write is small, the OS would prefer to
defer the write and aggregate into a larger block. So
On 01/04/2010 13:01, Edward Ned Harvey wrote:
Is that what sync means in Linux?
A sync write is one in which the application blocks until the OS acks that
the write has been committed to disk. An async write is given to the OS,
and the OS is permitted to buffer the write to disk at its
On Thu, 1 Apr 2010, Edward Ned Harvey wrote:
Dude, don't be so arrogant. Acting like you know what I'm talking about
better than I do. Face it that you have something to learn here.
Geez!
Yes, all the transactions in a transaction group are either committed
entirely to disk, or not at
It does seem like rollback to a snapshot does help here (to assure that sync/async data is consistent), but it certainly does not help any NFS clients. Only a broken application uses sync writes sometimes, and async writes at other times.
But doesn't that snapshot possibly have the same
On Thu, 1 Apr 2010, casper@sun.com wrote:
On Wed, 31 Mar 2010, Charles Hedrick wrote:
3) This is Solaris Cluster. We tried forcing a failover. The pool
mounted on the other server without dismounting on the first. zpool
list showed it mounted on both machines. zpool iostat showed I/O
actually occurring on both systems.
This is a
You mean /usr/sbin/sys-unconfig?
No, it does not reset a system back far enough. You're still left with the original path_to_inst and the device tree. E.g. take a disk to a different system, and the first disk might end up being sd10 and c15t0d0s0 instead of sd0 and c0, without cleaning up the system
Hoping to hear from someone who has similar equipment:
athlon64 3400+ - Abit motherboard (unknown model)
The mobo has 2 built-in SATA controllers, probably the older 1.5 Gb kind.
And a PCI Adaptec 1205a (two SATA ports [internal], also 1.5 Gb).
I want to install a different PCI SATA controller
You might want to take this issue over to
caiman-disc...@opensolaris.org, because this is more of an
installation/management issue than a zfs issue. Other than providing a
mechanism for updating the zpool.cache file, the actions listed below
are not directly related to zfs.
I believe that
Hi Bruno,
I agree that the raidz2 example on this page is weak and I will provide
a better one.
ZFS is very flexible and can be configured many different ways.
If someone new to ZFS wants to take 3 old (but reliable) disks and make
a raidz2 configuration for testing, we would not consider this
Hi Marlanne,
I can import a pool that is created with files on a system running the
Solaris 10 10/09 release. See the output below.
This could be a regression from a previous Solaris release, although I
can't reproduce it, but creating a pool with files is not a recommended
practice as
Cindy Swearingen wrote:
If someone new to ZFS wants to take 3 old (but reliable) disks and make a raidz2 configuration for testing, we would not consider this a nonsensical idea. You can then apply what you learn about ZFS space allocation and redundancy to a new configuration.
Nonsensical
On Thu, Apr 1, 2010 at 11:46 AM, Carson Gaspar car...@taltos.org wrote:
Nonsensical may be a bit strong, but I can see no possible use case where
a 3 disk raidz2 isn't better served by a 3-way mirror.
Once bp_rewrite is done, you'll be able add disks to the raidz2. I suppose
that's one
Brandon High wrote:
Hello,
I have had this problem this week. Our ZIL SSD died (APT SLC SSD 16GB). Because we had no spare drive in stock, we ignored it. Then we decided to update our Nexenta 3 alpha to beta, exported the pool, made a fresh install to have a clean system, and tried to import the pool. We only
During the IPS upgrade, the file system got full, then I cannot do anything to recover it.
# df -kl
Filesystem              1K-blocks     Used Available Use% Mounted on
rpool/ROOT/opensolaris    4976642  4976642         0 100% /
swap                     14217564      244
Hi Casper,
:-)
Nice to see that your stream still reaches just as far :-)
I'm happy to see that it is now the default and I hope this will cause the Linux NFS client implementation to be faster for conforming NFS servers. Interesting thing is that apparently defaults on Solaris and Linux are
Jeroen Roodhart wrote:
The thread was started to get insight into behaviour of the F20 as ZIL. _My_ particular interest would be to be able to answer why performance doesn't seem to scale up when adding vmod-s...
My best guess would be latency. If you are latency bound, adding additional
On Thu, Apr 1, 2010 at 12:46 PM, Eiji Ota eiji@oracle.com wrote:
# cd /var/adm
# rm messages.?
rm: cannot remove `messages.0': No space left on device
rm: cannot remove `messages.1': No space left on device
I think doing cat /dev/null > /var/adm/messages.1 will work.
-B
--
Brandon
It doesn't have to be F20. You could use the Intel X25 for example.
The MLC-based disks are bound to be too slow (we tested with an OCZ Vertex Turbo). So you're stuck with the X25-E (which Sun stopped supporting for some reason). I believe most normal SSDs do have some sort of cache and
On Thu, Apr 1, 2010 at 1:39 PM, Eiji Ota eiji@oracle.com wrote:
Thanks. It worked, but the fs still says it's full. Is that normal, and can I get some space eventually (if I continue this)?
You may need to destroy some snapshots before the space becomes available. zfs list -t snapshot will
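The snapshot-destroy suggestion above can be sketched as follows. On a 100%-full pool, `rm` often frees nothing because the removed blocks are still referenced by snapshots; destroying snapshots (oldest first is a common policy) releases them. The helper is plain text processing; the zfs commands only run where ZFS exists.

```shell
# Find which snapshots are holding space and destroy the oldest one.

oldest_snapshot() {
    # Expects "zfs list -H -t snapshot -o name -s creation" on stdin
    # (oldest first); prints the first line.
    head -n 1
}

if command -v zfs >/dev/null 2>&1; then
    zfs list -t snapshot -o name,used -s creation   # see what holds space
    snap=$(zfs list -H -t snapshot -o name -s creation | oldest_snapshot)
    [ -n "$snap" ] && zfs destroy "$snap"
fi
```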
Is this callstack familiar to anyone? It just happened on a Solaris 10
update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 kern.notice] fe8000d1b920
enh == Edward Ned Harvey solar...@nedharvey.com writes:
enh Dude, don't be so arrogant. Acting like you know what I'm
enh talking about better than I do. Face it that you have
enh something to learn here.
funny! AIUI you are wrong and Casper is right.
ZFS recovers to a
la == Lori Alt lori@oracle.com writes:
la I'm only pointing out that eliminating the zpool.cache file
la would not enable root pools to be split. More work is
la required for that.
makes sense. All the same, please do not retaliate against the
bug-opener by adding a
On 01/04/2010 20:58, Jeroen Roodhart wrote:
I'm happy to see that it is now the default and I hope this will cause the Linux NFS client implementation to be faster for conforming NFS servers.
Interesting thing is that apparently defaults on Solaris and Linux are chosen such that one
On 01/04/2010 15:24, Richard Elling wrote:
On 01/04/2010 02:01, Charles Hedrick wrote:
So we tried recreating the pool and sending the data again.
1) Compression wasn't set on the copy, even though I did send -R, which is supposed to send all properties.
2) I tried killing the send | receive pipe. Receive couldn't be killed. It hung.
3)
Hi All,
I just got across a strange (well... at least for me) situation with ZFS and I
hope you might be able to help me out. Recently I built a new machine from
scratch for my storage needs which include various CIFS / NFS and most
importantly VMware ESX based operations (in conjunction with
On 04/ 2/10 02:52 PM, Andrej Gortchivkin wrote:
On Thu, Mar 25, 2010 at 11:55:29AM -0700, Ray Van Dolson wrote:
On Thu, Mar 25, 2010 at 11:51:25AM -0700, Marion Hakanson wrote:
rvandol...@esri.com said:
We have a Silicon Mechanics server with a SuperMicro X8DT3-F (Rev 1.02)
(onboard LSI 1068E (firmware 1.28.02.00) and a SuperMicro
On Thu, Apr 1 at 19:08, Ray Van Dolson wrote:
Well, haven't yet been able to try the firmware suggestion, but we did
replace the backplane. No change.
I'm not sure the firmware change would do any good either. As it is
now, as long as the SSD drives are attached directly to the LSI
I created the pool by using:
zpool create ZPOOL_SAS_1234 raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0
However, now that you mentioned the lack of redundancy, I see where the problem is. I guess it will then remain a mystery how this happened, since I'm very careful when engaging the commands and I'm sure
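The zpool create above makes a 4-disk single-parity raidz, which survives one disk failure. For comparison, a sketch of more redundant layouts with the same four disks; the device names are reused from the command above as placeholders, and the 500 GB per-disk size is assumed purely for the arithmetic.

```shell
# Usable-space arithmetic for 4 disks, assuming 500 GB each (illustrative):
disks=4; parity=2; size_gb=500
echo "raidz2 usable: $(( (disks - parity) * size_gb )) GB"   # 1000 GB
echo "2x2 mirror usable: $(( disks / 2 * size_gb )) GB"      # 1000 GB

if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    # Double parity: any two of the four disks may fail.
    zpool create ZPOOL_SAS_1234 raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0
    # Or striped mirrors: faster, tolerates one failure per mirror pair.
    # zpool create ZPOOL_SAS_1234 mirror c7t0d0 c7t1d0 mirror c7t2d0 c7t3d0
fi
```

Same usable space either way; the choice is failure tolerance versus performance.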
On 04/ 2/10 03:30 PM, Andrej Gortchivkin wrote: