It's almost certainly the SIL3114 controller.
Google SIL3114 data corruption -- it's nasty.
Jeff
On Thu, Sep 25, 2008 at 07:50:01AM +0200, Mikael Karlsson wrote:
I have a strange problem involving changes in a large file on a mirrored zpool in
OpenSolaris snv96.
We use it as storage in a
Good question.
Well, the hosts are NetBackup Media servers. The idea behind the design is that
we stream the RMAN stuff to disk, via NFS mounts, and then write to tape during
the day. With the SAN-attached disks sitting on these hosts and with disk
storage units configured for NBU the data
Regarding Solaris, or just a generic cold call? Either way it's interesting to
hear that they're making calls. The impression I've had from all the press is
that they've been struggling to meet demand.
Hello!
Anyone with experience with the SIL3124 chipset? Does it work well?
It's in the HCL, but since the SIL3114 apparently is totally crap I'm a bit
skeptical of Silicon Image..
Regards
Mikael
To keep everyone updated - thanks to Victor we have recovered AND
repaired all of the data that was lost in the incident. Victor may be
able to explain in detail what he did to accomplish this; I only know
it involved loading a patched zfs kernel module.
I would like to shout a big thanks to
Hi,
I'm building a new ZFS fileserver for our lab and I'd like to have these
features:
- take a snapshot of users' home directories every N minutes (N is 5 or 10)
- remove all old snapshots, keep just these:
- all snapshots made during last H hours (H=24)
- keep one snapshot per day (e.g.
Before re-inventing the wheel, does anyone have any nice shell script to do this
kind of thing (to be executed from cron)?
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
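A minimal cron-driven sketch along those lines (this is just an illustration, not
Tim's auto-snapshot service; the filesystem name, the auto- prefix and the
retention count are all made-up assumptions):

  #!/bin/sh
  # Rough sketch: snapshot a filesystem and keep only the newest KEEP
  # "auto-" snapshots; names are chosen so they sort by creation time.
  FS=tank/home
  KEEP=288                     # 24 hours' worth at 5-minute intervals
  NOW=`date +%Y%m%d-%H%M`

  zfs snapshot $FS@auto-$NOW

  COUNT=`zfs list -H -t snapshot -o name -s creation | grep -c "^$FS@auto-"`
  if [ "$COUNT" -gt "$KEEP" ]; then
      # destroy the oldest snapshots beyond the retention count
      zfs list -H -t snapshot -o name -s creation | grep "^$FS@auto-" |
          head -`expr $COUNT - $KEEP` |
          while read SNAP; do
              zfs destroy "$SNAP"
          done
  fi

Run from cron every N minutes this handles the "keep the last H hours" tier; the
one-per-day tier could be a second cron job using the same idea with a different
prefix and retention count.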
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
Before re-inventing the wheel, does anyone have any nice shell script to do this
kind of thing (to be executed from cron)?
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
On Thu, 2008-09-25 at 12:07 +0200, [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
Storage Checkpoints in Veritas software has this feature (removing
the oldest checkpoint in case of 100% filesystem usage) by default.
Why not add such an option to ZFS?
Mikael Karlsson [EMAIL PROTECTED] wrote:
Hello!
Anyone with experience with the SIL3124 chipset? Does it work well?
It's in the HCL, but since the SIL3114 apparently is totally crap I'm a bit
skeptical of Silicon Image..
Yesterday, I tried the SIL 3114, which should be the same. It comes with a
On Thu, Sep 25, 2008 at 11:30:04AM +0100, Tim Foster wrote:
On Thu, 2008-09-25 at 12:07 +0200, [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
Storage Checkpoints in Veritas software has this feature (removing
the oldest checkpoint in case of 100%
On Thu, 2008-09-25 at 12:52 +0200, [EMAIL PROTECTED] wrote:
nobody is going to assume the user's intentions. Just give us a
snapshot-related property which we can set to on/off and everybody
can set up ZFS according to his/her needs.
Then that'll be there in nv_100. Enjoy!
cheers,
Tim,
- Frequent snapshots, taken every 15 minutes, keeping the 4 most recent
- Hourly snapshots taken once every hour, keeping 24
- Daily snapshots taken once every 24 hours, keeping 7
- Weekly snapshots taken once every 7 days, keeping 4
- Monthly snapshots taken on the first day of
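One way to lay out those tiers in crontab(1) is sketched below; snaprotate.sh is a
hypothetical wrapper (take a snapshot with the given prefix, keep the given number
of them), not an existing tool, and the monthly retention count is a guess since
the schedule above is cut off:

  # frequent: every 15 minutes, keep 4
  0,15,30,45 * * * * /usr/local/bin/snaprotate.sh frequent 4
  # hourly: keep 24
  0 * * * *          /usr/local/bin/snaprotate.sh hourly 24
  # daily: keep 7
  0 0 * * *          /usr/local/bin/snaprotate.sh daily 7
  # weekly: keep 4
  0 0 * * 0          /usr/local/bin/snaprotate.sh weekly 4
  # monthly: first day of the month (retention count assumed)
  0 0 1 * *          /usr/local/bin/snaprotate.sh monthly 12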
Daniel Rock [EMAIL PROTECTED] wrote:
Mikael Karlsson schrieb:
Hello!
Anyone with experience with the SIL3124 chipset? Does it work well?
It's in the HCL, but since the SIL3114 apparently is totally crap I'm a bit
skeptical of Silicon Image..
I'm running with two SIL3132 PCIe cards in
Joerg Schilling schrieb:
If it works for your system, be happy. I mentioned that the controller may
not be usable in all systems as it hangs up the BIOS in my machine if there
is a disk connected to the card.
I disabled the BIOS on my cards because I don't need it. I boot from one of the
Daniel Rock [EMAIL PROTECTED] wrote:
Joerg Schilling schrieb:
If it works for your system, be happy. I mentioned that the controller may
not be usable in all systems as it hangs up the BIOS in my machine if there
is a disk connected to the card.
I disabled the BIOS on my cards
Joerg Schilling schrieb:
Daniel Rock [EMAIL PROTECTED] wrote:
I disabled the BIOS on my cards because I don't need it. I boot from one
of the onboard SATA ports.
OK, how did you do this?
This will depend on the card you are using. I simply had to remove a
jumper (Dawicontrol DC-300e).
Almost exactly what I was planning to configure here, Nils, with a couple of
minor changes. I was planning on taking 10 weekly backups since you
occasionally get 5-week months, and depending on storage capacity, we're also
considering annual snapshots.
I quite like Tim's idea of having 31
Daniel Rock [EMAIL PROTECTED] wrote:
Joerg Schilling schrieb:
Daniel Rock [EMAIL PROTECTED] wrote:
I disabled the BIOS on my cards because I don't need it. I boot from one
of the onboard SATA ports.
OK, how did you do this?
This will depend on the card you are using. I simply had
On Thu, Sep 25, 2008 at 12:51:57PM +0200, Joerg Schilling wrote:
Mikael Karlsson [EMAIL PROTECTED] wrote:
Anyone with experience with the SIL3124 chipset? Does it work well?
Yesterday, I tried the SIL 3114 which should be the same. It comes with
a BIOS that hangs up completely directly
On 25 Sep 2008, at 14:40, Ross wrote:
For a default setup, I would have thought a year's worth of data
would be enough, something like:
Given that this can presumably be configured to suit everyone's
particular data retention plan, for a default setup, what was
originally proposed seems
Brian Hechinger [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 12:51:57PM +0200, Joerg Schilling wrote:
Mikael Karlsson [EMAIL PROTECTED] wrote:
Anyone with experience with the SIL3124 chipset? Does it work well?
Yesterday, I tried the SIL 3114 which should be the same. It comes
Kyle McDonald wrote:
Darren J Moffat wrote:
John Cecere wrote:
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area and the dump
device.
And indeed when I try to use a zvol as both, I get:
zvol cannot be used as a swap device
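In other words the swap area and the dump device have to be two separate zvols.
A minimal sketch, assuming a pool named rpool and purely illustrative sizes:

  # create two separate volumes
  zfs create -V 2G rpool/dump
  zfs create -V 2G rpool/swap

  # point crash dumps at one of them...
  dumpadm -d /dev/zvol/dsk/rpool/dump

  # ...and add the other as swap
  swap -a /dev/zvol/dsk/rpool/swap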
On Thu, Sep 25, 2008 at 04:23:42PM +0200, Joerg Schilling wrote:
The lock I observed happened inside the BIOS of the card after the main board
BIOS jumped into the board BIOS. This was before any bootloader was involved.
I wonder if it's not necessarily the BIOS of the card, but
I asked Tim something like that when he posted his last update and from his
reply it looks like something is in the works:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
He was also blogging about this stuff in 2006 :)
http://blogs.sun.com/timf/entry/zfs_on_your_desktop
I've also
[EMAIL PROTECTED] wrote on 09/25/2008 05:30:04 AM:
On Thu, 2008-09-25 at 12:07 +0200, [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
Storage Checkpoints in Veritas software has this feature (removing
the oldest checkpoint in case of 100% filesystem
[EMAIL PROTECTED] wrote on 09/25/2008 09:16:48 AM:
On 25 Sep 2008, at 14:40, Ross wrote:
For a default setup, I would have thought a years worth of data
would be enough, something like:
Given that this can presumably be configured to suit everyone's
particular data retention plan, for a
On Thu, 2008-09-25 at 10:19 -0500, [EMAIL PROTECTED] wrote:
That snap schedule seems reasonable to me. Related to the cleanup part
of the doc linked, do you know the rationale for killing off the most recent
(15-minute and hourly) snaps vs. the oldest (monthly) first?
It's a tough call
[EMAIL PROTECTED] wrote on 09/25/2008 10:34:41 AM:
On Thu, 2008-09-25 at 10:19 -0500, [EMAIL PROTECTED] wrote:
That snap schedule seems reasonable to me. Related to the cleanup part
of the doc linked, do you know the rationale for killing off the most recent
(15-minute and hourly)
Wade,
that order. Also, I guess the use case in my mind would leave a desktop user
more likely to need access to a few minutes, hours or days ago than 12
months ago.
You are guessing that, but I am a desktop user who'd rather like the contrary.
I think Tim has already stated that he would not
Hi Detlef,
I have no problems with build 98.
But the question about how many snapshots you have is a good one... I just
booted a thumper with thousands of snapshots and it took me over an hour.
Don't you have snapshots of the boot environment?
Kind regards
Lars
Richard Elling wrote:
Would there be no performance benefits from having swap read/write
from contiguous preallocated space also?
Not really; if you have to swap, you have no performance. Period.
You lose about 3 orders of magnitude of memory latency by going to
disk. If one is 2x
Hi Wade,
We considered a number of approaches, including just deleting the oldest
snapshots first and progressing through to the newest snapshots.
When you consider the default snapshot schedules we are going to use, the
model is that snapshots get thinned out over time. So in situations where disk
Jonathan Hogg wrote:
It would be great if there was some way to know if a snapshot contains
blocks for a particular file, i.e., that snapshot contains an earlier
version of the file than the next snapshot / now. If you could do that
and make ls support it with an additional flag/column,
On 09/24/08 10:57 PM, Jeff Bonwick wrote:
It's almost certainly the SIL3114 controller.
Google SIL3114 data corruption -- it's nasty.
I've also had the misfortune of experiencing Silicon Image in the past. My
corruption was with other file types and not even ZFS. Silicon Image is
The lock I observed happened inside the BIOS of the card after the main board
BIOS jumped into the board BIOS. This was before any bootloader was involved.
Is there a disk using a zpool with an EFI disk label? Here's a link to an old
thread about systems hanging in BIOS POST when they
On 25 Sep 2008, at 17:14, Darren J Moffat wrote:
Chris Gerhard has a zfs_versions script that might help:
http://blogs.sun.com/chrisg/entry/that_there_is
Ah. Cool. I will have to try this out.
Jonathan
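In the meantime, a rough stand-in (just a sketch, not Chris Gerhard's zfs_versions
script; the script name and usage are made up) is to walk the .zfs/snapshot
directory and list each snapshot's copy of the file, so differing versions stand out:

  #!/bin/sh
  # Usage: filevers.sh <filesystem-mountpoint> <path-relative-to-it>
  # e.g.   filevers.sh /export/home jonathan/notes.txt
  MNT="$1"
  REL="$2"

  for SNAPDIR in "$MNT"/.zfs/snapshot/*; do
      if [ -f "$SNAPDIR/$REL" ]; then
          # identical size/mtime in adjacent snapshots means the file
          # did not change between those snapshots
          ls -l "$SNAPDIR/$REL"
      fi
  done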
Jürgen Keil [EMAIL PROTECTED] wrote:
The lock I observed happened inside the BIOS of the card after the main board
BIOS jumped into the board BIOS. This was before any bootloader was involved.
Is there a disk using a zpool with an EFI disk label? Here's a link to an old
thread
np == Neal Pollack [EMAIL PROTECTED] writes:
np No attempt to acknowledge or recall defective silicon. No
np interest in customer data loss. Well, this customer has no
np further interest in Silicon Image. I refuse to acknowledge
np that they exist.
1. too bad Sil is the only
Brian Hechinger [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 04:23:42PM +0200, Joerg Schilling wrote:
The lock I observed happened inside the BIOS of the card after the main board
BIOS jumped into the board BIOS. This was before any bootloader was involved.
I wonder if
On Thu, Sep 25, 2008 at 13:38, Miles Nordin [EMAIL PROTECTED] wrote:
1. too bad Sil is the only one selling chips on PCI cards that have
source code for their drivers.
Indeed, it is too bad. But I'd rather have a working closed blob than
a driver that is Free Software for a device that is
mk == Mikael Karlsson [EMAIL PROTECTED] writes:
mk Anyone with experience with the SIL3124 chipset? Does it work
mk well?
In Solaris, I believe the Sil3124 has a SATA framework driver while
the SIL3114 uses the old IDE framework.
There is more than one version of the 3124, but I've not heard
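If it helps, one way to check which driver a card binds to (assuming the usual
Silicon Image PCI vendor ID of 1095; adjust the grep patterns for your setup):

  # which driver claims Silicon Image PCI IDs on this install
  grep -i 1095 /etc/driver_aliases

  # and which driver is actually attached on a running system
  prtconf -D | grep -i -e si3124 -e ata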
js == Joerg Schilling [EMAIL PROTECTED] writes:
js If it works for your system, be happy. I mentioned that the
js controller may not be usable in all systems as it hangs up the
js BIOS in my machine if there is a disk connected to the card.
There are three different chips under
On Mon, Sep 22, 2008 at 3:59 PM, Detlef [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
With Nevada Build 98 I am seeing a slow zpool import of my pool which
holds my user and archive data on my laptop.
I first noticed it during boot, when Solaris tells me to
mount zfs filesystems
tf == Tim Foster [EMAIL PROTECTED] writes:
tf anyone else have an opinion?
keep the number of snapshots small until the performance problems with
booting/importing/scrubbing while having lots of snapshots are
resolved.
Richard Elling wrote:
Tim Haley wrote:
Vincent Fox wrote:
Just make SURE the other host is actually truly DEAD!
If for some reason it's simply wedged, or you have lost console
access but hostA is still live, then you can end up with 2
systems having access to the same ZFS pool.
I
On Thu, Sep 25, 2008 at 07:40:09PM +0200, Joerg Schilling wrote:
There is no option to use the card+disk in a different machine.
That's a shame, that could tell you a lot.
I read that there might be a firmware upgrade but I have not been able to find
a download.
wm == Will Murnane [EMAIL PROTECTED] writes:
wm I'd rather have a working closed blob than a driver that is
wm Free Software for a device that is faulty. Ideals are very
wm nice, but broken hardware isn't.
except,
1. part of the reason the closed Solaris drivers are (also)
Miles Nordin wrote:
...
On Solaris there is one closed driver (LSI) and one open driver
(AHCI) that works sort-of well but not as well as Linux, and
doesn't support advanced features. The open driver isn't
obtainable as an add-on card and doesn't support port multipliers,
c 2. if the .vmdk's were stored in ZFS why was the corruption not
c flagged as a CKSUM error?
wm They were. From the OP:
NAME      STATE   READ WRITE CKSUM
testing   ONLINE     0     0    16
  mirror  ONLINE     0     0    16
This is on an x86 box running Solaris 10.
I created a RAIDZ using this command. These are new disks, previously under
hardware RAID control on an X4140:
zpool create -f -m /export/content export_content raidz c0t2d0 c0t3d0 c0t4d0
c0t5d0 c0t6d0 c0t7d0
Here is the output
ech3-mes02.prod:schadala[563] ~ $
jcm == James C McPherson [EMAIL PROTECTED] writes:
jcm I assume you're referring to mpt(7d) here?
jcm Since we started shipping it at all, with Solaris _8_, it's
jcm definitely been available in Solaris 10.
no, I was mistaken then.
My perhaps mistaken understanding though was that
I'm not sure if this is the right forum as we are running Solaris 10 (not
OpenSolaris). We do have all the latest patches.
I am trying to get a better understanding of where our memory is going. The
server is a T2000 with 8 GB RAM. I understand that the ARC is a major consumer
but it is only
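Two quick ways to see where it is going on Solaris 10 (both need root; depending
on the kernel patch level, ::memstat may or may not break out ZFS file data as
its own line):

  # current ARC size in bytes
  kstat -p zfs:0:arcstats:size

  # overall kernel / anon / page-cache / free breakdown
  echo "::memstat" | mdb -k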
But to be honest I don't wish for a driver for every chip---I'm
not trying to ``convert'' machines, I buy them specifically for
the task. I just want an open driver that works well for some
fairly-priced card I can actually buy. I'm willing to fight the
OEM problem:
Miles Nordin wrote:
jcm == James C McPherson [EMAIL PROTECTED] writes:
jcm I assume you're referring to mpt(7d) here?
jcm Since we started shipping it at all, with Solaris _8_, it's
jcm definitely been available in Solaris 10.
no, I was mistaken then.
My perhaps mistaken