Hi all,
Recently I got myself a new machine (Dell R710) with one internal Dell
SAS/i and two Sun HBAs (non-RAID).
From time to time this system just freezes, and I noticed that it always
freezes after this message (shown in /var/adm/messages):
scsi: [ID 107833 kern.warning] WARNING:
On 05/03/2010 02:52, Jason wrote:
So I tried to do a SAN copy of a (couple of) zpools/zfs volumes today, and I
failed.
Shut down the box, zoned it to the new storage, finalized the last data sync
from array x to array y, and turned the box on. The volumes didn't show
up. I issued a
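A typical next step in this situation (not necessarily the command Jason actually issued, since the message is cut off; the pool name is hypothetical) would be to rescan for importable pools:
zpool import          # scan attached devices and list pools that can be imported
zpool import tank     # import the pool by name once it shows up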
Greetings All
I have created a pool that consists of a hard disk and an SSD as a cache:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0    # cache device
I ran an OLTP benchmark to emulate a DBMS.
Once I ran the benchmark, the pool started creating the database file on the
SSD cache device.
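A quick way to watch whether those writes are actually landing on the cache device or on the pool disk, using the pool name from the commands above:
zpool iostat -v hdd 5   # per-vdev I/O statistics every 5 seconds; the cache device is listed separately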
On Fri, Mar 5, 2010 at 6:46 AM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Greetings All
I have created a pool that consists of a hard disk and an SSD as a cache:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0    # cache device
I ran an OLTP benchmark to emulate a DBMS.
Once I ran
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bruno Sousa
Sent: 5 March 2010 10:34
To: ZFS filesystem discussion list
Subject: [zfs-discuss] snv_133 mpt0 freezing machine
Hi all,
Recently I got myself a
Hi Geovanni
I was monitoring the SSD cache using zpool iostat -v like you said. The
cache device within the pool was showing persistent write IOPS during the
ten (1 GB) file creation phase of the benchmark.
The benchmark even gave an insufficient-space error and terminated, which
proves that it was
Mark J Musante wrote:
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and ad4p2.
There is a CR on this,
Hi Markus,
Thanks for your input. Regarding the Broadcom firmware, I already hit
that issue and have downgraded it.
However, for the Dell BIOS I couldn't find anything older than 1.2.6. Do
you by any chance have the URL for getting BIOS 1.1.4 like you say?
Bruno
On 5-3-2010 11:26, Markus Kovero
-Original Message-
From: Bruno Sousa [mailto:bso...@epinfante.com]
Sent: 5 March 2010 13:04
To: Markus Kovero
Cc: ZFS filesystem discussion list
Subject: Re: [zfs-discuss] snv_133 mpt0 freezing machine
Hi Markus,
Thanks for your input. Regarding the Broadcom firmware, I already
Victor,
Btw, they affect some files referenced by snapshots as
'zpool status -v' suggests:
tank/DVD:0x9cd tank/d...@2010025100:/Memento.m4v
tank/d...@2010025100:/Payback.m4v
tank/d...@2010025100:/TheManWhoWasntThere.m4v
In the case of OpenSolaris it is not that difficult to
On Fri, Mar 5, 2010 at 7:41 AM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Hi Geovanni
I was monitoring the SSD cache using zpool iostat -v like you said. The
cache device within the pool was showing persistent write IOPS during the
ten (1 GB) file creation phase of the benchmark.
The
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote:
We have an IMAP e-mail server running on a Solaris 10 10/09 system.
It uses six ZFS filesystems built on a single zpool with 14 daily
snapshots. Every day at 11:56, a cron command destroys the oldest
snapshots and creates new ones,
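A minimal sketch of that kind of rotation (pool and snapshot names are hypothetical):
# run from cron at 11:56: drop the oldest of the 14 dailies, then take a new one
zfs destroy -r imappool@2010-02-19
zfs snapshot -r imappool@2010-03-05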
In this email, when I say PERC, I really mean either a PERC, or any other
hardware WriteBack buffered raid controller with BBU.
For future server purchases, I want to know which is faster: (a) A bunch of
hard disks with PERC and WriteBack enabled, or (b) A bunch of hard disks,
plus one SSD
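If the SSD in option (b) is meant as a dedicated log device (the message is cut off, so this is an assumption), the comparison is roughly between a write-back controller cache and a setup like this, with hypothetical device names:
zpool create tank mirror c1t0d0 c1t1d0   # plain disks, controller cache disabled or write-through
zpool add tank log c2t0d0                # SSD as a dedicated ZIL (slog) device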
Edward Ned Harvey wrote:
In this email, when I say PERC, I really mean either a PERC, or any
other hardware WriteBack buffered raid controller with BBU.
For future server purchases, I want to know which is faster: (a) A
bunch of hard disks with PERC and WriteBack enabled, or (b) A bunch of
My full backup script errored out the last two times I ran it. I've got
a full Bash trace of it, so I know exactly what was done.
There are a moderate number of snapshots on the zp1 pool, and I'm
intending to replicate the whole thing into the backup pool.
After housekeeping, I make a
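A minimal sketch of that kind of whole-pool replication (the snapshot name is hypothetical; zp1 and the backup pool come from the message):
zfs snapshot -r zp1@backup-20100305                        # recursive snapshot of everything in zp1
zfs send -R zp1@backup-20100305 | zfs receive -Fd backup   # replicate the pool, including existing snapshots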
Bruno Sousa on Fri, Mar 05, 2010 at 09:34:19AM +0100 wrote:
Hi all,
Recently I got myself a new machine (Dell R710) with one internal Dell
SAS/i and two Sun HBAs (non-RAID).
From time to time this system just freezes, and I noticed that it always
freezes after this message (shown in the
I am attempting to follow the recipe in:
http://blogs.sun.com/sa/entry/hotplugging_sata_drives
The recipe copies the VTOC from the old drive to the new drive and then does an
attach. When I get to the attach, the partition slices on the new drive
overlap (the partition slices on the old drive
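For reference, a VTOC-copy-then-attach sequence of the kind the recipe describes typically looks like this (device names are borrowed from later in the thread; the blog's exact steps may differ):
prtvtoc /dev/rdsk/c7t0d0s2 | fmthard -s - /dev/rdsk/c7t1d0s2   # copy the label from the old drive to the new one
zpool attach rpool c7t0d0s0 c7t1d0s0                           # then attach the matching slice to the mirror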
Seems like it...and the workaround doesn't help it.
Bruno
On 5-3-2010 16:52, Mark Ogden wrote:
Bruno Sousa on Fri, Mar 05, 2010 at 09:34:19AM +0100 wrote:
Hi all,
Recently I got myself a new machine (Dell R710) with one internal Dell
SAS/i and two Sun HBAs (non-RAID).
From time to time
Hi David,
I think installgrub is unhappy that no s2 exists on c7t1d0.
I would detach c7t1d0s0 from the pool and follow these steps
to relabel/repartition this disk:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Replacing/Relabeling the Root Pool Disk
Then, reattach
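Roughly, those steps amount to the following (the relabeling details are in the guide):
zpool detach rpool c7t1d0s0    # remove the half-configured disk from the pool
format c7t1d0                  # put an SMI label on it and recreate slice 0 per the guide
zpool attach rpool c7t0d0s0 c7t1d0s0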
Hi,
I have tried what dedup does on a test dataset that I have filled with 372 GB
of partly redundant data. I have used snv_133. All in all, it was successful.
The net data volume was only 120 GB. Destruction of the dataset finally took a
while, but without any compromise of anything else.
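For anyone repeating this kind of test, the achieved ratio can be read back from the pool afterwards (pool and dataset names are hypothetical):
zfs create -o dedup=on tank/ddtest   # test dataset with dedup enabled
zpool get dedupratio tank            # roughly 3.1x would match the 372 GB vs. 120 GB reported above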
On Fri, Mar 5, 2010 at 10:49 AM, Tonmaus sequoiamo...@gmx.net wrote:
Hi,
I have tried what dedup does on a test dataset that I have filled with 372 GB
of partly redundant data. I have used snv_133. All in all, it was successful.
The net data volume was only 120 GB. Destruction of the
OK - I tried to follow the troubleshooting instructions, but I ran into the
same problem at step 5, the attach:
init...@dogpatch:~# zpool attach rpool c7t0d0s0 c7t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7t1d0s0 overlaps with /dev/dsk/c7t1d0s2
and,
OK, -f worked:
init...@dogpatch:~# zpool attach -f rpool c7t0d0s0 c7t1d0s0
Please be sure to invoke installgrub(1M) to make 'c7t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
init...@dogpatch:~# zpool status rpool
pool: rpool
state: ONLINE
status: One or more
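The installgrub invocation that the attach output asks for would typically be:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0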
erik.trim...@sun.com said:
All J4xxx systems are really nothing more than huge SAS expanders hooked to
a bunch of disks, so cache flush requests will either come from ZFS or any
attached controller. Note that I /think/ most non-RAID controllers don't
initiate their own cache flush
Marion - Do you happen to know which SAS HBA it applies to?
Original question:
I have a Solaris x86 server running update 6 (Solaris 10 10/08
s10x_u6wos_07b X86). I recently hit this sparse file bug when I deleted a
512GB sparse file from a 1.2TB filesystem and the space was never freed up.
What I am asking is would there be any way to recover the space
bene...@yahoo.com said:
Marion - Do you happen to know which SAS HBA it applies to?
Here's the article:
http://sunsolve.sun.com/search/document.do?assetkey=1-66-248487-1
The title is "Write-Caching on JBOD SATA Drive is Erroneously Enabled
by Default When Connected to Non-RAID SAS HBAs."
By
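One way to check (and, if desired, disable) the drive write cache from Solaris, assuming format -e exposes the cache menu for the drive in question (device name hypothetical):
format -e c8t3d0
# at the format> prompt: cache -> write_cache -> display (or disable)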
Hi,
In your case, there are two other aspects:
- if you pool small devices as JBODs below a vdev member, no superordinate
parity will help you when you lose a member of the underlying JBOD. The whole
pool will just be broken, and you will lose a good part of your data.
No,
Hi,
so, what would be a critical test size in your opinion? Are there any other
side conditions?
For instance, I am not using any snapshots and have also turned off automatic
snapshots, because I was bitten by system hangs while destroying datasets with
live snapshots.
I am also aware that Fishworks
Hello all,
I am using OpenSolaris 2009.06 (snv_129).
I have a quick question. I have created a zpool on a sparse file, for
instance:
zpool create stage c10d0s0
mount it to /media/stage
mkfile -n 500GB /media/stage/disks/disk.img
zpool create zfsStage /media/stage/disks/disk.img
I want to be able to
Hi Greg,
You are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751
Currently, building a pool from files is not fully supported.
Thanks,
Cindy
On 03/05/10 16:15, Gregory Durham wrote:
Hello all,
I am using OpenSolaris 2009.06 (snv_129).
I have a quick
Great... will using lofiadm still cause this issue? Either by using mkfile or
by using dd to make a sparse file? Thanks for the heads up!
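A sketch of the lofiadm route being asked about (paths taken from the earlier message):
lofiadm -a /media/stage/disks/disk.img   # attach the file as a block device, e.g. /dev/lofi/1
zpool create zfsStage /dev/lofi/1        # build the pool on the lofi device instead of the raw file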
On Fri, Mar 5, 2010 at 3:48 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Greg,
You are running into this bug:
On Fri, Mar 5, 2010 at 4:48 PM, Tonmaus sequoiamo...@gmx.net wrote:
Hi,
so, what would be a critical test size in your opinion? Are there any other
side conditions?
when your dedup hash table (a table that holds a checksum of every block
seen on filesystems/zvols after dedup was enabled)
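That table can be inspected, or estimated in advance, with zdb (pool name hypothetical):
zdb -DD tank   # print DDT statistics for a pool that already has dedup enabled
zdb -S tank    # simulate dedup on existing data to estimate the table before enabling it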
Please post the output of zpool status -v.
Thanks
James Dickens
On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Greetings All
I have created a pool that consists of a hard disk and an SSD as a cache:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0 -
Tim Cook wrote:
On Fri, Mar 5, 2010 at 8:41 PM, Dan Dascalescu
bigbang7+opensola...@gmail.com wrote:
Thanks for your suggestions.
In the meantime I had found this case and PSU - what do folks think?
Antec Twelve Hundred Gaming Case -
Hi, Erik,
I've always wondered what the benefit (and difficulty of adding it to ZFS) would
be of having an async write cache for ZFS - that is, ZFS currently buffers
async writes in RAM until it decides to aggregate enough of them to flush
to disk. I think it would be interesting to see what would
[moved off osol-discuss]
Zhu Han wrote:
Hi, Erik,
I've always wondered what the benefit (and difficulty of adding it to
ZFS) would be of having an async write cache for ZFS - that is,
ZFS currently buffers async writes in RAM until it decides to
aggregate enough of them to flush to