After around four days the process appeared to have stalled (no
audible hard drive activity). I restarted with milestone=none, deleted
/etc/zfs/zpool.cache, restarted, and ran zpool import tank. (I also
allowed root login over ssh, so I could open new ssh sessions if
required.) Now I can watch the progress.
The problem has been resolved by Victor.
Thank you again for your time and effort yesterday.
I don't think I would have ever been able to get my data back without your
level of expertise and hands-on approach.
As discussed last night, the important data has been backed up already, and come
Monday
On Feb 13, 2010, at 10:54 AM, Edward Ned Harvey wrote:
> > Please add some raidz3 tests :-) We have little data on how raidz3
> > performs.
>
> Does this require a specific version of OS? I'm on Solaris 10 10/09, and
> "man zpool" doesn't seem to say anything about raidz3 ... I haven't tried
Don't use raidz for the raid type - go with a striped set
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
So, one of the tricks I've used in the past is to assign a volname in
format(1M) to the LUNs I use. Dunno if that's an option with ASM? ZFS seems
to blow those away, the last time I looked.
-A
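For reference, the format(1M) trick mentioned above looks roughly like this (device name and label text are hypothetical; volname is limited to 8 characters):

```shell
# Give the LUN a short volume name in its disk label so it is
# identifiable in format's disk list from then on.
format c2t0d0
#   format> volname ORADATA1
#   format> label
#   format> quit
```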
On Feb 13, 2010, at 14:32 , Jason King wrote:
> My problem is when you have 100+ luns divided between OS an
My problem is that when you have 100+ luns divided between OS and DB,
keeping track of which is for what can become problematic. It becomes
even worse when you start adding luns -- the chance of accidentally
grabbing a DB lun instead of one of the new ones is non-trivial (then
there's also the chance th
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
> kind as to collect samples of "iosnoop -Da" I would be eternally
> grateful :-)
I'm guessing iosnoop is an opensolaris thing? Is there an equivalent for
solaris?
Iosnoop is part of the DTrace Toolkit by Brendan Gregg, which does
work on Solaris.
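A quick sketch of running it, assuming the toolkit tarball has been unpacked in the current directory (it is a plain DTrace script, so it needs root or DTrace privileges, but nothing OpenSolaris-specific):

```shell
# The flags match the request quoted above; per the toolkit's own docs,
# -D prints time deltas and -a prints all details.
./iosnoop -Da
```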
> There is of course the caveat of using raw devices with databases (it
> becomes harder to track usage, especially as the number of LUNs
> increases, slightly less visibility into their usage statistics at the
> OS level). However perhaps now someone can implement the CR I filed
> a long time
On Sat, Feb 13, 2010 at 9:58 AM, Jim Mauro wrote:
> Using ZFS for Oracle can be configured to deliver very good performance.
> Depending on what your priorities are in terms of critical metrics, keep in
> mind
> that the most performant solution is to use Oracle ASM on raw disk devices.
> That is
> IMHO, sequential tests are a waste of time. With default configs, it
> will be
> difficult to separate the "raw" performance from prefetched
> performance.
> You might try disabling prefetch as an option.
Let me clarify:
Iozone does a nonsequential series of sequential tests, specifi
We recently patched our X4500 from Sol10 U6 to Sol10 U8 and have not noticed
anything like what you're seeing. We do not have any SSD devices installed.
I have a similar situation. I have a system that is used for backup copies of
logs and other non-critical things, where the primary copy is on a Netapp. Data
gets written in batches a few times a day. We use this system because storage
on it is a lot less expensive than on the Netapp. It's only
One shows pool size, one shows filesystem size.
The pool size is based on raw space.
The zfs list size shows how much is used and how much usable space is
available.
For instance, I use raidz2 with 1TB drives, so if I do zpool list I see ALL
the space, including parity, but if I do zfs list I only see the usable space.
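A back-of-envelope check of that accounting, assuming a 6-drive raidz2 of 1TB disks (the drive count is my assumption; the poster didn't say):

```shell
# zpool list reports raw capacity; zfs list reports roughly what is
# usable after subtracting the two parity drives' worth of space.
DRIVES=6
SIZE_TB=1
RAW_TB=$((DRIVES * SIZE_TB))            # roughly what 'zpool list' shows
USABLE_TB=$(((DRIVES - 2) * SIZE_TB))   # roughly what 'zfs list' shows
echo "raw=${RAW_TB}TB usable=${USABLE_TB}TB"
```

(Real numbers will be a bit lower still, since ZFS reserves some space for metadata.)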
I have the following pool:
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
OIRT   6.31T  3.72T  2.59T  58%  ONLINE  /
"zfs list" shows the following for a typical file system:
NAME                   USED   AVAIL  REFER  MOUNTPOINT
OIRT/sakai/production  1.40T  1.77T  1.40T  /OIRT/sakai/produc
On Sat, 13 Feb 2010, Bob Friesenhahn wrote:
Make sure to also test with a command like
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
since it creates a 4GB test file for each thread, with 8 threads.
Bob
--
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
Will test, including the time to flush(), various record sizes inside file
sizes up to 16G,
sequential write and sequential read. Not doing any mixed read/write
requests. Not doing any
random read/write.
iozone -Reab somefile.wks -g 17G -i 1 -i
Using ZFS for Oracle can be configured to deliver very good performance.
Depending on what your priorities are in terms of critical metrics, keep
in mind
that the most performant solution is to use Oracle ASM on raw disk devices.
That is not intended to imply anything negative about ZFS or UFS.
Some thoughts below...
On Feb 13, 2010, at 6:06 AM, Edward Ned Harvey wrote:
> I have a new server, with 7 disks in it. I am performing benchmarks on it
> before putting it into production, to substantiate claims I make, like
> “striping mirrors is faster than raidz” and so on. Would anybody
On Feb 13, 2010, at 5:23 AM, Tony MacDoodle wrote:
> Was wondering if anyone has had any performance issues with Oracle running on
> ZFS as compared to UFS?
The ZFS for Databases wiki is the place to collect information and advice
for database on ZFS.
http://www.solarisinternals.com/wiki/index
comment below...
On Feb 12, 2010, at 2:25 PM, TMB wrote:
> I have a similar question, I put together a cheapo RAID with four 1TB WD
> Black (7200) SATAs, in a 3TB RAIDZ1, and I added a 64GB OCZ Vertex SSD, with
> slice 0 (5GB) for ZIL and the rest of the SSD for cache:
> # zpool status dpool
>
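A sketch of the layout described above, with hypothetical device names: four SATA disks in raidz1, plus one SSD split into a small slice for the separate intent log and a larger slice for the read cache:

```shell
zpool create dpool raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool add dpool log c2t0d0s0      # ~5GB SSD slice as the ZIL device
zpool add dpool cache c2t0d0s1    # remainder of the SSD as L2ARC cache
zpool status dpool                # shows the log and cache vdevs
```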
Mark J Musante wrote:
On Thu, 11 Feb 2010, Cindy Swearingen wrote:
On 02/11/10 04:01, Marc Friesacher wrote:
fr...@vault:~# zpool import
pool: zedpool
id: 10232199590840258590
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zedpool
I have a new server, with 7 disks in it. I am performing benchmarks on it
before putting it into production, to substantiate claims I make, like
"striping mirrors is faster than raidz" and so on. Would anybody like me to
test any particular configuration? Unfortunately I don't have any SSD, so I
Was wondering if anyone has had any performance issues with Oracle running
on ZFS as compared to UFS?
Thanks
I just have to say this, and I don't mean it in a bad way... If you
really care about your data, why then use USB drives with loose cables and
(apparently) no backup?
USB connected drives are okay for data backup; for playing around and
getting to know ZFS they also seem okay. Using it for onlin
I had a very similar problem. 8 external USB drives running OpenSolaris
native. When I moved the machine into a different room and powered it back
up (there were a couple of reboots and a couple of broken usb cables and
drive shut downs in between), I got the same error. Losing that much data
is d
Thanks Brendan,
I was going to move it over to 8kb block size once I got through this index
rebuild. My thinking was that a disproportionate block size would show up as
excessive IO thruput, not a lack of thruput.
The question about the cache comes from the fact that the 18GB or so that it
says is
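For the record-size move mentioned above, a sketch (the dataset name is hypothetical; note that recordsize only affects blocks written after the change, so it should be set before reloading the data files):

```shell
zfs set recordsize=8k tank/oracle/data   # match the DB's 8k block size
zfs get recordsize tank/oracle/data      # confirm the setting took
```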