Hello all,
Did you do an install onto the USB stick, or did you
use the Distribution Constructor (DC)?
Leal.
I did both:
(1) installed the os0805 (build 86) LiveCD onto a USB stick, booted from the
USB stick, ran image-update to build 95, then the reboot (cold or warm) failed.
(2) Install
Bob Friesenhahn schrieb:
On Tue, 21 Oct 2008, Håvard Krüger wrote:
Is it possible to build a RaidZ with 3x 1TB disks and 5x 0.5TB disks,
and then swap out the 0.5TB disks as time goes by? Is there any
documentation/wiki on doing this?
Yes, you can build a raidz vdev with all of these
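To make that concrete, a minimal sketch (device names are hypothetical). Note
that a raidz vdev is limited by its smallest member, so the 1TB disks
contribute only 0.5TB each until all of the small disks are replaced:

  # create the 8-disk raidz (3x 1TB + 5x 0.5TB)
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      c1t4d0 c1t5d0 c1t6d0 c1t7d0
  # later, swap a 0.5TB disk for a 1TB one and let it resilver
  zpool replace tank c1t3d0 c2t0d0
  zpool status tank

Once every member has been replaced with a 1TB disk, the vdev can grow to the
larger size.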
Bill Sommerfeld wrote:
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size and
ZFS' causes some performance problems for Thunderbird users.
I was seeing a severe performance problem with sqlite3 databases as used
by
Fascinating link, thanks for posting it.
I was writing a nice long reply about what this means for usage, but I've just
been on the Fusion-io web site, and it appears they have updated their
documentation. They now state:
Write MB/s: 600
Read MB/s: 700
Read IOPS: 102,000 (sustained 4k
On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
I'm assuming this is local filesystem rather than ZFS backed NFS (which
is what I have).
Correct, on a laptop.
What has setting the 32KB recordsize done for the rest of your home
dir, or did you give the evolution directory its own
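In case it helps anyone else, a sketch of giving it its own filesystem (the
dataset path is hypothetical; recordsize only affects newly written blocks):

  zfs create -o recordsize=32k rpool/export/home/user/evolution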
On 21 October, 2008 - Dave Bevans sent me these 11K bytes:
Hi,
I have a customer with the following question...
She's trying to combine 2 ZFS 460gb disks into one 900gb ZFS disk. If
this is possible, how is it done? Is there any documentation on this
that I can provide to them?
You
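If plain capacity with no redundancy is acceptable, a minimal sketch (device
names are hypothetical) would be a dynamic stripe of both disks:

  # yields roughly 900GB usable, but losing either disk loses the pool
  zpool create bigpool c1t0d0 c1t1d0
  zpool list bigpool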
On Wed, 22 Oct 2008, Thomas Maier-Komor wrote:
But in this case one should be aware that if one adds another vdev, it
is currently impossible to get rid of it afterwards. I.e., the pool will
always have two raidz vdevs, and the new vdev would in this
scenario consist of 3x 1TB disks
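A sketch of the one-way operation being described (hypothetical device names);
there is currently no way to remove a top-level raidz vdev once added:

  # grows the pool by a second raidz vdev - this cannot be undone
  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0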
Hi,
I have a triple boot amd64 Linux/FreeBSD/OpenSolaris box used for Q/A. It is
in a data center where I don't have easy physical access to the machine. It
was working fine for months; now I see this at boot time on the serial console:
SunOS Release 5.11 Version snv_86 64-bit
Copyright
Reboot to the grub menu.
Move to the failsafe kernel entry.
Tap e to edit the entry.
Go to the kernel entry and tap e again.
Append -kv to the end of the line.
Accept, and tap b to boot the line.
After some output you will be prompted to mount the root pool on /a - enter
y to accept.
You will then get a
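For illustration, the edited kernel line would then look something like this
(the $ZFS-BOOTFS form is taken from a stock OpenSolaris 2008.05 menu.lst):

  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -kv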
On Wed, Oct 22, 2008 at 2:35 AM, Dave Bevans [EMAIL PROTECTED] wrote:
Hi,
I have a customer with the following question...
She's trying to combine 2 ZFS 460gb disks into one 900gb ZFS disk. If this
is possible, how is it done? Is there any documentation on this that I can
provide to
As jritorto is noting, I think the issue here is whether the fix has been
backported to Solaris 10 5/08 or 10/08. It's a nasty problem to run into on a
production machine. In my case, I'm restoring from tape because my pool became
corrupt while waiting for resilvers to finish, which were getting
did you follow the instructions for updating grub after the image-update:
http://opensolaris.org/jive/thread.jspa?messageID=277115&tstart=0
Johan Hartzenberg wrote:
Reboot to the grub menu
Move to the failsafe kernel entry
Ugh. This is OpenSolaris (Indiana), and there *is* no failsafe
as far as I can tell. There is one grub entry for Solaris:
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.05
On 10/22/08 09:02 AM, Andrew Gallatin wrote:
Johan Hartzenberg wrote:
Reboot to the grub menu
Move to the failsafe kernel entry
Ugh. This is OpenSolaris (Indiana), and there *is* no failsafe
as far as I can tell. There is one grub entry for Solaris:
#-- ADDED BY BOOTADM - DO
Neal Pollack wrote:
Simple: the equivalent of failsafe for OpenSolaris is to boot the live-cd,
then manually mount your disk drive.
Yuck. The lack of a failsafe is a *huge* step backwards, considering how
fragile the ZFS root seems to be. The idea of having to have somebody on-site
at a
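For reference, the live-cd route boils down to something like this (pool and
boot-environment names are assumptions; mountpoint details vary by build):

  # from a live-cd shell, import the root pool under an alternate root
  pfexec zpool import -f -R /a rpool
  # then mount the boot environment's root dataset for repair
  pfexec zfs mount rpool/ROOT/opensolaris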
Hi,
On a busy NFS server, performance tends to be very modest for large amounts
of small files due to the well known effects of ZFS and ZIL honoring the
NFS COMMIT operation[1].
For the mature sysadmin who knows what (s)he is doing, there are three
possibilities:
1. Live with it. Hard, if you see
On 10/22/08 10:26, Constantin Gonzalez wrote:
Hi,
On a busy NFS server, performance tends to be very modest for large amounts
of small files due to the well known effects of ZFS and ZIL honoring the
NFS COMMIT operation[1].
For the mature sysadmin who knows what (s)he is doing, there are
Leave the default recordsize. With 128K recordsize,
files smaller than
If I turn zfs compression on, does the recordsize influence the compressratio
in any way?
On Wed, 22 Oct 2008, Neil Perrin wrote:
On 10/22/08 10:26, Constantin Gonzalez wrote:
3. Disable ZIL[1]. This is of course evil, but one customer pointed out to me
that if a tar xvf were writing locally to a ZFS file system, the writes
wouldn't be synchronous either, so there's no
On Wed, 22 Oct 2008, Mika Borner wrote:
Leave the default recordsize. With 128K recordsize,
files smaller than
If I turn zfs compression on, does the recordsize influence the
compressratio in any way?
Yes, I believe so. ZFS is not going to try to compress a chunk of
data larger than the
For what it is worth, I ended up using Linux to dd the Solaris partition from
an identical machine.
I realize that ZFS is a huge step forward on a huge number of fronts, but the
boot process has got to improve, or else it should not be offered as a root
filesystem. Even in the bad old days of
did you follow the instructions for updating grub
after the image-update:
http://opensolaris.org/jive/thread.jspa?messageID=277115&tstart=0
Yes. Also, there is no need to do the grub update when installing directly from
the os0811_95 LiveDVD. Further, compared to the USB stick, I don't
I agree with you, Constantin, that the sync is a performance problem; at the
same time I think that in an NFS environment it is simply *required*. If the
sync can be relaxed in a specific NFS environment, my first thought is that
NFS is not necessary in that environment in the first place.
IMHO a
Hi,
As part of the next stages of the time-slider project we are looking into
doing actual backups onto removable media devices such as USB media. The goal
is to be able to view snapshots stored on the media and merge these into the
list of viewable snapshots in nautilus, giving the user a
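The obvious building block here is zfs send/receive into a pool on the
removable media; a rough sketch with hypothetical names:

  # one-time: create a pool on the USB disk
  zpool create backup c5t0d0
  # copy a snapshot of the home dataset onto the media
  zfs send rpool/export/home@2008-10-22 | zfs receive backup/home
  # later runs only send the delta between two snapshots
  zfs send -i @2008-10-22 rpool/export/home@2008-10-23 | zfs receive backup/home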
On Tue, Oct 21, 2008 at 05:50:09AM -0700, Marcelo Leal wrote:
If I have many small files (smaller than 128K), wouldn't I be wasting
time reading 128K? And after ZFS has allocated an FSB of 64K, for
example, if that file gets bigger, ZFS will use 64K blocks, right?
ZFS uses the smallest
Well, it might be even more of a bodge than disabling the ZIL, but how about:
- Create a 512MB ramdisk, use that for the ZIL
- Buy a Micro Memory nvram PCI card for £100 or so.
- Wait 3-6 months, hopefully buy a fully supported PCI-e SSD to replace the
Micro Memory card.
The ramdisk isn't an
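For anyone who wants to try the ramdisk variant, a sketch (with the obvious
caveat that a volatile slog defeats the point of the ZIL, so treat this as a
benchmarking tool only):

  # create a 512MB ramdisk and attach it to the pool as a slog
  ramdiskadm -a zildisk 512m
  zpool add tank log /dev/ramdisk/zildisk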
Hello Richard,
Wednesday, October 15, 2008, 6:39:49 PM, you wrote:
RE Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi
Bah, I've done it again. I meant use it as a slog
device, not as the ZIL...
But the slog is the ZIL, formally a *separate* intent log. What's the matter? I
think everyone understood. I think you confused ZIL and L2ARC some threads
back. That is a different thing.. ;-)
On Wed, 2008-10-22 at 10:45 -0600, Neil Perrin wrote:
Yes: 6280630 zil synchronicity
Though personally I've been unhappy with the exposure that zil_disable has
got.
It was originally meant for debug purposes only. So providing an official
way to make synchronous behaviour asynchronous is
On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
If I turn zfs compression on, does the recordsize influence the
compressratio in any way?
zfs conceptually chops the data into recordsize chunks, then compresses
each chunk independently, allocating on disk only the space needed to
store each
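A quick way to see the effect is to write the same compressible file into two
datasets with different recordsizes and compare (dataset names hypothetical):

  zfs create -o compression=on -o recordsize=128k tank/c128
  zfs create -o compression=on -o recordsize=8k tank/c8
  # copy identical data into both filesystems, then:
  zfs get compressratio tank/c128 tank/c8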
Hello there,
It's not a wiki, but it has many considerations about your question:
http://www.opensolaris.org/jive/thread.jspa?threadID=78841&tstart=60
Leal.
But the slog is the ZIL, formally a *separate* intent log.
No, the slog is not the ZIL!
Here's the definition of the terms as we've been trying to use them:
ZIL:
The body of code that supports synchronous requests, which writes
out to the Intent Logs
Intent Log:
A stable
Robert Milkowski wrote:
Hello Richard,
Wednesday, October 15, 2008, 6:39:49 PM, you wrote:
RE Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
Hi,
I recently had to reinstall OpenSolaris on my home server, after I managed to
break the install while fiddling around with the new package system.
Anyway, the day I wanted to do the reinstall one of my hard disks broke down
with a head crash, leaving the pool in the following state:
But the slog is the ZIL, formally a *separate*
intent log.
No, the slog is not the ZIL!
Ok, when you wrote this:
I've been slogging for a while on support for separate intent logs (slogs)
for ZFS. Without slogs, the ZIL is allocated dynamically from the main pool.
You were talking
On 10/22/08 13:56, Marcelo Leal wrote:
But the slog is the ZIL, formally a *separate*
intent log.
No, the slog is not the ZIL!
Ok, when you wrote this:
I've been slogging for a while on support for separate intent logs (slogs)
for ZFS.
Without slogs, the ZIL is allocated dynamically
[EMAIL PROTECTED]:/tmp# cat /var/svc/log/system-iscsitgt\:default.log
[ Oct 21 09:17:49 Enabled. ]
[ Oct 21 09:17:49 Executing start method (/lib/svc/method/svc-iscsitgt
start). ]
[ Oct 21 09:17:49 Method start exited with status 0. ]
[ Oct 21 17:02:12 Disabled. ]
[ Oct 21 17:02:12 Rereading
david lacerte wrote:
Oracle on ZFS best practice? docs? blogs? Any recent/new info related
to Running Oracle 10g and/or 11g on ZFS Solaris 10?
We try to keep the wikis up to date.
ZFS Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS for
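The headline recommendation from those guides is to match the recordsize of
the datafile filesystem to the database block size before loading any data,
for example (dataset names are illustrative):

  # Oracle datafiles with an 8K db_block_size
  zfs create -o recordsize=8k tank/oradata
  # redo logs are large sequential writes; the default 128K is fine
  zfs create tank/oralogs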
cg == Constantin Gonzalez [EMAIL PROTECTED] writes:
cg if a tar xvf were writing locally to a ZFS file system, the
cg writes wouldn't be synchronous either, so there's no point in
cg forcing NFS users to have a better
It's worse for NFS because breaking the commit/lease/batch
On Wed, 22 Oct 2008, Miles Nordin wrote:
I thought NFSv2 -> NFSv3 was supposed to make this prestoserv, SSD,
battery-backed DRAM stuff not needed for good performance any more. I
guess not though.
The intent was to allow the server to be able to buffer up more
uncommitted data before the
As it happens, I'm currently involved with a project doing some performance
analysis for this... but it is currently a WIP. Comments below.
Robert Milkowski wrote:
Hello Adam,
Tuesday, October 21, 2008, 2:00:46 PM, you wrote:
ANC We're using a rather large (3.8TB) ZFS volume for our
Constantin Gonzalez wrote:
Hi,
On a busy NFS server, performance tends to be very modest for large amounts
of small files due to the well known effects of ZFS and ZIL honoring the
NFS COMMIT operation[1].
For the mature sysadmin who knows what (s)he is doing, there are three
possibilities:
[Default] On Tue, 21 Oct 2008 15:43:08 -0400, Bill Sommerfeld
[EMAIL PROTECTED] wrote:
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size
and ZFS' causes some performance problems for Thunderbird users.
I was seeing
On Wed, Oct 22, 2008 at 04:46:00PM -0400, Miles Nordin wrote:
I thought NFSv2 -> NFSv3 was supposed to make this prestoserv, SSD,
battery-backed DRAM stuff not needed for good performance any more. I
guess not though.
There are still a number of operations in NFSv3 and NFSv4 which the
client
On Wed, Oct 22, 2008 at 11:05:09PM +0200, Kees Nuyt wrote:
[Default] On Tue, 21 Oct 2008 15:43:08 -0400, Bill Sommerfeld
[EMAIL PROTECTED] wrote:
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size
and ZFS'
On 10/22/08 13:56, Marcelo Leal wrote:
But the slog is the ZIL, formally a *separate*
intent log.
No, the slog is not the ZIL!
Ok, when you wrote this:
I've been slogging for a while on support for
separate intent logs (slogs) for ZFS.
Without slogs, the ZIL is allocated
On Wed, Oct 22, 2008 at 04:31:43PM -0500, Nicolas Williams wrote:
On Wed, Oct 22, 2008 at 11:05:09PM +0200, Kees Nuyt wrote:
Just a remark:
Increasing the SQLite page_size while keeping the same
[default_]cache_size will effectively increase the amount of memory
allocated to the SQLite
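To illustrate (database path hypothetical): page_size only takes effect on a
new or freshly vacuumed database, and cache_size is counted in pages, so the
memory footprint is page_size * cache_size:

  sqlite3 test.db 'PRAGMA page_size = 8192; VACUUM;'
  # ~16MB of cache: 2000 pages * 8K
  sqlite3 test.db 'PRAGMA default_cache_size = 2000;'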
The firmware upgrade of the PERC raid controller was unsuccessful.
I tested with a Vista 64-bit initiator and it experienced the same issue as the
VMware initiators. If someone has a PowerEdge 1850/1900/1950 server that has
been successful, please let me know what you've done, or share your hardware profile.
Hi Richard,
On Qua, 2008-10-22 at 14:04 -0700, Richard Elling wrote:
It is more important to use a separate disk, than to use a separate and fast
disk. Anecdotal evidence suggests that using a USB hard disk works
well.
While I don't necessarily disagree with your statement, please note that
Ricardo M. Correia wrote:
Hi Richard,
On Qua, 2008-10-22 at 14:04 -0700, Richard Elling wrote:
It is more important to use a separate disk, than to use a separate and fast
disk. Anecdotal evidence suggests that using a USB hard disk works
well.
While I don't necessarily disagree
Well the '/var/svc/log/system-iscsitgt\:default.log'
is NOT showing any core dumps, which is good, but
means that we need to dig deeper for the answer.
The 'iscsisnoop.d' output does look similar to that
captured by Eugene over on the storage forum, but
Eugene only showed a short
Hi Tano
I will have a look at your snoop file.
(Tomorrow now, as it's late in the UK!)
I will send you my email address.
Thanks
Nigel Smith
On Tue, 21 Oct 2008, Pramod Batni wrote:
Why does creating a new ZFS filesystem require enumerating all existing
ones?
This is to determine if any of the filesystems in the dataset are mounted.
Ok, that leads to another question: why does creating a new ZFS filesystem
require determining