Joerg Schilling wrote:
Darren J Moffat [EMAIL PROTECTED] wrote:
There is a difference though as far as I can tell. Sometimes on Solaris
we have p? for fdisk partitioning included and sometimes we don't;
similarly we sometimes don't have t? for target. Personally I'd prefer
us to be
The question put forth is whether the ZFS 128K blocksize is sufficient
to saturate a regular disk. There is a great body of evidence showing
that bigger write sizes and a matching large FS cluster size lead
to more throughput. The counterpoint is that ZFS schedules its I/O
like nothing
Gregory Shaw writes:
I really like the below idea:
- the ability to defragment a file 'live'.
I can see instances where that could be very useful. For instance,
if you have multiple LUNs (or spindles, whatever) using ZFS, you
could re-optimize large files to spread
We're aiming for a release of bootable zfs in update 4.
So it won't be available in SXCR very soon. However,
we might be able to supply some parts of it, or at least
some improvements to the manual method, via Tabriz's blog
before then. We'll have to look at it.
Lori
Dick Davies wrote:
Hi,
There are OpenSolaris project proposals from students that need a ZFS
aware mentor. Both need a mentor that understands the ZFS code base.
This does NOT need to be a Sun Employee and I know there are a growing
number of people out there that are familiar with the code base.
One proposal is
Rich, correct me if I'm wrong, but here's the scenario I was thinking
of:
- A large file is created.
- Over time, the file grows and shrinks.
The anticipated layout on disk due to this is that extents are
allocated as the file changes. The extents may or may not be on
multiple spindles.
The problem I see with sequential access jumping all over the place is
that it increases the utilization of the disks -
over the years disks have become even faster for sequential access,
whereas random access (as they have
to move the actuator) has not improved at the same pace - this is what
On Sat, May 13, 2006 at 03:59:00PM -0400, Dennis Clarke wrote:
Inside the foo zone I see that /etc/default/fs says :
LOCAL=ufs
Shouldn't this be LOCAL=zfs ? Or will that cause problems ?
All this determines is the default filesystem type for mount(1M), fsck(1M),
etc. Given that you
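For reference, the file in question is a tiny key/value defaults file; a typical copy looks like this (contents assumed typical, not taken from the poster's zone):

```shell
# /etc/default/fs -- consulted by mount(1M), fsck(1M), etc. when no
# FSType is given on the command line or in /etc/vfstab
LOCAL=ufs
```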
Nicolas Williams wrote:
On Sat, May 13, 2006 at 08:23:55AM +0200, Franz Haberhauer wrote:
Given that ISV apps can only be changed by the ISV, who may or may not be
willing to
use such a new interface, having a no-cache property for the file - or
given that filesystems
are now really cheap
On Mon, May 15, 2006 at 10:13:51AM -0700, Jonathan Adams wrote:
All this determines is the default filesystem type for mount(1M), fsck(1M),
etc. Given that you use zfs(1M) for all that kind of manipulation,
it seems like this is not a huge deal.
Note that if you are a fan of legacy
Franz Haberhauer wrote:
This would work technically, but whether ISVs are willing to support such
usage is a different
topic (there may be startup scripts involved, making it a little tricky
to pass a library path
to the app).
Yet another reason to start the applications from SMF.
--
Darren
The reason this is pertinent to ZFS is that you can do things with a
ZFS file that you can't with other file systems. In particular, with
ZFS you can use fcntl to free space from the file. For example, a ZFS
file can be used quite nicely to implement a circular queue. Instead of
having to
On Mon, May 15, 2006 at 07:16:38PM +0200, Franz Haberhauer wrote:
Nicolas Williams wrote:
Yes, but remember, DB vendors have adopted new features before -- they
want to have the fastest DB. Same with open source web servers. So I'm
a bit optimistic.
Yes, but they usually adopt it only
On Mon, May 15, 2006 at 11:17:17AM -0700, Bart Smaalders wrote:
Perhaps an fadvise call is in order?
We already have directio(3C).
(That was a surprise for me also.)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Hi there
Can a ZFS root participate in Live Upgrade, that is does luupgrade
understand a ZFS root?
T
Tabriz Leman wrote:
All,
For those who haven't already gone through the painful manual process
of setting up a ZFS Root, Tim Foster has put together a script. It is
available on his
Casper == Casper Dik [EMAIL PROTECTED] writes:
Casper How can I configure it to reject non-member mail?
Go to the Mailman list admin page. The control is under Privacy
options -> Sender filters (Action to take for postings from
non-members...).
Never noticed these options; thanks.
no, luupgrade does not yet support zfs root. That
support will come with the rest of the zfs install
support.
Lori
On 15/05/06, Tabriz Leman [EMAIL PROTECTED] wrote:
For those who haven't already gone through the painful manual process of
setting up a ZFS Root, Tim Foster has put together a script. It is
available on his blog (http://blogs.sun.com/roller/page/timf/20060425).
I haven't tried it out, but am
It appears that if you follow this sequence, importing a pool becomes
impossible:
1) offline a disk
2) replace the same disk
3) export the pool
It would seem a normal course of events to 1) offline a failing disk, 2) wait
for FedEx to get a new drive to you 3) replace the drive, and go along
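The reported sequence, written out as commands (the pool and device names are assumed for illustration; don't run this verbatim):

```shell
# Hypothetical reproduction of the report: pool "tank", disks c1t2d0/c1t3d0
zpool offline tank c1t2d0          # 1) offline the failing disk
zpool replace tank c1t2d0 c1t3d0   # 2) replace it with the new drive
zpool export tank                  # 3) export the pool
zpool import tank                  # import then fails on affected builds
```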
Yes, this is a known issue we recently discovered ourselves (CR#6425111).
I'm looking into it now. Hopefully it'll be fixed in build 41, but
definitely by build 42.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
Ok - great. Thanks Eric.
On another, semi-related note... it would be convenient to be able to display
the pool ID number using zpool. I'm imagining some kind of 'zpool info pool'
command with one or two levels of verbosity. Could be useful when planning
export/import operations for pools, or