Hi, Mike,
It's like 6452872; it needs enough space for 'zfs promote'.
- Regards,
Mike Gerdts wrote:
I needed to free up some space to be able to create and populate a new
upgrade. I was caught off guard by the amount of free space required
by zfs promote.
bash-3.2# uname -a
SunOS indy2
Likewise. Just plain doesn't work.
Not required though, since the command-line is okay and way powerful ;)
And there are some more interesting challenges to work on, so I haven't pushed
this problem any further yet.
Richard Elling wrote:
Tobias Exner wrote:
Hi John,
I've done some tests with a Sun X4500 with ZFS and MAID, using the
powerd of Solaris 10 to power down the disks which hadn't been accessed for
a configured time. It's working fine...
The only thing I ran into was the problem that it took
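For reference, a rough sketch of how that spin-down is configured in
/etc/power.conf (the device path below is only an example; use the real
/devices path of each disk):

  autopm                enable
  device-thresholds     /pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@0,0   30m

and then run pmconfig so powerd re-reads the file.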
I've been reading, with great (personal/professional) interest, about
Sun getting very serious about SSD-equipping servers as a standard
feature in the 2nd half of this year. Yeah! Excellent news - and
it's nice to see Sun lead, rather than trail, the market! Those of us
who are ZFS zealots,
Hi Al,
Sorry, but "leading the market" is not quite right at this point.
www.superssd.com has had the answer to all those questions about SSD
reliability and speed for many years.
But I'm with you. I'm looking forward to the coming SSD products from Sun.
BTW: it seems to me that this thread
On Wed, Jun 11, 2008 at 3:59 AM, Tobias Exner [EMAIL PROTECTED] wrote:
Hi Al,
Sorry, but "leading the market" is not quite right at this point.
www.superssd.com has had the answer to all those questions about SSD
reliability and speed for many years.
But I'm with you. I'm looking forward to the coming
On Jun 11, 2008, at 1:16 AM, Al Hopper wrote:
But... if you look
broadly at the current SSD product offerings, you see: a) lower than
expected performance - particularly in regard to write IOPS (I/O Ops
per Second)
True. Flash is quite asymmetric in its performance characteristics.
That said,
The reliability of flash increases a lot if "wear leveling" is
implemented and there's the capability to build a RAID over a couple of
flash modules (maybe automatically by the controller).
And if there are RAM modules as a cache in front of the flash, most of the
problems will be solved
Tobias Exner wrote:
The reliability of flash increases a lot if wear leveling is
implemented and there's the capability to build a RAID over a couple of
flash modules (maybe automatically by the controller).
And if there are RAM modules as a cache in front of the flash, most of the
problems
On Sat, 7 Jun 2008, Mattias Pantzare wrote:
If I need to count usage I can use du. But if you can implement space
usage info on a per-uid basis you are not far from quota per uid...
That sounds like quite a challenge. UIDs are just numbers and new
ones can appear at any time.
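For what it's worth, counting per-uid usage today means walking the whole
filesystem, which is exactly why native per-uid accounting would be nice.
A rough sketch (uid 1001 and the path are made up, and hard links get
counted twice):

  # find /export/home -xdev -type f -user 1001 | xargs du -k | \
        awk '{s+=$1} END {print s " KB used by uid 1001"}'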
BTW: it seems to me that this thread is a little bit OT.
I don't think it's OT - because SSDs make perfect sense as ZFS log
and/or cache devices. If I did not make that clear in my OP then I
failed to communicate clearly. In both these roles (log/cache)
reliability is of the utmost
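To make the log/cache roles concrete, both are added with zpool; a rough
sketch (pool and device names are invented):

  # zpool add tank log c2t0d0       # SSD as a separate intent log (slog)
  # zpool add tank cache c2t1d0     # SSD as an L2ARC read cache

Cache devices need a reasonably recent pool version; 'zpool upgrade -v'
lists what your build supports.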
Hi
after updating to snv_90 (several retries before I patched pkg) I was left with
the following:
NAME    USED   AVAIL  REFER  MOUNTPOINT
rpool   9.87G  24.6G    62K  /rpool
On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
I'm running build 91 with ZFS boot. It seems that ZFS will not allow
me to add an additional partition to the current root/boot pool
because it is a bootable dataset. Is this a known issue that will be
fixed or a
I'm not even trying to stripe it across multiple disks, I just want to add
another partition (from the same physical disk) to the root pool. Perhaps that
is a distinction without a difference, but my goal is to grow my root pool, not
stripe it across disks or enable raid features (for now).
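For the record, the operation that gets refused is the plain add - a sketch
(device names are only examples):

  # zpool add rpool c0t0d0s4

which fails with the "bootable dataset" complaint quoted above. Attaching the
slice as a mirror ('zpool attach rpool c0t0d0s0 c0t0d0s4') is accepted, but
that mirrors the pool rather than growing it.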
On Wed, Jun 11, 2008 at 12:58 AM, Robin Guo [EMAIL PROTECTED] wrote:
Hi, Mike,
It's like 6452872; it needs enough space for 'zfs promote'.
Not really - in 6452872 a file system is at its quota before the
promote is issued. I expect that a promote may cause several KB of
metadata changes that
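For context, a rough sketch of the clone/promote dance (the dataset names
here are made up):

  # zfs snapshot rpool/ROOT/be1@pre-upgrade
  # zfs clone rpool/ROOT/be1@pre-upgrade rpool/ROOT/be2
  # zfs promote rpool/ROOT/be2

After the promote, the snapshots the clone depends on are re-parented to (and
charged against) the promoted dataset, which is presumably where the
surprising free-space requirement comes from.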
Hi all, I'm new to the list and I thought I'd start out on the right foot. ZFS
is great, but I have a couple questions
I have a Try-n-buy x4500 with one large zfs pool with 40 1TB drives in it. The
pool is named backup.
Of this pool, I have a number of volumes.
backup/clients
Wyllys Ingersoll wrote:
I'm not even trying to stripe it across multiple disks, I just want to add
another partition (from the same physical disk) to the root pool. Perhaps
that is a distinction without a difference, but my goal is to grow my root
pool, not stripe it across disks or enable
I'm not even trying to stripe it across multiple disks, I just want to add
another partition (from the same physical disk) to the root pool. Perhaps
that is a distinction without a difference, but my goal is to grow my root
pool, not stripe it across disks or enable raid features (for now).
Yeah. The command line works fine. I thought it was a bit curious that there
was an issue with the HTTP interface. It's low priority, I guess, because it
doesn't really impact the functionality.
Thanks for the responses.
If you're worried about the bandwidth limitations of putting something like the
Supermicro card in a PCI slot, how about using an active riser card to convert
from PCI-E to PCI-X? One of these, or something similar:
http://www.tyan.com/product_accessories_spec.aspx?pid=26
on sale at
On Wed, Jun 11, 2008 at 10:18 AM, Lee [EMAIL PROTECTED] wrote:
If you're worried about the bandwidth limitations of putting something like
the Supermicro card in a PCI slot, how about using an active riser card to
convert from PCI-E to PCI-X? One of these, or something similar:
Richard L. Hamilton wrote:
Whatever mechanism can check at block allocation/deallocation time
to keep track of per-filesystem space (vs a filesystem quota, if there is one)
could surely also do something similar against per-uid/gid/sid quotas. I
suspect
a lot of existing functions and data
On Wed, 11 Jun 2008, Al Hopper wrote:
disk drives. But - based on personal observation - there is a lot of
hype surrounding SSD reliability. Obviously the *promise* of this
technology is higher performance and *reliability* with lower power
requirements due to no (mechanical) moving parts.
I don't think so, not all of them anyway. They also sell ones that have a
proprietary goldfinger, which obviously would not work.
The spec does not mention any specific restrictions, just lists the interface
types (but it is fairly brief), and you can certainly buy PCI to PCI-E generic
On Jun 11, 2008, at 11:35 AM, Bob Friesenhahn wrote:
On Wed, 11 Jun 2008, Al Hopper wrote:
disk drives. But - based on personal observation - there is a lot of
hype surrounding SSD reliability. Obviously the *promise* of this
technology is higher performance and *reliability* with lower
On Wed, 11 Jun 2008, Richard L. Hamilton wrote:
But if you already have the ZAP code, you ought to be able to do
quick lookups of arbitrary byte sequences, right? Just assume that
a value not stored is zero (or infinity, or uninitialized, as applicable),
and you have the same functionality
Hi all,
Every NAND-based SSD has some RAM. Consumer-grade products will have
smaller, non-battery-protected RAM, a smaller number of parallel working NAND
chips, and a slower CPU to distribute the load. Consumer products will also
have fewer spare cells.
Enterprise SSDs are
On Wed, Jun 11, 2008 at 8:21 AM, Tim [EMAIL PROTECTED] wrote:
Are those universal though? I was under the impression it had to be
supported by the motherboard, or you'd fry all components involved.
There are PCI/PCI-X to PCI-e bridge chips available (as well as PCI-e
to AGP) and they're part
On Wed, Jun 11, 2008 at 10:35 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Wed, 11 Jun 2008, Al Hopper wrote:
disk drives. But - based on personal observation - there is a lot of
hype surrounding SSD reliability. Obviously the *promise* of this
technology is higher performance and
On Wed, Jun 11, 2008 at 4:31 AM, Adam Leventhal [EMAIL PROTECTED] wrote:
On Jun 11, 2008, at 1:16 AM, Al Hopper wrote:
But... if you look
broadly at the current SSD product offerings, you see: a) lower than
expected performance - particularly in regard to write IOPS (I/O Ops
per Second)
Luckily, my system had a pair of identical 232GB disks. The 2nd wasn't yet
used, so by juggling mirrors (create 3 mirrors, detach the one to change,
etc.), I was able to reconfigure my disks more to my liking - all without a
single reboot or loss of data. I now have 2 pools - a 20GB root
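A rough sketch of that kind of juggling (the disk names are invented):

  # zpool attach rpool c1t0d0s0 c1t1d0s0    # add the 2nd disk to the mirror
  # zpool status rpool                      # wait for the resilver to finish
  # zpool detach rpool c1t0d0s0             # drop the disk to be repartitioned
  ... repartition, then attach it back and let it resilver again ...

Since ZFS only resilvers live data and the pool stays online throughout, no
reboot is needed.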
Your key problem is going to be:
Will Sun use SLC or MLC?
From what I have read, the trend now is towards MLC chips, which have a much
lower number of write cycles but are cheaper and offer more storage. So then
they end up layering ECC and wear-levelling on top to address this shortened
life-span. A
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote:
I'm not even trying to stripe it across multiple disks, I just want to add
another partition (from the same physical disk) to the root pool. Perhaps
that is a distinction without a difference, but my goal is to grow my root
This is one of those issues where the developers generally seem to think that
old-style quotas are legacy baggage, and that people running large
home-directory sort of servers with 10,000+ users are a minority that can
safely be ignored.
I can understand their thinking. However, it does
I had a similar configuration until my recent re-install to snv_91. Now I
have just 2 ZFS pools - one for root+boot (big enough to hold multiple BEs and
do LiveUpgrades) and another for the rest of my data.
-Wyllys
see: http://bugs.opensolaris.org/view_bug.do?bug_id=6700597
On Wed, Jun 11, 2008 at 01:51:17PM -0500, Al Hopper wrote:
I think that I'll (personally) avoid the initial rush-to-market
consumer-level products by vendors with no track record of high-tech
software development - let alone those who probably can't afford the
PhD-level talent it takes to get
A Darren Dunham wrote:
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
However, some apps will probably be very unhappy if I/O takes 60 seconds
to complete.
It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang on
Thanks, Matt. Are you interested in feedback on various questions regarding
how to display results? On list or off? Thanks.
So I decided to test out failure modes of ZFS root mirrors.
Installed on a V240 with nv90. Worked great.
Pulled out disk1, then replaced it and attached again, resilvered, all good.
Now I pull out disk0 to simulate failure there. OS up and running fine, but
lots of error messages about SYNC
Sounds correct to me. The disk isn't sync'd, so boot should fail. If
you pull disk0, or set disk1 as the primary boot device, what does it
do? You can't expect it to resilver before booting.
On 6/11/08, Vincent Fox [EMAIL PROTECTED] wrote:
So I decided to test out failure modes of ZFS root
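Regarding booting off the surviving half of the mirror on a V240: from the
OBP you can boot the second disk explicitly, but only if a ZFS boot block was
put on it; a sketch (the alias and device names are only examples):

  ok boot disk1
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

If the second side was added later with 'zpool attach', installboot usually
has to be run by hand.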
Glaser, David [EMAIL PROTECTED] writes:
Hi all, I'm new to the list and I thought I'd start out on the right
foot. ZFS is great, but I have a couple questions...
I have a Try-n-buy x4500 with one large zfs pool with 40 1TB drives in
it. The pool is named backup.
Of this pool, I have a
Ummm, could you back up a bit there?
What do you mean, "disk isn't sync'd so boot should fail"? I'm coming from UFS,
of course, where I'd expect to be able to fix a damaged boot drive as it drops
into a single-user root prompt.
I believe I did try 'boot disk1', but that failed, I think, due to prior
Vincent Fox wrote:
So I decided to test out failure modes of ZFS root mirrors.
Installed on a V240 with nv90. Worked great.
Pulled out disk1, then replaced it and attached again, resilvered, all good.
Now I pull out disk0 to simulate failure there. OS up and running fine, but
lots of
Vincent Fox wrote:
Ummm, could you back up a bit there?
What do you mean disk isn't sync'd so boot should fail? I'm coming from
UFS of course where I'd expect to be able to fix a damaged boot drive as it
drops into a single-user root prompt.
I believe I did try boot disk1 but that failed
Torrey McMahon wrote:
A Darren Dunham wrote:
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
However, some apps will probably be very unhappy if I/O takes 60 seconds
to complete.
It's certainly not uncommon for that to occur in an NFS environment.