Hi Chris,
Great to have such a long list of suggestions!
Though I think the information is a bit on the short side, making it hard for
a mentor/mentee to pick up.
My suggestion is to create a short description for each, like the ones here:
http://live.gnome.org/SummerOfCode2009/Ideas
Title: one-liner
o Benefits: one-liners
o
Caution: I built a system like this and spent several weeks trying to
get iSCSI sharing working under Solaris 10 u6 and older. It would work
fine for the first few hours, but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iSCSI initiator
Thank you all for the replies. I will apply the recommendations and see if there is
any change in performance.
I had seen the documents in the link, but I needed to make sure that we cannot
do anything to improve the performance before going to the storage guys.
Thanks again,
Vahid.
On Tue, Mar 3, 2009 at
On 4-Mar-09, at 2:07 AM, Stephen Nelson-Smith wrote:
Hi,
I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre. I
based this on an X2200 + J4400, Solaris 10 + rsync.
This was enthusiastically received, to the
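For what it's worth, the rsync side of such an archive is usually nothing more
exotic than a periodic push into a dataset on the ZFS box; a minimal sketch,
with host and path names that are purely illustrative:
# rsync -aH --delete /data/ archivehost:/tank/archive/data/
-a preserves ownership and timestamps, -H keeps hard links intact, and
--delete keeps the archive an exact mirror of the source. Snapshots on the
receiving dataset then give you the history.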
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Tue, 3 Mar 2009, Miles Nordin wrote:
I would like 64-bit hardware with ECC, 8GB RAM, and a good Ethernet
chip that can run both Linux and Solaris. I do not plan to use the
onboard SATA. So far I'm having nasty problems with an nForce
On Wed, 4 Mar 2009, Stephen Nelson-Smith wrote:
The interesting alternative is to set up Comstar on SXCE, create
zpools and volumes, and make these available either over a fibre
infrastructure, or iSCSI. I'm quite excited by this as a solution,
but I'm not sure if it's really production ready.
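For anyone who hasn't tried it, a minimal sketch of that COMSTAR route looks
something like the following, assuming the stmf/iSCSI target packages are
installed and that the pool name tank and the size are purely illustrative:
# zfs create -V 100G tank/lun0
# sbdadm create-lu /dev/zvol/rdsk/tank/lun0
# stmfadm add-view <GUID printed by sbdadm>
# itadm create-target
The zvol is exported as a SCSI logical unit that initiators can then reach
over iSCSI (or over FC with the appropriate target-mode HBA driver).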
On Tue, Mar 03, 2009 at 11:35:40PM +0200, C. Bergström wrote:
Here's more or less what I've collected...
[..]
10) Did I miss something..
I suppose my RFE for two-level ZFS should be included, unless nobody
intends to attach a ZFS file server to a SAN with ZFS on application
servers.
--
On Wed, Mar 04, 2009 at 10:59:04AM +1100, Julius Roberts wrote:
However I would expect that if you could present 8 RAID-0 LUNs to
the host then that should be at least a decent config to start
using for ZFS.
I can confirm that we are doing that here (with 3 drives) and it's
been fine
On Wed, Mar 4, 2009 at 5:29 AM, Julius Roberts hooliowobb...@gmail.com wrote:
I would like to hear if anyone is using ZFS with this card and how you set
it up, and what, if any, issues you've had with that set up.
However I would expect that if you could present 8 RAID-0 LUNs to
the host then
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU
object set type which is used on the pNFS data server to store pNFS
stripe DMU objects. A pNFS dataset gets created with the zfs create
command
On Wed, Mar 4, 2009 at 9:52 AM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Tue, Mar 03, 2009 at 11:35:40PM +0200, C. Bergström wrote:
Here's more or less what I've collected...
[..]
10) Did I miss something..
I suppose my RFE for two-level ZFS should be included, unless nobody
Vahid Moghaddasi wrote:
Thank you all for the replies. I will apply the recommendations and see if there
is any change in performance.
I had seen the documents in the link, but I needed to make sure that we
cannot do anything to improve the performance before going to the storage
guys.
You may not have
nw == Nicolas Williams nicolas.willi...@sun.com writes:
nw IIRC Jeff Bonwick has said on this list that uberblock
nw rollback on import is now his higher priority. So working on
nw (8) would be duplication of effort.
well...if your recovery tool worked by using an older
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
respond.
I thought it was kind of based on a mistaken understanding. It included
this strangeness of the upper ZFS
I don't know if anyone has noticed that the topic is Google Summer of
Code. There is only so much that a starving college student can
accomplish from a dead-start in 1-1/2 months. The ZFS equivalent of
eliminating world hunger is not among the tasks which may be
reasonably accomplished, yet
Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is Google Summer of
Code. There is only so much that a starving college student can
accomplish from a dead-start in 1-1/2 months. The ZFS equivalent of
eliminating world hunger is not among the tasks which may be
Jacob Ritorto wrote:
Caution: I built a system like this and spent several weeks trying to
get iSCSI sharing working under Solaris 10 u6 and older. It would work
fine for the first few hours, but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
On 4-Mar-09, at 1:28 PM, Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is Google Summer
of Code. There is only so much that a starving college student
can accomplish from a dead-start in 1-1/2 months. The ZFS
equivalent of eliminating world hunger is not among
Right on the money there, Bob. Without knowing more detail about the
client's workload, it would be hard to
advise either way. I would imagine, based purely on the small amount of
info around the client's apps and
workload, that NFS would most likely be the appropriate solution on top
of ZFS. You
On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
that link suggests that this is a problem with a dirty export:
Yes, a loss of power should mean there was no clean export.
On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
maybe try importing on system A again,
C. Bergström wrote:
10) Did I miss something..
T10 DIF support in zvols
T10 UNMAP/thin provisioning support in zvols
proportional scheduling for storage performance
slog and L2ARC on the same SSD
These are probably difficult but hopefully not world hunger level.
Wes Felter -
Not too sure what this option needs as a value, but the man page suggests
that the keyword current should work.
When I try a dry run with -n I see this :
# zpool create -n -o autoreplace=on -o version=current -m legacy \
fibre00 \
mirror c8t2004CFAC0E97d0 c8t202037F859F1d0 \
mirror
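For what it's worth, you can see what current would resolve to before creating
anything; these are just the stock query commands, nothing specific to this setup:
# zpool upgrade -v
# zpool get version fibre00
The first lists every on-disk version the running bits support, and the second
reports the version of an existing pool (once fibre00 is actually created).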
Wes Felter wrote:
proportional scheduling for storage performance
slog and L2ARC on the same SSD
The current scheduler is rather simple; there might be room for
improvements -- but that may be a rather extended research topic.
But I'm curious as to why you would want to put both the slog and
On Wed, Mar 04, 2009 at 02:16:53PM -0600, Wes Felter wrote:
T10 UNMAP/thin provisioning support in zvols
That's probably simple enough, and sufficiently valuable too.
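Worth noting that the thin-provisioning half already exists in the form of
sparse volumes; the missing piece is reclaiming space when the initiator
UNMAPs it. A quick illustration, with made-up names:
# zfs create -s -V 100G tank/thinvol
The -s skips the reservation, so the volume only consumes pool space as
blocks are actually written.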
Richard Elling wrote:
Wes Felter wrote:
proportional scheduling for storage performance
slog and L2ARC on the same SSD
The current scheduler is rather simple; there might be room for
improvements -- but that may be a rather extended research topic.
Yes. For GSoC it would probably be wise
Hi Rich,
On Mar 4, 2009, at 9:30 AM, Richard Morris - Sun Microsystems -
Burlington United States wrote:
Quick Background:
The pNFS project (http://opensolaris.org/os/project/nfsv41/) is
adding a new DMU object set type which is used on the pNFS data
server to store pNFS stripe DMU
On Wed, Mar 04, 2009 at 02:13:51PM -0700, Lisa Week wrote:
(pnfs-17-21:/home/lisagab):6 % zfs list -o name,type,used,avail,refer,mountpoint
NAME   TYPE        USED   AVAIL  REFER  MOUNTPOINT
rpool  filesystem  30.0G  37.0G  32.5K  /rpool
On Mar 4, 2009, at 3:10 PM, Nicolas Williams wrote:
On Wed, Mar 04, 2009 at 02:13:51PM -0700, Lisa Week wrote:
(pnfs-17-21:/home/lisagab):6 % zfs list -o name,type,used,avail,refer,mountpoint
NAME   TYPE        USED   AVAIL  REFER  MOUNTPOINT
rpool
2009/3/4 Tim t...@tcsac.net:
I know plenty of home users would like the ability to add a single disk to
a raid-z vdev in order to grow a disk at a time.
+1 for that.
--
Kind regards, Jules
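For comparison, what you can do today is widen mirrors one disk at a time or
bolt on a whole new vdev; neither actually grows a raidz. A rough sketch with
placeholder device names:
# zpool attach tank c1t0d0 c1t1d0
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
The first turns a single disk into a mirror (or widens an existing mirror);
the second adds a separate raidz vdev alongside the old one. Adding one disk
to an existing raidz vdev is precisely the gap.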
2009/3/5 Brian Hechinger wo...@4amlunch.net:
Even though it probably really doesn't matter since you only have a single
disk in each raid0, what did you set the PERC's stripe size to? (I can't
think of what terminology is actually used for it and don't have a PERC in
front of me to check on
On Wed, Mar 04, 2009 at 03:49:54PM -0700, Lisa Week wrote:
My (humble) opinion is: Even though it is hard to tell if a dataset is
a filesystem or a zvol now, it doesn't mean we can't make it better...
Agreed.
On Wed, March 4, 2009 15:13, Lisa Week wrote:
Does anyone have insight into any known
problems it may cause to add the type property to the default output
or why it was left out in the first place?
The obvious problem that always comes up in this situation is existing
scripts that parse the
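The defensive pattern, at least for new scripts, is to never depend on the
default columns and to ask for exactly the fields you want in parseable form;
a small sketch reusing the columns from the earlier example:
# zfs list -H -o name,type,used,avail,refer,mountpoint
-H suppresses the header and emits tab-separated fields, so changing the
interactive default output would not affect anything written this way.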
On Wed, March 4, 2009 16:53, Julius Roberts wrote:
2009/3/4 Tim t...@tcsac.net:
I know plenty of home users would like the ability to add a single
disk to
a raid-z vdev in order to grow a disk at a time.
+1 for that.
In theory I'd like that a lot.
I'm now committed to two two-disk mirrors
Hello George,
Tuesday, March 3, 2009, 3:01:43 PM, you wrote:
GW Matthew Ahrens wrote:
pool-shrinking (and an option to shrink disk A when I want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
automated installgrub when mirroring an rpool
GW I'm working on
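Until it is automated, the manual dance when mirroring an rpool on x86 is
roughly the following (slice names are placeholders):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
Skip the installgrub step and the second half of the mirror has no boot
blocks, which is exactly the trap the RFE is meant to close.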
Hello Thomas,
Saturday, February 28, 2009, 10:14:20 PM, you wrote:
TW I would really add: make the insane zfs destroy -r poolname as
TW harmless as zpool destroy poolname (recoverable)
TW zfs destroy -r poolname/filesystem
TW this should behave like this:
TW o snapshot the filesystem
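In the meantime the closest manual approximation is to snapshot and send the
tree somewhere else before destroying it, since destroy -r takes any local
snapshots with it; a sketch with illustrative names:
# zfs snapshot -r tank/fs@last
# zfs send -R tank/fs@last > /backup/fs.zstream
# zfs destroy -r tank/fs
The RFE would effectively fold that safety net into the destroy itself.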
On Wed, Mar 4, 2009 at 12:18 PM, Richard Elling richard.ell...@gmail.comwrote:
Vahid Moghaddasi wrote:
Thank you all for the replies. I will apply the recommendations and see if there is
any change in performance.
I had seen the documents in the link, but I needed to make sure that we
cannot do
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
respond.
I appreciate that.
I thought it was
On 4-Mar-09, at 7:35 PM, Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
respond.
I appreciate that.
I
On Wed, 2009-03-04 at 12:49 -0800, Richard Elling wrote:
But I'm curious as to why you would want to put both the slog and
L2ARC on the same SSD?
Reducing part count in a small system.
For instance: adding L2ARC+slog to a laptop. I might only have one slot
free to allocate to ssd.
IMHO the
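Today that ends up looking like one SSD split into two slices, added
separately (device and slice names are placeholders):
# zpool add tank log c2t0d0s0
# zpool add tank cache c2t0d0s1
It works, but the synchronous log writes and the cache fills then contend for
the same device, which is part of why first-class support for sharing an SSD
keeps coming up.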
Hm - a ZilArc??
Or, slarc?
Or L2ArZi
I tried something sort of similar to this when fooling around, adding
different *slices* for ZIL / L2ARC, but as I'm too poor to afford good
SSDs my result was poor at best... ;)
Having ZFS manage some 'arbitrary fast stuff' and sorting out its own
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
It's simply a consequence of ZFS's end-to-end
Gary Mills wrote:
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
It's simply a consequence
additional comment below...
Kyle Kakligian wrote:
On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
that link suggests that this is a problem with a dirty export:
Yes, a loss of power should mean there was no clean export.
On Mon, Mar 2, 2009 at 8:30 AM, Blake