Hi All,
I'm interested in exploring the logic that selects the vdev for a newly
created file. I've spent some time in metaslab.c, but that appears to be
weighting metaslab selection within a vdev, not really the logic I'm
looking for. I expect vdev selection is a simple algorithm that rotates
through vdevs with some additional logic to handle the case where the
vdev is full.
Lori,
I have 4 disks on a SAN that I have created one pool on (IBM-ES7). When I
unmount any of these mount points (zfs umount /downloads) and then go to
/downloads, I can still see the data. Then when I do a zfs mount -a, it says:
cannot mount '/downloads': directory is not empty.
I can get it
Hello Louwtjie,
Tuesday, July 17, 2007, 10:20:03 AM, you wrote:
LB Hi
LB What is the general feeling for production readiness when it comes to:
LB ZFS
LB Oracle 10G R2
LB 6140-type storage
LB OLTP workloads
LB 1-3TB sizes
LB Running UFS with directio is stable and fast, and one can sleep at night.
When all mounts are unmounted and I do a du -sh /* | more, I still see the
space allocated. I would think this is filling up my root filesystem, based
on this output:
3.7G /downloads
Your thoughts?
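My guess is that files written to /downloads while the dataset was unmounted
landed in the underlying directory on the root filesystem, and ZFS then
refuses to mount over the non-empty directory. If so, something like this
should confirm and clean it up (a sketch; the rm is destructive, so
double-check what is there first):

  zfs umount /downloads       # make sure the dataset is not mounted
  ls -la /downloads           # anything still visible lives on the root fs
  rm -rf /downloads/*         # remove the stale copy from the root fs
  zfs mount -a                # the mount should now succeed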
Hi Duff,
... I expect vdev selection is a simple algorithm
that rotates through vdevs with some additional logic to handle the case
where the vdev is full.
The vdev selection policy is in metaslab_alloc_dva(). You are right that
it is a simple algorithm that rotates through the vdevs. Apart
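You can watch the rotation from user land if you want to convince yourself
(a hypothetical pool named tank with two top-level vdevs):

  zpool iostat -v tank 1 &                       # per-vdev write activity
  dd if=/dev/zero of=/tank/bigfile bs=128k count=10000
  # the write column ticks up on each top-level vdev in turn; as a vdev
  # fills, the allocator biases new writes toward the emptier vdevs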
There is an open issue/bug with ZFS and EMC PowerPath for Solaris 10 in the
x86/x64 space. My customer encountered the issue back in April 2007 and is
awaiting a fix. We're expecting an update (hopefully a patch) by the end of
July 2007.
As I recall, it did involve CX arrays and trespass
We have a Sun v890, and I'm interested in converting an existing ZFS zpool
from c#t#d# device names to MPxIO.
% zpool status
  pool: data
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Jul 15 10:58:33 2007
config:

        NAME        STATE     READ WRITE CKSUM
        data
Hello zfs-discuss,
Not so long ago I tried to put ZFS on the USB drive in my office so I
can put some data on it, take it home, and copy from it to my home archive
server. The problem is that at home I have S10U3, while in the office I run
snv_66. Well, if I create that pool on snv I won't be able to import it on
S10U3.
Robert -
This is covered by PSARC 2007/342 and is currently in development.
http://www.opensolaris.org/os/community/arc/caselog/2007/342/
- Eric
On Wed, Jul 18, 2007 at 07:16:35PM +0100, Robert Milkowski wrote:
Hello zfs-discuss,
Not so long ago I tried to put zfs on my usb drive in my
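In the meantime you can at least see the version gap from both machines:

  zpool upgrade -v     # lists the on-disk versions this build supports
  zpool upgrade        # shows the current version of each existing pool

As I understand the case, the fix will let you create a pool at an older
on-disk version so that an S10U3 box can still import it.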
Hello Stuart,
Looks like the crash dump went OK.
Check the logs after the system boots up again for a warning that there was
not enough space in /var/crash/x4500gc to save the crash dump. When using
ZFS on file servers, crash dumps will usually be almost the size of the
server's memory...
Eventually
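A quick way to check, using the paths above (the dataset name below is just
an example):

  dumpadm                       # shows the dump device and savecore directory
  df -h /var/crash/x4500gc      # is there close to RAM-size space free?
  # if /var/crash is a ZFS dataset you can guarantee it room, sized a
  # bit over the server's memory, e.g.:
  zfs set reservation=32g tank/var-crash    # hypothetical dataset name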
Hello David,
Saturday, July 14, 2007, 2:01:23 AM, you wrote:
DS Well, the zfs receive process finally died, and now my zfs list works just
fine.
DS If there is a better way to capture what is going on, please let
DS me know and I can duplicate the hang.
I can observe similar things from
Hello Joel,
Tuesday, June 26, 2007, 10:13:54 PM, you wrote:
JM Hi folks,
JM So the expansion unit for the 2500 series is the 2501.
JM The back-end drive channels are SAS.
JM Currently it is not supported to connect a 2501 directly to a SAS HBA.
But does it work? Has anyone actually tested it?
Hello Eric,
Wednesday, July 18, 2007, 7:18:44 PM, you wrote:
ES Robert -
ES This is covered by PSARC 2007/342 and is currently in development.
ES http://www.opensolaris.org/os/community/arc/caselog/2007/342/
Great! Thanks for the info.
--
Best regards,
Robert
Hi Robert,
It should work. We have not had the time or resources to test it (we are
busy qualifying the 2530 (SAS array) with an upcoming MPxIO enabled MPT
driver and SATA drive support).
I do not know if MPxIO will claim raw drives or not; typically there
are vendor-specific modules that
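For what it's worth, the per-array knowledge lives in
/kernel/drv/scsi_vhci.conf; for a third-party symmetric array the usual
approach is an entry along these lines (illustrative vendor/product string;
check the array documentation for the real one):

  device-type-scsi-options-list =
      "SUN     StorEdge", "symmetric-option";
  symmetric-option = 0x1000000;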
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
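The short version of the interface, for the curious (pool and device names
here are just examples):

  zpool create tank c1t0d0 c1t1d0 log c2t0d0   # create a pool with a slog
  zpool add tank log c3t0d0                    # or add one to an existing pool
  zpool status tank                            # log devices show up under "logs"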
Neil.
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
So, how did you get a PCI Micro Memory card?
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
So, how
Hello
I have a fresh Solaris 10/6 install. My disk has 20 GB, but I used only
10 GB for the base Solaris installation and the home directory.
I have 10 GB free without any partition.
I would like to use the free space to store my zones on a ZFS filesystem,
for example:
/zones
/zones/zone1
/zones/zone2
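A minimal sketch of one way to set that up, assuming the free 10 GB is an
unused slice such as c0d0s7 (adjust the device to your layout):

  zpool create zones c0d0s7     # pool mounts at /zones by default
  zfs create zones/zone1        # one filesystem per zone
  zfs create zones/zone2
  # then in zonecfg for each zone: set zonepath=/zones/zone1, etc.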
You can find these at:
http://www.umem.com/Umem_NVRAM_Cards.html
And the one Neil was using in particular:
http://www.umem.com/MM-5425CN.html
- Eric
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin
On Wed, Jul 18, 2007 at 01:00:22PM -0700, Eric Schrock wrote:
You can find these at:
http://www.umem.com/Umem_NVRAM_Cards.html
And the one Neil was using in particular:
http://www.umem.com/MM-5425CN.html
They only sell to OEMs. Our Sun VAR looked for one as well but they
cannot find
On Wed, Jul 18, 2007 at 03:06:07PM -0500, Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:00:22PM -0700, Eric Schrock wrote:
You can find these at:
http://www.umem.com/Umem_NVRAM_Cards.html
And the one Neil was using in particular:
http://www.umem.com/MM-5425CN.html
They only
DRM wrote:
Lori,
I don't have an answer for this. In your first description
of the problem, it sounded like this had something to do
with a zfs root file system (which is my area of work), but
it doesn't.
Maybe someone else can help you here. My suggestion
in the meantime is to show a
What are your thoughts or recommendations on having a zpool made up of
raidz groups of different sizes? Are there going to be performance issues?
For example:
  pool: testpool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ
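For what it's worth, a pool like that is easy to build for testing
(hypothetical devices, one 3-disk and one 5-disk raidz; -f may be needed
because the group widths differ):

  zpool create -f testpool1 \
      raidz c1t0d0 c1t1d0 c1t2d0 \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

Writes are striped across both groups, biased toward whichever has more free
space, so the smaller group contributes proportionally less bandwidth and the
groups fill at different rates.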
Nan Liu wrote:
We have a Sun v890, and I'm interested in converting an existing ZFS zpool
from c#t#d# device names to MPxIO.
Correct. I've done some testing with a v880, and ZFS seems to support MPxIO
with no issues.
After running # stmsboot -e, MPxIO will be enabled for fp on the v890, but
I'm not sure
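Roughly, the flow looks like this (device names will differ on your box):

  stmsboot -e          # enable MPxIO; it prompts for the reboot it needs
  # after the reboot the disks move from c#t#d# to long scsi_vhci names
  stmsboot -L          # list old-name to new-name mappings
  zpool status data    # ZFS finds the disks by their labels, so the pool
                       # should come up under the new names automatically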
Erm, yeah, sorry about that (previous stupid questions). I wrote it before
having my first cup of coffee... Thanks for the details, though. If you guys
have any updates, please, drop a link to new info in this thread (I'll do the
same if I find out anything more), as I have it on my watch list.
Hi folks,
One of the things I'm really hanging out for is the ability to evacuate
the data from a zpool device onto the other devices and then remove the
device, without mirroring it first, etc. The zpool would of course shrink
in size according to how much space you just took away.
Our
Mark Ashley wrote:
At the moment once a device is in a zpool, it's stuck there. That's a
problem. What sort of time frame are we looking at until it's possible
to remove LUNs from zpools?
It might be in OpenSolaris around the end of the (calendar) year. That's
just a rough guess though.
On 18-Jul-07, at 8:38 PM, Scott Lovenberg wrote:
Erm, yeah, sorry about that (previous stupid questions). I wrote
it before having my first cup of coffee... Thanks for the details,
though. If you guys have any updates, please, drop a link to new
info in this thread
I hate to be a
OK - here's some info for those of you just starting out with ZFS at the
coding/building level. I struggled for many days walking down this path:
install a specific snv_xx release, build the code of that snv_xx release
with nightly, and install the kernel only with the capital-I Install script.
While this all worked
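For anyone following the same path, the skeleton looks roughly like this
(assuming an ON source tree and an env file copied from usr/src/tools/env;
the real prerequisites are in the ON build docs):

  nightly ./opensolaris.sh      # full build of the snv_xx source, with logs
  bldenv ./opensolaris.sh       # drop into an interactive build environment
  # the capital-I Install script (usr/src/tools) then tars up just the
  # kernel so you can unpack and test it on a separate machine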
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status: