From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Doug Breneman
Sent: Monday, April 06, 2009 10:47 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: File System - If you had everything, where would you put
it?
Hi Gary,
Dave Jones wrote:
Note that in z/VM 5.4, a DCSS can live above the 31 bit bar, but it is
still limited to the 2GB size.
The DCSS above 2G support that went into z/VM 5.4 not only allowed them
above the 2G line, but also allowed them to be 'concatenated' to appear to
a SLES guest to
On Saturday, 04/04/2009 at 11:01 EDT, "Gary M. Dennis"
wrote:
> Based on the response the assumption *seems* to be that data spaces are
> being considered for the shared data. That is not the case.
>
> I interpret the diag x'248' to mean that the source ALET can (also) be that
> of the host primary address space containing the data (<=a(31 bit)).
On Friday, 04/03/2009 at 07:54 EDT, Jeff Savit wrote:
> Okay, got it. Ultra-traditional thing to do is to use DCSS for data
visible
> to multiple virtual machines, and use IUCV or (antique) VMCF as a
> shoulder-tap so they can communicate with one another.
z/VM's GCS was built on precisely this
Based on the response the assumption *seems* to be that data spaces are
being considered for the shared data. That is not the case.
I interpret the diag x'248' to mean that the source ALET can (also) be that
of the host primary address space containing the data (<=a(31 bit)).
On 4/4/09 5:34 A
Hi, Gary.
Using a shared DCSS, along with some sort of signaling mechanism (IUCV?)
between the server and clients, should work. But be aware that the
maximum size of a DCSS is 2GB, which isn't all that large for file
systems these days.
Note that in z/VM 5.4, a DCSS can live above the 31 bit bar, but it is
still limited to the 2GB size.
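The DCSS-plus-signaling pattern described above (shared storage for the bytes, a separate channel for the "shoulder tap") is the classic shared-memory-plus-doorbell design. A minimal in-process Python sketch of the idea follows; `SharedSegment` and `SignalChannel` are invented stand-ins for illustration, not real CP interfaces:

```python
from collections import deque

SEGMENT_LIMIT = 2 * 1024 * 1024 * 1024  # the 2GB ceiling on a single DCSS

class SharedSegment:
    """Stand-in for a DCSS: one buffer mapped by every guest."""
    def __init__(self, size=4096):
        if size > SEGMENT_LIMIT:
            raise ValueError("a single DCSS cannot exceed 2GB")
        self.pages = bytearray(size)

class SignalChannel:
    """Stand-in for the IUCV 'shoulder tap': carries notifications, not data."""
    def __init__(self):
        self.queue = deque()
    def tap(self, offset, length):
        self.queue.append((offset, length))   # tell the peer where to look
    def wait(self):
        return self.queue.popleft()

# Server writes into the shared segment, then taps the client.
seg, chan = SharedSegment(), SignalChannel()
payload = b"block 42 ready"
seg.pages[0:len(payload)] = payload
chan.tap(0, len(payload))

# Client wakes on the tap and reads directly from shared storage;
# the data itself never travels over the signaling channel.
off, length = chan.wait()
assert bytes(seg.pages[off:off + length]) == b"block 42 ready"
```

The point of the split is that the expensive part (the data) moves zero times once it is in the shared segment; only a tiny notification crosses the channel.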
To set things 100% straight:
DAT ON users cannot create/access **VM** Data Spaces; they can create
their own Data Spaces, which, unlike VM Data Spaces, cannot be shared
with other virtual machines.
2009/4/3 Alan Altmark :
> Just in case you are trying to connect Diag 0x248 with "push data"
On Fri, 3 Apr 2009 18:32:10 -0500, Gary M. Dennis
wrote:
>As it is written. Guests pull from the server on read requests and servers
>pull from the guests on write requests.
>
>We seem to be missing an interrupt sequence don't we?
>
>Gary
Okay, got it. Ultra-traditional thing to do is to use
As it is written. Guests pull from the server on read requests and servers
pull from the guests on write requests.
We seem to be missing an interrupt sequence don't we?
Gary
On 4/3/09 6:15 PM, "Jeff Savit" wrote:
> Do you mean 'pull' or 'poll' on read, or 'push' on write? :-)
>
> In any case, Alan is right, and the lowest latency way for virtual
> machines to share data is with a DCSS.
Do you mean 'pull' or 'poll' on read, or 'push' on write? :-)
In any case, Alan is right, and the lowest latency way for virtual machines
to share data is with a DCSS.
cheers, Jeff
On Fri, 3 Apr 2009 16:47:58 -0500, Gary M. Dennis
wrote:
>Something along these lines
>
>Guests pull on read
>Servers pull on write
>Async only
Something along these lines
Guests pull on read
Servers pull on write
Async only
--. .- .-. -.--
Gary Dennis
Mantissa
0 ... living between the zeroes ... 0
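Gary's three-line scheme (guests pull on read, servers pull on write, async only) can be modeled in a few lines of Python. The `FileServer` and `Guest` classes here are hypothetical illustrations of the asymmetric-pull idea, not any real z/VM interface; in the real design, completions would arrive via an interrupt or IUCV tap rather than a direct call:

```python
class FileServer:
    """Sketch of the asymmetric scheme: the side that needs data pulls it."""
    def __init__(self):
        self.blocks = {}          # block number -> bytes held by the server
    def handle_read(self, guest, blkno):
        # Guests pull on read: server exposes the block, guest copies it in.
        guest.inbox[blkno] = self.blocks.get(blkno, b"\x00" * 512)
    def handle_write(self, guest, blkno):
        # Servers pull on write: guest exposes its dirty block, server copies.
        self.blocks[blkno] = guest.outbox.pop(blkno)

class Guest:
    def __init__(self):
        self.inbox, self.outbox = {}, {}

server, guest = FileServer(), Guest()
guest.outbox[7] = b"payload"
server.handle_write(guest, 7)      # server pulls the guest's dirty block
server.handle_read(guest, 7)       # guest pulls the block back on read
assert guest.inbox[7] == b"payload"
```

The appeal of pull-only in both directions is that each side copies out of memory the other side has already published, which is exactly what a shared DCSS makes cheap; the missing piece, as noted below, is the interrupt that tells a side when to pull.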
On 4/3/09 4:26 PM, "Alan Altmark" wrote:
> On Friday, 04/03/2009 at 03:38 EDT, "Gary M. Dennis"
> wrote:
>> What I was trying to determine was whether there was a way to use ZFS on
>> OpenSolaris System z as a high-speed space management vehicle while
>> bypassing conventional transport layers.
On Friday, 04/03/2009 at 03:38 EDT, "Gary M. Dennis"
wrote:
> What I was trying to determine was whether there was a way to use ZFS on
> OpenSolaris System z as a high-speed space management vehicle while
> bypassing conventional transport layers. For example, let's say there
> existed a way to push data to the I/O appliance cross-memory (guest to
> ZFS, ZFS to guest)
Either would work, but a shared address space would seem to be the most
lightweight approach.
On 4/3/09 3:57 PM, "Dave Jones" wrote:
> Would you consider a mechanism like IUCV or shared data address spaces
> as acceptable methods to send data back and forth between z/VOS guests
> and a file server of some sort?
Would you consider a mechanism like IUCV or shared data address spaces
as acceptable methods to send data back and forth between z/VOS guests
and a file server of some sort?
Gary M. Dennis wrote:
Jeff,
What I was trying to determine was whether there was a way to use ZFS on
OpenSolaris System z as a high-speed space management vehicle while
bypassing conventional transport layers. For example, let's say there
existed a way to push data to the I/O appliance cross-memory (guest to
ZFS, ZFS to guest)
"Gary M. Dennis" wrote:
> 1. How does OpenSolaris zfs utilize the storage tier on System z? Are the
> disks allocated to zfs pool(s) simply reserved CMS formatted disks?
Exactly that: minidisks used by OpenSolaris on z port ("sirius" for short)
are CMS-formatted and RESERVE-d minidisks. That wa
On Thu, 2 Apr 2009 09:57:40 -0400, David Boyes
wrote:
>Answered off-list to preserve list purity.
>
>
>On 4/2/09 12:25 AM, "Gary M. Dennis" wrote:
>
>[snip]
>
Can I have a copy? See email address below. (The one I am signed up with
is at my home.)
I don't believe in 'list purity'. I'd pref
I've also seen hardware appliances that will do thin provisioning and
de-duplication for a SAN environment. Basically the idea is you hide
all your real storage behind the appliance and it will present virtual
LUNs to the operating systems and only store a single copy of any
duplicated data in the
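The appliance idea described above, hiding real storage behind virtual LUNs and keeping a single copy of any duplicated block, amounts to a content-addressed block store. A small Python sketch of that idea, with `DedupStore` as a hypothetical illustration rather than any particular product's design:

```python
import hashlib

class DedupStore:
    """Virtual LUNs backed by one physical copy per unique block,
    located by content hash."""
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.chunks = {}      # sha256 digest -> the single stored payload
        self.luns = {}        # lun name -> {block number -> digest}
    def write(self, lun, blkno, data):
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)        # store each payload once
        self.luns.setdefault(lun, {})[blkno] = digest
    def read(self, lun, blkno):
        return self.chunks[self.luns[lun][blkno]]
    def physical_blocks(self):
        return len(self.chunks)

store = DedupStore()
store.write("LUN0", 0, b"same bytes")
store.write("LUN1", 9, b"same bytes")   # duplicate block on another LUN
store.write("LUN0", 1, b"other bytes")
assert store.read("LUN1", 9) == b"same bytes"
assert store.physical_blocks() == 2     # three writes, two unique payloads
```

Real appliances also have to handle hash collisions, reference counting for overwrites, and garbage collection, all of which this sketch omits.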
Answered off-list to preserve list purity.
On 4/2/09 12:25 AM, "Gary M. Dennis" wrote:
[snip]
David,
Thanks.
1. How does OpenSolaris zfs utilize the storage tier on System z? Are the
disks allocated to zfs pool(s) simply reserved CMS formatted disks?
2. How does the Async I/O in ZFS work? Would the guest that requested the
I/O be signaled with an ext interrupt by the I/O appliance?
On Wednesday, 04/01/2009 at 05:33 EDT, "Gary M. Dennis"
wrote:
> Is there a VM I/O management system available which will:
>
> 1. Support space allocation requests for guests on a sparse basis? The
> file server needs to make the guest believe it actually has the entire
> allocation while only tying up space in the pool the guest actually used.
Some of the STK/Sun disks have hardware features to do this. OpenAFS or Lustre
could do this if you allow Linux guests to provide the services, or ZFS if you
allow OpenSolaris guests. It'd be very easy to package up an appliance server
image to do what you need done with either one (Linux or OpenSolaris).
Is there a VM I/O management system available which will:
1. Support space allocation requests for guests on a sparse basis? The file
server needs to make the guest believe it actually has the entire allocation
while only tying up space in the pool the guest actually used.
2. Support Async I/O re
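Requirement 1 above is thin provisioning from the guest's point of view: advertise the full allocation, charge the pool only for blocks actually written. A minimal Python sketch of that contract; `SparseMinidisk` is an invented name for illustration, not a CP or CMS facility:

```python
class SparseMinidisk:
    """The guest sees the full advertised allocation, but the storage pool
    is charged only for blocks that have actually been written."""
    def __init__(self, advertised_blocks, block_size=4096):
        self.advertised_blocks = advertised_blocks
        self.block_size = block_size
        self.written = {}                 # block number -> payload
    def size_seen_by_guest(self):
        return self.advertised_blocks * self.block_size
    def space_used_in_pool(self):
        return len(self.written) * self.block_size
    def write(self, blkno, data):
        if not 0 <= blkno < self.advertised_blocks:
            raise IndexError("write past advertised allocation")
        self.written[blkno] = data
    def read(self, blkno):
        # Unwritten blocks read back as zeros, as on a sparse file.
        return self.written.get(blkno, b"\x00" * self.block_size)

disk = SparseMinidisk(advertised_blocks=1_000_000)   # guest believes ~4 GB
disk.write(123, b"x" * 4096)
assert disk.size_seen_by_guest() == 4_096_000_000
assert disk.space_used_in_pool() == 4096
```

The hard part in practice is not this bookkeeping but what happens when the pool runs out while guests still believe they have room, which is why thin-provisioned pools need monitoring and an out-of-space policy.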