In your example you called newfs, which creates a new UFS file system.

I think that if you don't create a file system on it and instead use it as a
raw block device, it should just work.
This is what qemu does when running inside a KVM branded zone.
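
For a quick smoke test of that approach from inside the zone, something like
the following should do (untested sketch; the device path is the one from
your zonecfg session below, and note this overwrites data on the volume):

  # write a few MB straight to the raw device, then read it back
  dd if=/dev/urandom of=/dev/zvol/rdsk/zones/device bs=1024k count=4
  dd if=/dev/zvol/rdsk/zones/device of=/dev/null bs=1024k count=4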

Regards 

Jorge 

On 2018-02-27 14:05, Fred Liu wrote:

> Actually, we need to access the real hardware, not a file system. There is
> some database software that claims to have performance optimizations when
> running on real hardware. I notice that we can tune the ZFS record block
> size for a VM; maybe that is another idea. Has anyone tuned it?
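> 
> A minimal sketch of that tuning (hypothetical volume name; note that
> volblocksize can only be set when the zvol is created, e.g. 8k to match a
> database page size):
> 
>   zfs create -V 20G -o volblocksize=8k zones/dbvol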
> 
> Thanks! 
> 
> Fred 
> 
> On Tue, Feb 27, 2018 at 8:45 PM +0800, "Jorge Schrauwen" 
> <sjorge...@blackdot.be> wrote:
> 
>> Hi,
>> 
>> You probably want to set fs_allowed so it includes the filesystem you 
>> are trying to create on it.
>> 
>> (from man vmadm)
>> fs_allowed:
>> 
>>     This option allows you to specify filesystem types this zone is
>>     allowed to mount.  For example on a zone for building SmartOS you
>>     probably want to set this to: "ufs,pcfs,tmpfs".  To unset this
>>     property, set the value to the empty string.
>> 
>>     type: string (comma separated list of filesystem types)
>>     vmtype: OS
>>     listable: no
>>     create: yes
>>     update: yes (requires zone reboot to take effect)
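>> 
>> For example, to allow UFS on your zone (a sketch; the UUID is the one from
>> your zonecfg session, and per the above the zone needs a reboot for it to
>> take effect):
>> 
>>   vmadm update f58e8c87-eb04-ea48-bf23-9b7be32515b8 fs_allowed=ufs
>>   vmadm reboot f58e8c87-eb04-ea48-bf23-9b7be32515b8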
>> 
>> Regards
>> 
>> Jorge
>> 
>> On 2018-02-27 13:41, Fred Liu wrote:
>>> Hi,
>>> 
>>> Since I have no spare hardware (disk), I tried adding a zvol to an OS/LX
>>> zone, but it does not seem to work as documented:
>>> zonecfg:f58e8c87-eb04-ea48-bf23-9b7be32515b8:device> set
>>> match=/dev/zvol/rdsk/zones/device
>>> zonecfg:f58e8c87-eb04-ea48-bf23-9b7be32515b8:device> end
>>> zonecfg:f58e8c87-eb04-ea48-bf23-9b7be32515b8> verify
>>> zonecfg:f58e8c87-eb04-ea48-bf23-9b7be32515b8> exit
>>> 
>>> [root@pluto /zones/build]# zlogin f58e8c87-eb04-ea48-bf23-9b7be32515b8
>>> [Connected to zone 'f58e8c87-eb04-ea48-bf23-9b7be32515b8' pts/6]
>>> Last login: Tue Feb 27 20:25:02 on pts/6
>>> [SmartOS login banner: Instance (base-multiarch-lts 15.4.0),
>>> https://docs.joyent.com/images/smartos/base]
>>> 
>>> [root@f58e8c87-eb04-ea48-bf23-9b7be32515b8 ~]# ls -la
>>> /dev/zvol/rdsk/zones/device
>>> crw------- 1 root sys 90, 26 Feb 27 20:35 /dev/zvol/rdsk/zones/device
>>> [root@f58e8c87-eb04-ea48-bf23-9b7be32515b8 ~]# newfs
>>> /dev/zvol/rdsk/zones/device
>>> newfs: construct a new file system /dev/zvol/rdsk/zones/device: (y/n)? y
>>> can't check mount point; can't stat
>>> 
>>> Will the real hardware be different?
>>> 
>>> I also tried adding a zvol to a zone on Solaris 11.2, and it works well:
>>> device:
>>>     match not specified
>>>     storage.template: dev:/dev/zvol/dsk/%{global-rootzpool}/VARSHARE/zones/%{zonename}/disk%{id}
>>>     storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/kz01/disk0
>>>     id: 0
>>>     bootpri: 0
>>> device:
>>>     match not specified
>>>     storage: dev:/dev/zvol/dsk/tank/device
>>>     id: 2
>>>     bootpri: 2
>>> capped-memory:
>>>     physical: 2G
>>> keysource:
>>>     raw redacted
>>> zonecfg:kz01>
>>> 
>>> root@kz01:~# format
>>> Searching for disks...done
>>> 
>>> AVAILABLE DISK SELECTIONS:
>>>        0. c1d0
>>>           /kz-devices@ff/disk@0
>>>        1. c1d2
>>>           /kz-devices@ff/disk@2
>>> Specify disk (enter its number): ^C
>>> root@kz01:~# zpool create test1 c1d2
>>> root@kz01:~# zpool list
>>> NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
>>> rpool  15.9G  5.35G  10.5G  33%  1.00x  ONLINE  -
>>> test1  9.94G   124K  9.94G   0%  1.00x  ONLINE  -
>>> root@kz01:~#
>>> 
>>> Thinking about it a bit more: since we can already add a zvol to a KVM
>>> zone, it should not be very hard to support this in OS/LX zones as well.
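>>>
>>> For reference, a KVM zone gets its zvol through the disks property of the
>>> vmadm create payload, roughly like this (a sketch; size is in MiB):
>>>
>>>   "disks": [
>>>     {
>>>       "size": 10240,
>>>       "model": "virtio"
>>>     }
>>>   ]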
>>> 
>>> Thanks.
>>> 
>>> Fred
>>> 


