Is this a bug or the result of a wrong configuration? To me it seems like a bug.
I noticed that one of my filesystems, created on top of a pool (acipool), has
grown instead of the base pool.
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/napp-it-0.9b3 27G 19G 8.3G 70% /
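When df output looks wrong, it helps to compare it against ZFS's own accounting: df shows each dataset's view of free space, while the ZFS tools are authoritative. A sketch, assuming the acipool name from the post:

```shell
# df shows per-dataset views of free space; ZFS's own tools are authoritative.
zpool list acipool                              # raw pool size and allocation
zfs list -r -o name,used,avail,refer acipool    # per-dataset usage, including children
zfs get quota,refquota acipool                  # a quota can make one FS appear "grown"
```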
I have installed the package gcc47 and set the path:
PATH=/opt/gcc-4.7.2/bin/:$PATH
When I try to run ./configure it gives me an error like this:
root@omni:/tmp/bwm-ng-0.6# ./configure
checking for a BSD-compatible install... /usr/gnu/bin/install -c
checking whether build
]: /lib/cpp: not found [No such file or directory]
configure:3404: $? = 127
configure: failed program was:
| /* confdefs.h. */
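configure falls back to /lib/cpp only when it cannot find a working preprocessor, so the usual fix is to point it at the gcc47 toolchain explicitly. A sketch, assuming the /opt/gcc-4.7.2 install path mentioned above:

```shell
# Tell configure where the compiler and preprocessor live, instead of
# letting it fall back to the nonexistent /lib/cpp.
export PATH=/opt/gcc-4.7.2/bin:$PATH
export CC=/opt/gcc-4.7.2/bin/gcc
export CPP="/opt/gcc-4.7.2/bin/gcc -E"
./configure
```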
Thanks,
Myk
On Thu, Sep 26, 2013 at 5:41 PM, Theo Schlossnagle je...@omniti.com wrote:
On Thu, Sep 26, 2013 at 8:25 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:
the physical size is 320+500+250+250 = 1320G, but zpool list
is showing 928G.
O_o Any help will be highly appreciated.
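The numbers are actually consistent: raidz treats every member as the size of the smallest disk (250GB here), and zpool list reports the raw pool size, parity included, in binary (GiB) units. A quick check of the arithmetic:

```shell
# Why zpool list shows ~928G instead of 1320G: every raidz member counts
# as 250GB (the smallest disk), and zpool list uses binary units.
raw_gb=$((250 * 4))                               # 4 members x 250GB = 1000 GB raw
raw_gib=$((raw_gb * 1000000000 / 1073741824))     # decimal GB -> binary GiB
echo "${raw_gib}G"                                # prints 931G, roughly the 928G reported
```

The small remaining gap comes from the drives not being exactly 250 decimal gigabytes and from pool labels/reservations.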
Thanks,
On Fri, Sep 27, 2013 at 12:03 AM, Richard Elling
richard.ell...@richardelling.com wrote:
On Sep 26, 2013, at 2:41 AM, Muhammad Yousuf Khan sir...@gmail.com
wrote:
Is this a bug? The raidz2 size is not expanding after replacing the smallest
attached drives. Here is what I mean:
first I had 2x160GB drives and 2x250GB drives, and per the rule I got about
300GB of working space, since raidz2 sizes every member to the smallest drive
and demotes the bigger ones.
Now I have replaced the 2x160GB drives with a 500GB and a 320GB.
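A raidz vdev grows only after every member is at least the new size and expansion has been triggered; it is off by default, so replacing the small drives alone is not enough. A sketch, using the acipool name from elsewhere in the thread and a hypothetical disk name:

```shell
# autoexpand is off by default, so a pool does not grow even after
# every small drive has been replaced with a larger one.
zpool set autoexpand=on acipool
zpool online -e acipool c1t2d0   # -e expands the device to its full size (example disk name)
zpool list acipool               # SIZE should now reflect 4 x the smallest member (250GB here)
```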
Sorry, forgot to add the list.
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
I am just playing with ZFS for learning, and I have a couple of iSCSI and NFS
shares enabled on a zfs pool named acipool.
Whenever I try to export the pool for testing purposes, it says the pool is
busy.
Actually I am testing multiple scenarios in one go:
I have replaced the drives with bigger ones and want to
root sys 0 Sep 23 20:03 cmdnfs
drwxr-xr-x 3 root sys 0 Sep 23 12:14 iscsi
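On the "pool is busy" export: COMSTAR LUs and NFS shares backed by the pool hold it open, so they need to be released first. A sketch with a placeholder GUID (not taken from the post):

```shell
# Release everything holding the pool before exporting it.
zfs unshare -a                    # drop active NFS shares
stmfadm list-lu -v                # find the GUIDs of LUs backed by acipool zvols
stmfadm offline-lu <lu-guid>      # placeholder GUID: offline each such LU
zpool export acipool              # should now succeed
```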
Is there anything I can do? I have invested a whole day in searching
but found no luck. I am very new to OmniOS. Please help.
Thanks,
On Mon, Sep 23, 2013 at 7:00 PM, Muhammad Yousuf Khan sir...@gmail.com wrote:
nope
Delete the LUs?
URL:
http://thread.gmane.org/gmane.os.solaris.opensolaris.zfs/51025/focus=51029
Thanks for your reply. I should also mention that I created the LU via
napp-it, but I fear that if I delete the LU it will destroy the data too,
and I do not want to lose the iSCSI data.
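On the data-loss worry: sbdadm delete-lu removes only COMSTAR's record of the LU, not the backing zvol, and the LU can be re-registered afterwards. A sketch using the zvol path from the earlier message and a placeholder GUID; snapshotting first is still prudent:

```shell
# The LU definition and the backing zvol are separate objects.
zfs snapshot acipool/cmdiscsi/iTarget@before-delete        # safety net
sbdadm delete-lu <lu-guid>                                 # drops the mapping only
sbdadm import-lu /dev/zvol/rdsk/acipool/cmdiscsi/iTarget   # re-registers it, data intact
```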
that it took 3 to 4 minutes to copy that huge data, which seems fine to me.
Same raidz2, but with a protocol change (SCP over SSH), so I think the issue
is with NFS, not with a hardware speed/spec limitation.
Any ideas, please?
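One common cause of NFS being much slower than SCP on the same pool is synchronous writes: NFS clients typically request them, SCP does not. A diagnostic-only sketch (sync=disabled is unsafe for data you care about):

```shell
zfs get sync acipool            # 'standard' honors clients' sync-write requests
zfs set sync=disabled acipool   # TEMPORARY: diagnosis only, unsafe for real data
# ...rerun the NFS copy; if it is suddenly fast, the real fix is a SLOG device
zfs set sync=standard acipool
```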
Thanks,
Myk
On Thu, Sep 12, 2013 at 2:40 AM, Muhammad Yousuf Khan sir
http://docs.oracle.com/cd/E23824_01/html/821-1459/fnnop.html#fnnoq
I am using this link as a reference, and I ran these commands:
zfs create -o mountpoint=none acipool/cmdiscsi
zfs create -V 8G -s acipool/cmdiscsi/iTarget
sbdadm create-lu /dev/zvol/rdsk/acipool/cmdiscsi/iTarget
stmfadm
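The stmfadm line above is cut off; per the Oracle guide linked above, the remaining steps are roughly a view plus the iSCSI target service, sketched here with a placeholder GUID:

```shell
stmfadm add-view <lu-guid>                          # placeholder GUID from sbdadm list-lu
svcadm enable -r svc:/network/iscsi/target:default  # start the COMSTAR iSCSI target service
itadm create-target                                 # create an iSCSI target
stmfadm list-view -l <lu-guid>                      # verify the mapping
```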
I am using raidz2 with 4 drives,
on a Dell 490 with 2x 3.0GHz Xeon processors,
12 GB RAM,
one 40GB HD for OmniOS,
and 4 different-size disks for the raidz2: a set of 2x160GB and 2x250GB.
Everything is working well except VM file transfer performance, hosted on
NFS.
When I SCP from OmniOS to another
. Henson hen...@acm.org wrote:
On Fri, Sep 06, 2013 at 07:36:49PM +0500, Muhammad Yousuf Khan wrote:
root@omni:/# nano
-bash: nano: command not found
What does 'pkg search nano' say?
For example:
# pkg search zoneadm
INDEX      ACTION  VALUE     PACKAGE
basename file usr/sbin
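A sketch of the same flow for nano; the -r flag searches the publisher's repository rather than only installed packages, and the exact package name may vary by publisher:

```shell
pkg search -r nano   # search the remote repository, not just installed packages
pkg install nano     # name may differ (e.g. editor/nano) depending on the publisher
```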
KVM: insufficient hardware support (lacking EPT)
The system is a Dell Precision 490 workstation, and I have been using it with
Debian virtualization (KVM) for years with no issue. It is VT-enabled.
(Note that Linux KVM needs only VT-x, while illumos KVM also requires EPT, a
later feature that these pre-Nehalem Xeons lack, which is what the error says.)
The second message is:
build_devlink_list: readlink failed for /dev/zcons/..
when