[Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread Asiye Yigit
Hello;

I have installed SF 5.1 RP2 on a Solaris 10 system.
I am trying to encapsulate the boot disk, and after that I will mirror it.

vxdisk list shows:

DEVICE       TYPE        DISK     GROUP    STATUS
disk_2       auto        -        -        error
disk_3       auto:none   -        -        online invalid
st2540-0_0   auto:none   -        -        online invalid

When I try to encapsulate disk_2, it says:

Select disk devices to encapsulate:
[<pattern-list>,all,list,q,?] disk_2
  Here is the disk selected.  Output format: [Device_Name]

  disk_2

Continue operation? [y,n,q,?] (default: y)
  You can choose to add this disk to an existing disk group or to
  a new disk group.  To create a new disk group, select a disk group
  name that does not yet exist.

Which disk group [<group>,list,q,?] rootdg

Create a new group named rootdg? [y,n,q,?] (default: y)

Use a default disk name for the disk? [y,n,q,?] (default: y)
  A new disk group will be created named rootdg and the selected
  disks will be encapsulated and added to this disk group with
  default disk names.

  disk_2

Continue with operation? [y,n,q,?] (default: y)
  This disk device is disabled (offline) and cannot be used.
  Output format: [Device_Name]

  disk_2

Hit RETURN to continue.

When I try to bring it online, it says:

Select a disk device to enable [<disk>,list,q,?] disk_2
  VxVM vxdisk ERROR V-5-1-531 Device disk_2: online failed:
  Device path not valid

Enable another device? [y,n,q,?] (default: n)

Does anyone have any ideas?

Best regards;

___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread Christian Gerbrandt
As you can see, disk_2 is showing in 'error' state, but it should show as
'online invalid'. There appears to be a problem with the disk.
Check the status of the disk from the OS, VxVM, and the SAN.
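For example, a few checks along those lines (a sketch; c1t0d0 is an example
device path, substitute your boot disk):

    # Does the OS still see the disk and its label?
    echo | format
    prtvtoc /dev/rdsk/c1t0d0s2

    # Ask VxVM to rescan devices and re-read the disk header
    vxdctl enable
    vxdisk scandisks
    vxdisk list disk_2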


Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread Hudes, Dana
Don't do it.
With a Solaris 10 system such as your T5120, there is no longer a valid reason
to use VERITAS boot disk encapsulation.
Use ZFS. As I recall, the T5120 has hardware mirroring, so you could use that,
or you could use ZFS mirroring. The advantage of hardware mirroring is that it
doesn't come up to the OS -- but a T5120 has enough CPUs to deal with
mirroring. Managing the mirror with ZFS gives you more ready access, via fmadm,
to any disk errors, rather than having them buried behind the RAID controller.
Use Solaris 10, preferably Update 9, and ZFS for your boot disk. This is also
very important for zones and for Live Upgrade.
VERITAS has its advantages in some situations for managing data disks (for
example, raw volumes and Oracle if you have an ODM license), especially with
older Oracle releases (all of which are certified to work on Solaris 10). Live
Upgrade will make ZFS snapshots and clones if you have a ZFS boot disk; if the
boot disk is VERITAS-encapsulated, it will first unencapsulate the boot slice.

VERITAS boot encapsulation also lacks a mirrored dump device: since Vx doesn't
have the API for dump, you have to give it the underlying swap slice. Lose that
disk and you lose your dump device. Vx also requires that swap be a slice, so
it is fixed in size until you manually grow that slice -- if you left room on
your root disk for that operation. ZFS root, by contrast, uses a zvol for dump
and a zvol for swap. They are sparse devices that use space only when needed.
Of course, that means you can fill your entire root disk and leave nothing for
dump or swap -- so you could instead create them as regular zvols with
nailed-up space, which you can shrink and grow manually as desired, without
worrying that you left no room in your disk layout.

Boot encapsulation was the thing to do on Solaris 8 and 9. Not 10.
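A minimal sketch of the ZFS-root approach (pool name, device names, and sizes
are examples, not taken from this thread):

    # Mirror the root pool onto a second disk, then wait for the resilver
    zpool attach rpool c1t0d0s0 c1t1d0s0
    zpool status rpool

    # On SPARC, make the new half of the mirror bootable
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c1t1d0s0

    # "Nailed-up" swap and dump zvols; both can be resized later
    zfs create -V 8G -b 8k rpool/swap2
    swap -a /dev/zvol/dsk/rpool/swap2
    zfs set volsize=4G rpool/dump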


Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread DeMontier, Frank
By default, the upgrade changes the naming scheme from osn (OS native) to
ebn (enclosure based). Additionally, disk_0 will not necessarily be
c0t0d0s2; it could be c0t1d0s2. Run the following command and see if this
clears up some of the confusion:

vxddladm set namingscheme=osn persistence=yes lowercase=yes use_avid=yes

Hope this helps. Good luck!
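After switching the scheme, a rescan makes the change visible; vxdisk's -e
option shows the OS-native device name next to the VxVM name (a hedged
follow-up; column layout varies by release):

    # Re-read the device list under the new naming scheme
    vxdctl enable
    vxdisk scandisks

    # Compare VxVM names with the OS-native device names
    vxdisk -e list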

Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread Hudes, Dana
Yes, it is still supported. Veritas has to support the same features on any
platform; that's part of the point of Veritas.
Go right ahead and do your experiment. While you're at it, you could dig out
the procedure for a more "pure" Veritas disk (see the sketch below).
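That procedure usually amounts to mirroring the encapsulated root onto a
cleanly initialized VxVM disk and then retiring the original. A hedged sketch
only (disk and media names are examples; check the documented procedure for
your release):

    # Initialize the second internal disk as a sliced VxVM disk
    vxdisksetup -i disk_3 format=sliced

    # Add it to the boot disk group and mirror the root volumes onto it
    vxdg -g rootdg adddisk rootmir=disk_3
    /etc/vx/bin/vxrootmir rootmir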

Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread William Havey
Since you're following the Symantec suggestions, note that they state the
boot disk must have:

- Two free partitions.
- 2048 consecutive free sectors.
- Enclosure-based naming (EBN) not implemented (pre 5.1).
- Partition 2 spanning the whole device, with no defined file system.

You've still got to deal with the error state; the checks below may help
verify the prerequisites.
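A quick way to check the first two requirements and the naming scheme (the
device path is an example):

    # Look for two unused slices and >= 2048 unallocated sectors in the VTOC
    prtvtoc /dev/rdsk/c0t0d0s2

    # Confirm which naming scheme VxVM is currently using
    vxddladm get namingscheme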

Bill

Re: [Veritas-vx] couldn't encapsulate boot disk

2010-10-06 Thread Asiye Yigit
Yes,
but it is in error state. I think I will open a case with Symantec support.

Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue

2010-10-06 Thread Venkata Sreenivasa Rao Nagineni
Hi Sebastien,

In your first mail you mentioned that you are using MPxIO to control the XP24K
array. Why are you using MPxIO here?

Thanks,
Venkata Sreenivasarao Nagineni,
Symantec

 -Original Message-
 From: veritas-vx-boun...@mailman.eng.auburn.edu [mailto:veritas-vx-
 boun...@mailman.eng.auburn.edu] On Behalf Of Sebastien DAUBIGNE
 Sent: Wednesday, October 06, 2010 9:32 AM
 To: undisclosed-recipients
 Cc: Veritas-vx@mailman.eng.auburn.edu
 Subject: Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue
 
   Hi,
 
 I am coming back to my dmp_fast_recovery issue (VxDMP fails the path before
 MPxIO gets a chance to fail over to the alternate path).
 As stated previously, I am running 5.0GA, and this tunable is not
 supported in that release. However, I still don't know whether VxVM 5.0GA
 silently bypasses the MPxIO stack for error recovery.

 Now I am trying to determine whether upgrading to MP3 will resolve this
 issue (which occurred rarely).

 Could anyone (maybe Joshua?) explain whether the behaviour of 5.0GA without
 the tunable is functionally identical to dmp_fast_recovery=0 or
 dmp_fast_recovery=1? Maybe the mechanism was implemented in 5.0 without the
 option to disable it (this could explain my issue)?
 
 Joshua, you mentioned another tuneable for 5.0, but looking at the list I
 can't identify the corresponding tunable:
 
   vxdmpadm gettune all
 Tunable                    Current Value  Default Value
 -------------------------  -------------  -------------
 dmp_failed_io_threshold            57600          57600
 dmp_retry_count                        5              5
 dmp_pathswitch_blks_shift             11             11
 dmp_queue_depth                       32             32
 dmp_cache_open                        on             on
 dmp_daemon_count                      10             10
 dmp_scsi_timeout                      30             30
 dmp_delayq_interval                   15             15
 dmp_path_age                           0            300
 dmp_stat_interval                      1              1
 dmp_health_time                        0             60
 dmp_probe_idle_lun                    on             on
 dmp_log_level                          4              1
 
 Cheers.
 
 
 
 On 16/09/2010 16:50, Joshua Fielden wrote:
  dmp_fast_recovery is a mechanism by which we bypass the sd/scsi stack
 and send path inquiry/status CDBs directly from the HBA in order to
 bypass long SCSI queues and recover paths faster. With a TPD (third-
 party driver) such as MPxIO, bypassing the stack means we bypass the
 TPD completely, and interactions such as this can happen. The vxesd
 (event-source daemon) is another 5.0/MP2 backport addition that's moot
 in the presence of a TPD.
 
   From your modinfo, you're not actually running MP3. This technote
 (http://seer.entsupport.symantec.com/docs/327057.htm) isn't exactly
 your scenario, but looking for partially-installed pkgs is a good start
 to getting your server correctly installed, then the tuneable should
 work -- very early 5.0 versions had a differently-named tuneable I
 can't find in my mail archive ATM.
 
  Cheers,
 
  Jf
 
  -Original Message-
  From: veritas-vx-boun...@mailman.eng.auburn.edu [mailto:veritas-vx-
 boun...@mailman.eng.auburn.edu] On Behalf Of Sebastien DAUBIGNE
  Sent: Thursday, September 16, 2010 7:41 AM
  To: Veritas-vx@mailman.eng.auburn.edu
  Subject: Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue
 
 Thank you Victor and William, it seems to be a very good lead.
 
  Unfortunately, this tunable seems not to be supported in the VxVM
  version installed on my system :
 
  vxdmpadm gettune dmp_fast_recovery
  VxVM vxdmpadm ERROR V-5-1-12015  Incorrect tunable
  vxdmpadm gettune [tunable name]
  Note - Tunable name can be dmp_failed_io_threshold, dmp_retry_count,
  dmp_pathswitch_blks_shift, dmp_queue_depth, dmp_cache_open,
  dmp_daemon_count, dmp_scsi_timeout, dmp_delayq_interval,
 dmp_path_age,
  or dmp_stat_interval
 
 Something is odd, because my version is 5.0 MP3 Solaris SPARC, and according
 to http://seer.entsupport.symantec.com/docs/316981.htm this tunable should
 be available.
 
  modinfo | grep -i vx
 38 7846a000  3800e 288   1  vxdmp (VxVM 5.0-2006-05-11a: DMP Drive)
 40 784a4000 334c40 289   1  vxio (VxVM 5.0-2006-05-11a I/O driver)
 42 783ec71ddf8 290   1  vxspec (VxVM 5.0-2006-05-11a control/st)
 296 78cfb0a2c6b 291   1  vxportal (VxFS 5.0_REV-5.0A55_sol portal )
  297 78d6c000 1b9d4f   8   1  vxfs (VxFS 5.0_REV-5.0A55_sol SunOS 5)
  298 78f18000   a270 292   1  fdd (VxQIO 5.0_REV-5.0A55_sol Quick )
 
 
 
 
 
 On 16/09/2010 12:15, Victor Engle wrote:
  Which version of Veritas? Version 4/2MP2 and version 5.x introduced a
  feature called DMP fast recovery. It was probably supposed to be called
  DMP fast fail, but recovery sounds better. It is supposed to fail suspect 

Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue

2010-10-06 Thread Ashish Yajnik
MPxIO with VxVM is only supported with Sun storage. If you run into problems 
with MPxIO and SF on XP24K then support will not be able to help you. I would 
recommend using DMP with XP24K.

Ashish
--
Sent using BlackBerry

Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue

2010-10-06 Thread Victor Engle
This is absolutely false!

MPxIO is an excellent multipathing solution and is supported by all
major storage vendors, including HP. The issue discussed in this
thread has to do with improper behavior of DMP when multipathing is
managed by a native layer like MPxIO.

Storage and OS vendors have no motivation to lock you into a Veritas solution.

Or, Ashish, are you saying that Symantec is locking Symantec
customers into DMP? Hitachi, EMC, NetApp and HP all have supported
configurations which include VxVM and native OS multipathing stacks.

Thanks,
Vic

Re: [Veritas-vx] Solaris-SFS / MPxIO / VxVM failover issue

2010-10-06 Thread Christian Gerbrandt
We support several third-party multipathing solutions, like MPxIO or EMC's
PowerPath.
However, MPxIO is only supported on Sun-branded storage.
DMP has also been known to outperform other solutions in certain
configurations.

When third-party multipathing is in use, DMP falls back into TPD mode (Third
Party Driver) and lets the underlying multipathing do its job.
That's when you see just a single disk in VxVM even though you know you have
more than one path per disk.

I would recommend installing the 5.0 MP3 RP4 patch and then checking again
whether MPxIO is still misbehaving.
Or, ideally, switch over to DMP.
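A few commands to see how DMP classifies the devices and, on a release that
has the tunable, to check or disable fast recovery (a sketch; the dmpnodename
is an example, and boolean tunable values may be on/off rather than 0/1
depending on the release):

    # Enclosures, and the paths DMP sees for one DMP node
    vxdmpadm listenclosure all
    vxdmpadm getsubpaths dmpnodename=disk_2

    # Check and, if desired, turn off fast-recovery path probing
    vxdmpadm gettune dmp_fast_recovery
    vxdmpadm settune dmp_fast_recovery=off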