[ovirt-users] iSCSI Storage Domain Issues - Please Help

2023-12-20 Thread Matthew J Black
Hi Guys & Gals,

So I've been researching this issue online for a couple of days now and I can't 
seem to find a solution - so I'm hoping you kind people here can help.

We're running oVirt (on Rocky 8, at the moment) with an iSCSI back-end provided 
by Ceph (Quincy, for the record). Everything on the Ceph end looks A-OK.

However, none of the oVirt Hosts (and therefore the VMs) can connect to the 
Ceph iSCSI RBD Images (oVirt Storage Domains), and only one or two of the Hosts 
can log into the Ceph iSCSI Target - the others throw a "Failed to setup iSCSI 
subsystem" error.

All of the existing iSCSI Storage Domains are in Maintenance mode, and when I 
try to do *anything* to them the logs spit out a "Storage domain does not 
exist:" message.

I also cannot create a new iSCSI Storage Domain for a new Ceph pool - again, 
oVirt simply won't/can't see it, even though it is clearly visible in the iSCSI 
section of the Ceph Dashboard (and in gwcli on the Ceph Nodes).

All of this started happening after I ran an update of oVirt - including an 
"engine-setup" with a full engine-vacuum. Nothing has changed on the Ceph end.

So I'm looking for help on 2 issues, which may or may not be related:

1) Is there a way to "force" oVirt Hosts to log into iSCSI targets? This would 
mean all of the oVirt Hosts are connected to all of the Ceph iSCSI Gateways 
(see the first sketch below).

2) I'm thinking that *somehow* the existing "Storage Domains" registered in 
oVirt have become orphaned, so they need to be "cleaned up" - is there a CLI 
way to do this? I don't mind digging into SQL, as I'm an old SQL engineer/admin 
from way back (see the second sketch below). Thoughts on how to do this - and 
on whether it should be done at all?
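
For (1), assuming a plain open-iscsi setup on the hosts, the manual discovery 
and login I have in mind looks roughly like this (portal address and IQN are 
placeholders, not our real values):

  # run on each oVirt host; portal and IQN are examples
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
  iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
    -p 192.168.1.10:3260 --login
  iscsiadm -m session -P 1    # verify the session actually came up

For (2), before touching anything I'd start with a read-only look at what the 
engine thinks it has - a sketch, assuming the default database name "engine" 
and the storage_domains view (names from memory, so double-check):

  # on the engine machine - inspect only, don't DELETE by hand
  sudo -u postgres psql engine -c "SELECT id, storage_name FROM storage_domains;"

(If they really are orphaned, I gather the supported route is "Destroy" in the 
Admin Portal and then re-importing the domain, rather than raw SQL.)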

Or should I simply detach the Storage Domain images from the relevant VMs and 
destroy the Storage Domains, recreate them (once I can get the oVirt Hosts to 
log back into the Ceph iSCSI Gateways), and then reattach the relevant images 
to the relevant VMs? I mean, after all, the data is good and available on the 
Ceph SAN, so there is only a little risk (as far as I can see) - but there are 
a *hell* of a lot of VMs to do this to  :-)

Anyway, any and all help, suggestions, gotchas, etc, etc, etc, are welcome.  :-)

Cheers

Dulux-Oz


[ovirt-users] iSCSI storage domain with no switch

2019-10-10 Thread MIMMIK _
Is it possible to have a storage domain on an iSCSI LUN if the storage is 
connected to the cluster's physical nodes with no switch and only direct 
connections?

This is the scenario:
- we have two physical nodes with 2 iSCSI ports each.
- the storage has 2 controllers, 2 iSCSI ports on each controller
- each server is directly connected with a port to a storage port on a 
controller and to another storage port on the other controller.

Do you think this is possible, maybe with some trick or workaround? Thanks.
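
For reference, my understanding is that direct-connected multipath like this 
is wired up by binding one open-iscsi iface to each NIC - a rough sketch, with 
hypothetical interface names and portal addresses:

  # create one iSCSI iface per NIC (names are examples)
  iscsiadm -m iface -I direct0 --op new
  iscsiadm -m iface -I direct0 --op update -n iface.net_ifacename -v ens1f0
  iscsiadm -m iface -I direct1 --op new
  iscsiadm -m iface -I direct1 --op update -n iface.net_ifacename -v ens1f1

  # discover and log in through each direct link
  iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260 -I direct0
  iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260 -I direct1
  iscsiadm -m node -L all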

Regards


Re: [ovirt-users] iSCSI storage domain and multipath when adding node

2017-06-27 Thread Gianluca Cecchi
On Mon, Apr 10, 2017 at 3:06 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Apr 10, 2017 at 2:44 PM, Ondrej Svoboda 
> wrote:
>
>> Yes, this is what struck me about your situation. Will you be able to
>> find relevant logs regarding multipath configuration, in which we would see
>> when (or even why) the third connection was created on the first node, and
>> only one connection on the second?
>>
>> On Mon, Apr 10, 2017 at 2:17 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Mon, Apr 10, 2017 at 2:12 PM, Ondrej Svoboda 
>>> wrote:
>>>
 Gianluca,

 I can see that the workaround you describe here (completing the multipath
 configuration in the CLI) fixes an inconsistency in the observed iSCSI
 sessions. I think it is a shortcoming in oVirt that you had to resort to
 manual configuration. Could you file a bug about this? Ideally, following the
 bug template presented to you by Bugzilla, i.e. "Expected: two iSCSI
 sessions", "Got: one on the first node ... one on the second node".

 Edy, Martin, do you think you could help out here?

 Thanks,
 Ondra

>>>
>>> Ok, this evening I'm going to open a bugzilla for that.
>>> Please keep in mind that on the already configured node (where, before
>>> node addition, there were two connections in place with multipath), the
>>> node addition actually generates a third connection, added to the existing
>>> two, using "default" as the iSCSI interface (clearly seen if I run
>>> "iscsiadm -m session -P1") 
>>>
>>> Gianluca
>>>
>>>
>>
>>
> vdsm log of the already configured host is here for that day:
> https://drive.google.com/file/d/0BwoPbcrMv8mvQzdCUmtIT1NOT2c/view?usp=sharing
>
> Installation / configuration of the second node happened between 11:30 AM
> and 01:30 PM on the 6th of April.
>
> Around 12:29 you will find:
>
> 2017-04-06 12:29:05,832+0200 INFO  (jsonrpc/7) [dispatcher] Run and
> protect: getVGInfo, Return response: {'info': {'state': 'OK', 'vgsize':
> '1099108974592', 'name': '5ed04196-87f1-480e-9fee-9dd450a3b53b',
> 'vgfree': '182536110080', 'vgUUID': 'rIENae-3NLj-o4t8-GVuJ-ZKKb-ksTk-qBkMrE',
> 'pvlist': [{'vendorID': 'EQLOGIC', 'capacity': '1099108974592', 'fwrev':
> '', 'pe_alloc_count': '6829', 'vgUUID': 
> 'rIENae-3NLj-o4t8-GVuJ-ZKKb-ksTk-qBkMrE',
> 'pathlist': [{'connection': '10.10.100.9', 'iqn':
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'p1p1.100'}, {'connection':
> '10.10.100.9', 'iqn': 
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'p1p2'}, {'connection':
> '10.10.100.9', 'iqn': 
> 'iqn.2001-05.com.equallogic:4-771816-e5d0dfb59-1c9b240297958d53-ovsd3910',
> 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'pe_count':
> '8189', 'discard_max_bytes': 15728640, 'pathstatus': [{'type': 'iSCSI',
> 'physdev': 'sde', 'capacity': '1099526307840', 'state': 'active', 'lun':
> '0'}, {'type': 'iSCSI', 'physdev': 'sdf', 'capacity': '1099526307840',
> 'state': 'active', 'lun': '0'}, {'type': 'iSCSI', 'physdev': 'sdg',
> 'capacity': '1099526307840', 'state': 'active', 'lun': '0'}], 'devtype':
> 'iSCSI', 'discard_zeroes_data': 1, 'pvUUID': 
> 'g9pjI0-oifQ-kz2O-0Afy-xdnx-THYD-eTWgqB',
> 'serial': 'SEQLOGIC_100E-00_64817197B5DFD0E5538D959702249B1C', 'GUID': '
> 364817197b5dfd0e5538d959702249b1c', 'devcapacity': '1099526307840',
> 'productID': '100E-00'}], 'type': 3, 'attr': {'allocation': 'n', 'partial':
> '-', 'exported': '-', 'permission': 'w', 'clustered': '-', 'resizeable':
> 'z'}}} (logUtils:54)
>
> and around 12:39 you will find
>
> 2017-04-06 12:39:11,003+0200 ERROR (check/loop) [storage.Monitor] Error
> checking path /dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata
> (monitor:485)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/monitor.py", line 483, in _pathChecked
> delay = result.delay()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line
> 368, in delay
> raise exception.MiscFileReadException(self.path, self.rc, self.err)
> MiscFileReadException: Internal file read failure:
> ('/dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata', 1,
> bytearray(b"/usr/bin/dd: error reading 
> \'/dev/5ed04196-87f1-480e-9fee-9dd450a3b53b/metadata\':
> Input/output error\n0+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.000234164 s, 0.0 kB/s\n"))
> 2017-04-06 12:39:11,020+0200 INFO  (check/loop) [storage.Monitor] Domain
> 5ed04196-87f1-480e-9fee-9dd450a3b53b became INVALID (monitor:456)
>
> that I think corresponds to the moment when I executed "iscsiadm -m
> session -u" and had the automatic remediation of the correctly defined paths
>
> Gianluca
>

So I come back here because I have an "orthogonal" action with the same
effect.

I already have in place the same 2 oVirt hosts using one 4TB iSCSI LUN.
With the 

Re: [ovirt-users] iSCSI storage domain and multipath when adding node

2017-04-10 Thread Ondrej Svoboda
Yes, this is what struck me about your situation. Will you be able to find
relevant logs regarding multipath configuration, in which we would see when
(or even why) the third connection was created on the first node, and only
one connection on the second?

On Mon, Apr 10, 2017 at 2:17 PM, Gianluca Cecchi 
wrote:

> On Mon, Apr 10, 2017 at 2:12 PM, Ondrej Svoboda 
> wrote:
>
>> Gianluca,
>>
>> I can see that the workaround you describe here (completing the multipath
>> configuration in the CLI) fixes an inconsistency in the observed iSCSI
>> sessions. I think it is a shortcoming in oVirt that you had to resort to
>> manual configuration. Could you file a bug about this? Ideally, following the
>> bug template presented to you by Bugzilla, i.e. "Expected: two iSCSI
>> sessions", "Got: one on the first node ... one on the second node".
>>
>> Edy, Martin, do you think you could help out here?
>>
>> Thanks,
>> Ondra
>>
>
> Ok, this evening I'm going to open a bugzilla for that.
> Please keep in mind that on the already configured node (where, before node
> addition, there were two connections in place with multipath), the node
> addition actually generates a third connection, added to the existing two,
> using "default" as the iSCSI interface (clearly seen if I run "iscsiadm -m
> session -P1") 
>
> Gianluca
>
>


Re: [ovirt-users] iSCSI storage domain and multipath when adding node

2017-04-10 Thread Gianluca Cecchi
On Mon, Apr 10, 2017 at 2:12 PM, Ondrej Svoboda  wrote:

> Gianluca,
>
> I can see that the workaround you describe here (completing the multipath
> configuration in the CLI) fixes an inconsistency in the observed iSCSI
> sessions. I think it is a shortcoming in oVirt that you had to resort to
> manual configuration. Could you file a bug about this? Ideally, following the
> bug template presented to you by Bugzilla, i.e. "Expected: two iSCSI
> sessions", "Got: one on the first node ... one on the second node".
>
> Edy, Martin, do you think you could help out here?
>
> Thanks,
> Ondra
>

Ok, this evening I'm going to open a bugzilla for that.
Please keep in mind that on the already configured node (where, before node
addition, there were two connections in place with multipath), the node
addition actually generates a third connection, added to the existing two,
using "default" as the iSCSI interface (clearly seen if I run "iscsiadm -m
session -P1") 

Gianluca


Re: [ovirt-users] iSCSI storage domain and multipath when adding node

2017-04-10 Thread Ondrej Svoboda
Gianluca,

I can see that the workaround you describe here (completing the multipath
configuration in the CLI) fixes an inconsistency in the observed iSCSI
sessions. I think it is a shortcoming in oVirt that you had to resort to
manual configuration. Could you file a bug about this? Ideally, following the
bug template presented to you by Bugzilla, i.e. "Expected: two iSCSI
sessions", "Got: one on the first node ... one on the second node".

Edy, Martin, do you think you could help out here?

Thanks,
Ondra

On Fri, Apr 7, 2017 at 5:21 PM, Gianluca Cecchi 
wrote:

> Hello,
> my configuration is what is described here:
> http://lists.ovirt.org/pipermail/users/2017-March/080992.html
>
> So I'm using iSCSI multipath and not bonding - can anyone reproduce?
>
>
> Initial situation is only one node configured and active with some VMs
>
> I go and configure a second node; it tries to activate, but the networks are
> not all mapped yet and so it goes to non-operational.
> I set up all the networks and activate the node
>
> It happens that:
> - on the first node, where I currently have 2 iSCSI connections and
> 2 multipath lines (with p1p1.100 and p1p2), a new iSCSI SID is instantiated
> using interface "default", and in multipath -l output I now see 3 lines
>
> - on the newly added node I only see 1 iSCSI SID using interface default
>
> My way to solve the situation was to go inside the iSCSI multipath section,
> do nothing, and save the same config
>
> then, brutally, on the first node:
> iscsiadm -m session -u
> --> all iSCSI sessions are closed
> after a while I see the original 2 connections recovered, with the
> correct interface names used
>
> - on the second node:
> iscsiadm -m session -u
> --> the only session is closed
> nothing happens
> if I set the node to maintenance and then activate it
> --> the 2 correct iSCSI sessions are activated...
>
> Thanks
> Gianluca
>
>


[ovirt-users] iSCSI storage domain and multipath when adding node

2017-04-07 Thread Gianluca Cecchi
Hello,
my configuration is what is described here:
http://lists.ovirt.org/pipermail/users/2017-March/080992.html

So I'm using iSCSI multipath and not bonding - can anyone reproduce?


Initial situation is only one node configured and active with some VMs

I go and configure a second node; it tries to activate, but the networks are
not all mapped yet and so it goes to non-operational.
I set up all the networks and activate the node

It happens that:
- on the first node, where I currently have 2 iSCSI connections and
2 multipath lines (with p1p1.100 and p1p2), a new iSCSI SID is instantiated
using interface "default", and in multipath -l output I now see 3 lines

- on the newly added node I only see 1 iSCSI SID using interface default

My way to solve the situation was to go inside the iSCSI multipath section,
do nothing, and save the same config

then, brutally, on the first node:
iscsiadm -m session -u
--> all iSCSI sessions are closed
after a while I see the original 2 connections recovered, with the
correct interface names used

- on the second node:
iscsiadm -m session -u
--> the only session is closed
nothing happens
if I set the node to maintenance and then activate it
--> the 2 correct iSCSI sessions are activated...
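
For the record, a quick way to compare the sessions on the two nodes - a
sketch; the grep patterns may need adjusting for your iscsiadm version:

  # list each session's target and the iface it uses
  iscsiadm -m session -P 1 | grep -E 'Target:|Iface Name:'
  # and the resulting multipath paths
  multipath -ll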

Thanks
Gianluca


Re: [Users] iSCSI storage domain.

2014-01-17 Thread Elad Ben Aharon
Indeed,
Using the ovirt-engine APIs you can edit your iSCSI storage domain and extend
it by adding physical volumes from your shared storage (the process is managed
by ovirt-engine; the actual actions on your storage are done by your host,
which has VDSM installed on it).
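
A rough sketch of that edit against the REST API - the host, UUID and LUN ID
below are placeholders, and /ovirt-engine/api is the modern v4 path (older 3.x
engines exposed it as /api):

  # PUT the storage domain back with the extra LUN listed, to extend it
  curl -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' -X PUT \
    -d '<storage_domain><storage><logical_units>
          <logical_unit id="3600144f09dbd050000000000000001"/>
        </logical_units></storage></storage_domain>' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID'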

- Original Message -
From: Hans Emmanuel hansemman...@gmail.com
To: Elad Ben Aharon ebena...@redhat.com
Cc: users@ovirt.org
Sent: Friday, January 17, 2014 6:37:56 AM
Subject: Re: [Users] iSCSI storage domain.

Thanks for the reply .


Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?


On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon ebena...@redhat.comwrote:

 Hi,

 Both storage types are suitable for a production setup.
 As for your second question -
 manual LVM resizing is not recommended - why not use RHEVM for that?

 - Original Message -
 From: Hans Emmanuel hansemman...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, January 16, 2014 3:30:39 PM
 Subject: Re: [Users] iSCSI storage domain.



 Could anyone please give valuable suggestions?
 On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com  wrote:



 Hi all,

 I would like to get some comparison on NFS & iSCSI storage domains. Which
 one is more suitable for a production setup? I am planning to use LVM-backed
 DRBD replication. And also, is it possible to expand an iSCSI storage domain
 by simply resizing the backend LVM?

 --
 Hans Emmanuel

 NOthing to FEAR but something to FEEL..






-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*


Re: [Users] iSCSI storage domain.

2014-01-17 Thread Hans Emmanuel
Thanks Elad for explaining .


On Fri, Jan 17, 2014 at 2:55 PM, Elad Ben Aharon ebena...@redhat.comwrote:

 Indeed,
 Using the ovirt-engine APIs you can edit your iSCSI storage domain and extend
 it by adding physical volumes from your shared storage (the process is
 managed by ovirt-engine; the actual actions on your storage are done by
 your host, which has VDSM installed on it).

 - Original Message -
 From: Hans Emmanuel hansemman...@gmail.com
 To: Elad Ben Aharon ebena...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, January 17, 2014 6:37:56 AM
 Subject: Re: [Users] iSCSI storage domain.

 Thanks for the reply .


 Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?


 On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon ebena...@redhat.com
 wrote:

  Hi,
 
  Both storage types are suitable for a production setup.
  As for your second question -
  manual LVM resizing is not recommended - why not use RHEVM for that?
 
  - Original Message -
  From: Hans Emmanuel hansemman...@gmail.com
  To: users@ovirt.org
  Sent: Thursday, January 16, 2014 3:30:39 PM
  Subject: Re: [Users] iSCSI storage domain.
 
 
 
  Could anyone please give valuable suggestions?
  On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com 
 wrote:
 
 
 
  Hi all,
 
  I would like to get some comparison on NFS & iSCSI storage domains. Which
  one is more suitable for a production setup? I am planning to use LVM-backed
  DRBD replication. And also, is it possible to expand an iSCSI storage domain
  by simply resizing the backend LVM?
 
  --
  Hans Emmanuel
 
  NOthing to FEAR but something to FEEL..
 
 
 



 --
 *Hans Emmanuel*

 *NOthing to FEAR but something to FEEL..*




-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*


Re: [Users] iSCSI storage domain.

2014-01-17 Thread Gianluca Cecchi
On Fri, Jan 17, 2014 at 10:25 AM, Elad Ben Aharon  wrote:
 Indeed,
 Using the ovirt-engine APIs you can edit your iSCSI storage domain and extend
 it by adding physical volumes from your shared storage (the process is managed
 by ovirt-engine; the actual actions on your storage are done by your host,
 which has VDSM installed on it).

 - Original Message -
 From: Hans Emmanuel hansemman...@gmail.com
 To: Elad Ben Aharon ebena...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, January 17, 2014 6:37:56 AM
 Subject: Re: [Users] iSCSI storage domain.

 Thanks for the reply .


 Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?


 On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon ebena...@redhat.comwrote:

 Hi,

 Both storage types are suitable for a production setup.
 As for your second question -
 manual LVM resizing is not recommended - why not use RHEVM for that?

 - Original Message -
 From: Hans Emmanuel hansemman...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, January 16, 2014 3:30:39 PM
 Subject: Re: [Users] iSCSI storage domain.



 Could anyone please give valuable suggestions?
 On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com  wrote:



 Hi all,

 I would like to get some comparison on NFS & iSCSI storage domains. Which
 one is more suitable for a production setup? I am planning to use LVM-backed
 DRBD replication. And also, is it possible to expand an iSCSI storage domain
 by simply resizing the backend LVM?

 --
 Hans Emmanuel


I think one option should be to provide the user, if not already
present/tested/supported, with the opportunity to resize the LUN on the
storage array and then run a rescan from oVirt to see the new size and use it
without disruption of service.
This would also avoid LUN proliferation on storage arrays, which in general
provide storage to many sources other than oVirt itself.
Gianluca


Re: [Users] iSCSI storage domain.

2014-01-17 Thread Gianluca Cecchi
On Fri, Jan 17, 2014 at 10:32 AM, Gianluca Cecchi wrote:


 I think one option should be to provide the user, if not already
 present/tested/supported, with the opportunity to resize the LUN on the
 storage array and then run a rescan from oVirt to see the new size and use it
 without disruption of service.
 This would also avoid LUN proliferation on storage arrays, which in general
 provide storage to many sources other than oVirt itself.
 Gianluca

So something like this for vSphere:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017662

Gianluca


Re: [Users] iSCSI storage domain.

2014-01-17 Thread Itamar Heim

On 01/17/2014 11:32 AM, Gianluca Cecchi wrote:

On Fri, Jan 17, 2014 at 10:25 AM, Elad Ben Aharon  wrote:

Indeed,
Using the ovirt-engine APIs you can edit your iSCSI storage domain and extend
it by adding physical volumes from your shared storage (the process is managed
by ovirt-engine; the actual actions on your storage are done by your host,
which has VDSM installed on it).

- Original Message -
From: Hans Emmanuel hansemman...@gmail.com
To: Elad Ben Aharon ebena...@redhat.com
Cc: users@ovirt.org
Sent: Friday, January 17, 2014 6:37:56 AM
Subject: Re: [Users] iSCSI storage domain.

Thanks for the reply .


Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?


On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon ebena...@redhat.comwrote:


Hi,

Both storage types are suitable for a production setup.
As for your second question -
manual LVM resizing is not recommended - why not use RHEVM for that?

- Original Message -
From: Hans Emmanuel hansemman...@gmail.com
To: users@ovirt.org
Sent: Thursday, January 16, 2014 3:30:39 PM
Subject: Re: [Users] iSCSI storage domain.



Could anyone please give valuable suggestions?
On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com  wrote:



Hi all,

I would like to get some comparison on NFS & iSCSI storage domains. Which
one is more suitable for a production setup? I am planning to use LVM-backed
DRBD replication. And also, is it possible to expand an iSCSI storage domain
by simply resizing the backend LVM?

--
Hans Emmanuel



I think one option should be to provide the user, if not already
present/tested/supported, with the opportunity to resize the LUN on the
storage array and then run a rescan from oVirt to see the new size and use it
without disruption of service.


3.4 adds this:
Bug 961532 - [RFE] Update storage domain's LUNs sizes in DB after lun resize


please note:
- you can extend a *storage domain* by adding an extra LUN to it from
  ovirt-engine.
- you *cannot* resize the LUN from ovirt-engine, but you can do that
  from the storage side, which in 3.4 will be reported by ovirt-engine
  via bug 961532

and on the general question - what I like about NFS is that it's way more
simple to work with, troubleshoot, inject and recover on issues.


also note, on iSCSI, LVM has issues at scale once you pass several
hundred disks or snapshots (per storage domain).



This would also avoid LUN proliferation on storage arrays, which in general
provide storage to many sources other than oVirt itself.
Gianluca


Re: [Users] iSCSI storage domain.

2014-01-16 Thread Hans Emmanuel
Could anyone please give valuable suggestions?
On 16-Jan-2014 12:28 PM, Hans Emmanuel hansemman...@gmail.com wrote:

 Hi all,

 I would like to get some comparison on NFS & iSCSI storage domains. Which
 one is more suitable for a production setup? I am planning to use LVM-backed
 DRBD replication. And also, is it possible to expand an iSCSI storage domain
 by simply resizing the backend LVM?

 --
 *Hans Emmanuel*

 *NOthing to FEAR but something to FEEL..*




Re: [Users] iSCSI storage domain.

2014-01-16 Thread Elad Ben Aharon
Hi, 

Both storage types are suitable for a production setup.
As for your second question -
manual LVM resizing is not recommended - why not use RHEVM for that?

- Original Message -
From: Hans Emmanuel hansemman...@gmail.com
To: users@ovirt.org
Sent: Thursday, January 16, 2014 3:30:39 PM
Subject: Re: [Users] iSCSI storage domain.



Could anyone please give valuable suggestions?
On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com  wrote: 



Hi all, 

I would like to get some comparison on NFS & iSCSI storage domains. Which one
is more suitable for a production setup? I am planning to use LVM-backed DRBD
replication. And also, is it possible to expand an iSCSI storage domain by
simply resizing the backend LVM?

-- 
Hans Emmanuel 

NOthing to FEAR but something to FEEL.. 




Re: [Users] iSCSI storage domain.

2014-01-16 Thread Hans Emmanuel
Thanks for the reply .


Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?


On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon ebena...@redhat.comwrote:

 Hi,

 Both storage types are suitable for a production setup.
 As for your second question -
 manual LVM resizing is not recommended - why not use RHEVM for that?

 - Original Message -
 From: Hans Emmanuel hansemman...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, January 16, 2014 3:30:39 PM
 Subject: Re: [Users] iSCSI storage domain.



 Could anyone please give valuable suggestions?
 On 16-Jan-2014 12:28 PM, Hans Emmanuel  hansemman...@gmail.com  wrote:



 Hi all,

 I would like to get some comparison on NFS & iSCSI storage domains. Which
 one is more suitable for a production setup? I am planning to use LVM-backed
 DRBD replication. And also, is it possible to expand an iSCSI storage domain
 by simply resizing the backend LVM?

 --
 Hans Emmanuel

 NOthing to FEAR but something to FEEL..






-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*


[Users] iSCSI storage domain.

2014-01-15 Thread Hans Emmanuel
Hi all,

I would like to get some comparison on NFS & iSCSI storage domains. Which
one is more suitable for a production setup? I am planning to use LVM-backed
DRBD replication. And also, is it possible to expand an iSCSI storage domain
by simply resizing the backend LVM?

-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*


[Users] iSCSI storage domain - expand disk

2013-04-12 Thread martin.kralicek
Hi,

Is there any possible way to expand an iSCSI disk on engine 3.2.1?
Simple scenario: I have already created a 100GB iSCSI disk and I can use it as
the master storage, but now I have expanded it via physical storage management
to 150GB.
OK, the engine now detects that the disk has a capacity of 150GB, but the
usable disk space is still only 100GB.
I think the issue is with the partition on this disk. Is it possible to do this?
What file system is used for an iSCSI disk?
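
From what I can tell the domain is an LVM volume group rather than a
filesystem, so I suspect the missing 50GB is the physical volume still sized
at 100GB. A read-only way to see the gap from a host - a sketch, assuming
open-iscsi and device-mapper-multipath:

  iscsiadm -m session --rescan   # re-read the LUN size from the target
  multipath -ll                  # the dm device may need 'multipathd resize map <name>' to follow
  pvs --units g                  # the PV backing the domain VG will still report ~100GB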

Thanks

Martin



