Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-18 Thread Mike Christie
On 06/15/2018 12:21 PM, Wladimir Mutel wrote:
> Jason Dillaman wrote:
> 
 [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
> 
>>>  I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
>>> initiator (not Win2016, but I hope there isn't much difference). I try
>>> to make it work with a single session first. Also, right now I only
>>> have a single iSCSI gateway/portal (single host, single IP, single port).
>>> Or is MPIO mandatory to use with a Ceph target ?
> 
>> It's mandatory even if you only have a single path since MPIO is
>> responsible for activating the paths.
> 
> Who would have known? I installed MPIO, enabled it for iSCSI
> (required a Windows reboot), set the MPIO policy to 'Failover only',
> and now my iSCSI target is readable & writable !
> Thanks a lot for your help !
> Probably this should be written in bigger and redder letters
> in the Ceph docs.
> 
> Next question, would it be possible for iPXE loader to boot
> from such iSCSI volumes ? I am going to experiment with that
> but if the answer is known in advance, it would be good to know.

It will probably not work with the explicit failover based tcmu-runner
(1.4.rc1 and newer), because iPXE does not have a multipath layer that
supports ALUA.

It might work sometimes because the path that is being used is already
in the active state.

It would probably always work with older versions of tcmu-runner because
we were doing implicit failover and so we did not need any special
multipath layer support.
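
If you do want to try it anyway, a minimal iPXE sketch would look roughly
like this (the target IQN is the example one used earlier in this thread;
the initiator IQN and CHAP credentials are placeholders):

    #!ipxe
    set initiator-iqn iqn.2018-06.host.test:client1
    set username mychapuser
    set password mychapsecret
    sanboot iscsi:192.168.200.230::::iqn.2018-06.host.test:test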




Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Wladimir Mutel

Jason Dillaman wrote:


[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/



 I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
initiator (not Win2016, but I hope there isn't much difference). I try to make
it work with a single session first. Also, right now I only have a single
iSCSI gateway/portal (single host, single IP, single port).
Or is MPIO mandatory to use with a Ceph target ?



It's mandatory even if you only have a single path since MPIO is
responsible for activating the paths.


Who would have known? I installed MPIO, enabled it for iSCSI
(required a Windows reboot), set the MPIO policy to 'Failover only',
and now my iSCSI target is readable & writable !
Thanks a lot for your help !
Probably this should be written in bigger and redder letters
in the Ceph docs.
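
For reference, on newer Windows Server releases the same MPIO steps can be
scripted roughly like this (PowerShell MPIO cmdlets, Server 2012 and later;
on 2008 R2 or a Windows 10 client the MPIO control panel applet is the way
to do it instead) -- a sketch, not an exact recipe:

    Install-WindowsFeature -Name Multipath-IO              # needs a reboot afterwards
    Enable-MSDSMAutomaticClaim -BusType iSCSI               # claim iSCSI LUNs for MPIO
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO     # FOO = 'Fail Over Only'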

Next question, would it be possible for iPXE loader to boot
from such iSCSI volumes ? I am going to experiment with that
but if the answer is known in advance, it would be good to know.


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Jason Dillaman
On Fri, Jun 15, 2018 at 6:19 AM, Wladimir Mutel  wrote:
> Jason Dillaman wrote:
>
>>> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121
>>> [DEBUG] dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
>>> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121
>>> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a
>>> 0 3f 0 c0 0
>>> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.523 54121
>>> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a
>>> 0 3f 0 c0 0
>>> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.543 54121
>>> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e
>>> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
>>> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.550 54121
>>> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e
>>> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
>>> чер 13 08:38:47 p10s tcmu-runner[54121]: 2018-06-13 08:38:47.944 54121
>>> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e
>>> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
>
>
>>>  Wikipedia says that 1A is 'mode sense' and 9e is 'service action
>>> in'.
>>>  These records are logged when I try to put the disk online or
>>> initialize it with GPT/MBR partition table in Windows Disk Management (and
>>> Windows report errors after that)
>>>  What to check next ? Any importance of missing 'SSD' device
>>> class ?
>
>
>> Did you configure MPIO within Windows [1]? Any errors recorded in the
>> Windows Event Viewer?
>
>
>> The "SSD" device class isn't important -- it's just a way to describe
>> the LUN as being backed by non-rotational media (e.g. VMware will show
>> a different icon).
>
>
>> [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
>
>
> I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
> initiator (not Win2016 but hope there is no much difference). I try to make
> it work with a single session first. Also, right now I only have a single
> iSCSI gateway/portal (single host, single IP, single port).
> Or is MPIO mandatory to use with Ceph target ?

It's mandatory even if you only have a single path since MPIO is
responsible for activating the paths.

>



-- 
Jason


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Wladimir Mutel

Jason Dillaman wrote:


чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 [DEBUG] 
dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
3f 0 c0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.523 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
3f 0 c0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.543 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.550 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0
чер 13 08:38:47 p10s tcmu-runner[54121]: 2018-06-13 08:38:47.944 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0



 Wikipedia says that 1A is 'mode sense' and 9e is 'service action in'.
 These records are logged when I try to put the disk online or 
initialize it with GPT/MBR partition table in Windows Disk Management (and 
Windows report errors after that)
 What to check next ? Any importance of missing 'SSD' device class ?



Did you configure MPIO within Windows [1]? Any errors recorded in the
Windows Event Viewer?



The "SSD" device class isn't important -- it's just a way to describe
the LUN as being backed by non-rotational media (e.g. VMware will show
a different icon).



[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/


	I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
initiator (not Win2016, but I hope there isn't much difference). I try to
make it work with a single session first. Also, right now I only have a
single iSCSI gateway/portal (single host, single IP, single port).

Or is MPIO mandatory to use with a Ceph target ?



Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-13 Thread Jason Dillaman
On Wed, Jun 13, 2018 at 8:33 AM, Wladimir Mutel  wrote:
> On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote:
>
>> > So, my usual question is - where to look and what logs to enable
>> > to find out what is going wrong ?
>
>> If not overridden, tcmu-runner will default to 'client.admin' [1] so
>> you shouldn't need to add any additional caps. In the short-term to
>> debug your issue, you can perhaps increase the log level for
>> tcmu-runner to see if it's showing an error [2].
>
> So, I put 'log_level = 5' into /etc/tcmu/tcmu.conf, restarted
> tcmu-runner, and see only this in its logs :
>
> чер 13 08:38:14 p10s systemd[1]: Starting LIO Userspace-passthrough daemon...
> чер 13 08:38:14 p10s tcmu-runner[54121]: Inotify is watching 
> "/etc/tcmu/tcmu.conf", wd: 1, mask: IN_ALL_EVENTS
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.634 54121 
> [DEBUG] load_our_module:531: Module 'target_core_user' is already loaded
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.634 54121 
> [DEBUG] main:1087: handler path: /usr/lib/x86_64-linux-gnu/tcmu-runner
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.657 54121 
> [DEBUG] main:1093: 2 runner handlers found
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 
> [DEBUG] tcmu_block_device:404 rbd/libvirt.tower-prime-e-3tb: blocking kernel 
> device
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 
> [DEBUG] tcmu_block_device:410 rbd/libvirt.tower-prime-e-3tb: block done
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 
> [DEBUG] dev_added:769 rbd/libvirt.tower-prime-e-3tb: Got block_size 512, size 
> in bytes 3000596692992
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 
> [DEBUG] tcmu_rbd_open:829 rbd/libvirt.tower-prime-e-3tb: tcmu_rbd_open config 
> rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30 block size 512 num lbas 
> 5860540416.
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.665 54121 
> [DEBUG] timer_check_and_set_def:383 rbd/libvirt.tower-prime-e-3tb: The 
> cluster's default osd op timeout(0.00), osd heartbeat grace(20) 
> interval(6)
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.672 54121 
> [DEBUG] tcmu_rbd_detect_device_class:300 rbd/libvirt.tower-prime-e-3tb: Pool 
> libvirt using crush rule "replicated_rule"
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.672 54121 
> [DEBUG] tcmu_rbd_detect_device_class:316 rbd/libvirt.tower-prime-e-3tb: SSD 
> not a registered device class.
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.715 7f30e08c9880 
>  1 mgrc service_daemon_register tcmu-runner.p10s:libvirt/tower-prime-e-3tb 
> metadata {arch=x86_64,ceph_release=mimic,ceph_version=ceph version 13.2.0 
> (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic 
> (stable),ceph_version_short=13.2.0,cpu=Intel(R) Xeon(R) CPU E3-1235L v5 @ 
> 2.00GHz,distro=ubuntu,distro_description=Ubuntu 18.04 
> LTS,distro_version=18.04,hostname=p10s,image_id=25c21238e1f29,image_name=tower-prime-e-3tb,kernel_description=#201806032231
>  SMP Sun Jun 3 22:33:34 UTC 
> 2018,kernel_version=4.17.0-041700-generic,mem_swap_kb=15622140,mem_total_kb=65827836,os=Linux,pool_name=libvirt}
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.721 54121 
> [DEBUG] tcmu_unblock_device:422 rbd/libvirt.tower-prime-e-3tb: unblocking 
> kernel device
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.721 54121 
> [DEBUG] tcmu_unblock_device:428 rbd/libvirt.tower-prime-e-3tb: unblock done
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.724 54121 
> [DEBUG] dbus_bus_acquired:445: bus org.kernel.TCMUService1 acquired
> чер 13 08:38:14 p10s systemd[1]: Started LIO Userspace-passthrough daemon.
> чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 
> [DEBUG] dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121 
> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
> 3f 0 c0 0
> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.523 54121 
> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
> 3f 0 c0 0
> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.543 54121 
> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 
> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
> чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.550 54121 
> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 
> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
> чер 13 08:38:47 p10s tcmu-runner[54121]: 2018-06-13 08:38:47.944 54121 
> [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 
> 10 0 0 0 0 0 0 0 0 0 0 0 c 0 0
>
> Wikipedia says that 1A is 'mode sense' and 9e is 'service action in'.
> These records are logged when I try to put the disk online or initialize
> it in Windows Disk Management.

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-13 Thread Wladimir Mutel
On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote:

> > So, my usual question is - where to look and what logs to enable
> > to find out what is going wrong ?

> If not overridden, tcmu-runner will default to 'client.admin' [1] so
> you shouldn't need to add any additional caps. In the short-term to
> debug your issue, you can perhaps increase the log level for
> tcmu-runner to see if it's showing an error [2].

So, I put 'log_level = 5' into /etc/tcmu/tcmu.conf, restarted
tcmu-runner and see only this in its logs :

чер 13 08:38:14 p10s systemd[1]: Starting LIO Userspace-passthrough daemon...
чер 13 08:38:14 p10s tcmu-runner[54121]: Inotify is watching 
"/etc/tcmu/tcmu.conf", wd: 1, mask: IN_ALL_EVENTS
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.634 54121 [DEBUG] 
load_our_module:531: Module 'target_core_user' is already loaded
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.634 54121 [DEBUG] 
main:1087: handler path: /usr/lib/x86_64-linux-gnu/tcmu-runner
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.657 54121 [DEBUG] 
main:1093: 2 runner handlers found
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 [DEBUG] 
tcmu_block_device:404 rbd/libvirt.tower-prime-e-3tb: blocking kernel device
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 [DEBUG] 
tcmu_block_device:410 rbd/libvirt.tower-prime-e-3tb: block done
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 [DEBUG] 
dev_added:769 rbd/libvirt.tower-prime-e-3tb: Got block_size 512, size in bytes 
3000596692992
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.658 54121 [DEBUG] 
tcmu_rbd_open:829 rbd/libvirt.tower-prime-e-3tb: tcmu_rbd_open config 
rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30 block size 512 num lbas 
5860540416.
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.665 54121 [DEBUG] 
timer_check_and_set_def:383 rbd/libvirt.tower-prime-e-3tb: The cluster's 
default osd op timeout(0.00), osd heartbeat grace(20) interval(6)
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.672 54121 [DEBUG] 
tcmu_rbd_detect_device_class:300 rbd/libvirt.tower-prime-e-3tb: Pool libvirt 
using crush rule "replicated_rule"
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.672 54121 [DEBUG] 
tcmu_rbd_detect_device_class:316 rbd/libvirt.tower-prime-e-3tb: SSD not a 
registered device class.
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.715 7f30e08c9880  
1 mgrc service_daemon_register tcmu-runner.p10s:libvirt/tower-prime-e-3tb 
metadata {arch=x86_64,ceph_release=mimic,ceph_version=ceph version 13.2.0 
(79a10589f1f80dfe21e8f9794365ed98143071c4) mimic 
(stable),ceph_version_short=13.2.0,cpu=Intel(R) Xeon(R) CPU E3-1235L v5 @ 
2.00GHz,distro=ubuntu,distro_description=Ubuntu 18.04 
LTS,distro_version=18.04,hostname=p10s,image_id=25c21238e1f29,image_name=tower-prime-e-3tb,kernel_description=#201806032231
 SMP Sun Jun 3 22:33:34 UTC 
2018,kernel_version=4.17.0-041700-generic,mem_swap_kb=15622140,mem_total_kb=65827836,os=Linux,pool_name=libvirt}
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.721 54121 [DEBUG] 
tcmu_unblock_device:422 rbd/libvirt.tower-prime-e-3tb: unblocking kernel device
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.721 54121 [DEBUG] 
tcmu_unblock_device:428 rbd/libvirt.tower-prime-e-3tb: unblock done
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.724 54121 [DEBUG] 
dbus_bus_acquired:445: bus org.kernel.TCMUService1 acquired
чер 13 08:38:14 p10s systemd[1]: Started LIO Userspace-passthrough daemon.
чер 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 [DEBUG] 
dbus_name_acquired:461: name org.kernel.TCMUService1 acquired
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
3f 0 c0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.523 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 1a 0 
3f 0 c0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.543 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0
чер 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.550 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0
чер 13 08:38:47 p10s tcmu-runner[54121]: 2018-06-13 08:38:47.944 54121 
[DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205 rbd/libvirt.tower-prime-e-3tb: 9e 10 
0 0 0 0 0 0 0 0 0 0 0 c 0 0

Wikipedia says that 1A is 'mode sense' and 9e is 'service action in'.
These records are logged when I try to put the disk online or
initialize it with a GPT/MBR partition table in Windows Disk Management (and
Windows reports errors after that).
What to check next ? Is the missing 'SSD' device class of any importance ?

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-12 Thread Jason Dillaman
On Tue, Jun 12, 2018 at 5:30 AM, Wladimir Mutel  wrote:
> Hi everyone again,
>
> I continue set up of my testing Ceph cluster (1-node so far).
> I changed 'chooseleaf' from 'host' to 'osd' in CRUSH map
> to make it run healthy on 1 node. For the same purpose,
> I also set 'minimum_gateways = 1' for Ceph iSCSI gateway.
> Also I upgraded Ubuntu 18.04 kernel to mainline v4.17 to get
> up-to-date iSCSI attributes support required by gwcli
> (qfull_time_out and probably something else).
>
> I was able to add client host IQNs and configure their CHAP
> authentication. I was able to add iSCSI LUNs referring to RBD
> images, and to assign LUNs to clients. 'gwcli ls /' and
> 'targetcli ls /' show nice diagrams without signs of errors.
> iSCSI initiators from Windows 10 and 2008 R2 can log in to the
> portal with CHAP auth and list their assigned LUNs.
> And authenticated sessions are also shown in '*cli ls' printout
>
> But:
>
> in Windows Disk Management, the mapped LUN is shown in the 'offline'
> state. When I try to bring it online or to initialize the disk
> with an MBR or GPT partition table, I get messages like
> 'device not ready' on Win10, or 'driver detected controller error
>  on \device\harddisk\dr5' or the like.
>
> So, my usual question is - where to look and what logs to enable
> to find out what is going wrong ?
>
> My setup specifics are that I create my RBDs in non-default pool
> ('libvirt' instead of 'rbd'). Also I create them with erasure
> data-pool (called it 'jerasure21' as was configured in default
> erasure profile). Should I add explicit access to these pools
> to some Ceph client I don't know ? I know that 'gwcli'
> logs into Ceph as 'client.admin' but I am not sure
> about tcmu-runner and/or user:rbd backstore provider.

If not overridden, tcmu-runner will default to 'client.admin' [1] so
you shouldn't need to add any additional caps. In the short-term to
debug your issue, you can perhaps increase the log level for
tcmu-runner to see if it's showing an error [2].
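
Concretely, [2] boils down to something like this on the gateway host (the
log_level value is the one used later in this thread; tcmu-runner also
watches the file via inotify, as its startup log shows, but a restart makes
the change unambiguous):

    # /etc/tcmu/tcmu.conf
    log_level = 5

    systemctl restart tcmu-runner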

> Thank you in advance for your useful directions
> out of my problem.
>
> Wladimir Mutel wrote:
>
>>> Failed : disk create/update failed on p10s. LUN allocation failure
>
>
>> Well, this was fixed by updating kernel to v4.17 from Ubuntu
>> kernel/mainline PPA
>> Going on...

[1] 
https://github.com/ceph/ceph-iscsi-config/blob/master/ceph_iscsi_config/settings.py#L28
[2] https://github.com/open-iscsi/tcmu-runner/blob/master/tcmu.conf#L12

-- 
Jason


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-12 Thread Wladimir Mutel

Hi everyone again,

I am continuing the setup of my testing Ceph cluster (1 node so far).
I changed 'chooseleaf' from 'host' to 'osd' in the CRUSH map
to make it run healthy on 1 node. For the same purpose,
I also set 'minimum_gateways = 1' for the Ceph iSCSI gateway.
I also upgraded the Ubuntu 18.04 kernel to mainline v4.17 to get
the up-to-date iSCSI attribute support required by gwcli
(qfull_time_out and probably something else).
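
For reference, a sketch of the 'chooseleaf host -> osd' change done via the
usual CRUSH map round trip (file names below are just examples):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in the replicated rule, change
    #     step chooseleaf firstn 0 type host
    # to
    #     step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new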

I was able to add client host IQNs and configure their CHAP
authentication. I was able to add iSCSI LUNs referring to RBD
images, and to assign LUNs to clients. 'gwcli ls /' and
'targetcli ls /' show nice diagrams without signs of errors.
iSCSI initiators from Windows 10 and 2008 R2 can log in to the
portal with CHAP auth and list their assigned LUNs.
And authenticated sessions are also shown in '*cli ls' printout

But:

in Windows Disk Management, the mapped LUN is shown in the 'offline'
state. When I try to bring it online or to initialize the disk
with an MBR or GPT partition table, I get messages like
'device not ready' on Win10, or 'driver detected controller error
 on \device\harddisk\dr5' or the like.

So, my usual question is - where to look and what logs to enable
to find out what is going wrong ?

My setup specifics are that I create my RBDs in a non-default pool
('libvirt' instead of 'rbd'). Also, I create them with an erasure-coded
data pool (I called it 'jerasure21', as configured in the default
erasure profile). Should I give explicit access to these pools
to some Ceph client I don't know about ? I know that 'gwcli'
logs into Ceph as 'client.admin', but I am not sure
about tcmu-runner and/or the user:rbd backstore provider.
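
For reference, a sketch of how such an image would typically be created
(pool and image names are the ones from this thread; the PG count, the size
and the allow_ec_overwrites flag are examples/assumptions for an EC data
pool):

    ceph osd pool create jerasure21 64 64 erasure
    ceph osd pool set jerasure21 allow_ec_overwrites true
    rbd create libvirt/tower-prime-e-3tb --size 3T --data-pool jerasure21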

Thank you in advance for your useful directions
out of my problem.

Wladimir Mutel wrote:


Failed : disk create/update failed on p10s. LUN allocation failure



Well, this was fixed by updating kernel to v4.17 from Ubuntu 
kernel/mainline PPA
Going on...



Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-04 Thread Wladimir Mutel
On Mon, Jun 04, 2018 at 11:12:58AM +0300, Wladimir Mutel wrote:
>   /disks> create pool=rbd image=win2016-3tb-1 size=2861589M 
> CMD: /disks/ create pool=rbd image=win2016-3tb-1 size=2861589M count=1 
> max_data_area_mb=None
> pool 'rbd' is ok to use
> Creating/mapping disk rbd/win2016-3tb-1
> Issuing disk create request
> Failed : disk create/update failed on p10s. LUN allocation failure

>   Surely I could investigate what is happening by studying gwcli sources,
>   but if anyone already knows how to fix that, I would appreciate your 
> response.

Well, this was fixed by updating the kernel to v4.17 from the Ubuntu
kernel/mainline PPA.
Going on...


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-04 Thread Wladimir Mutel
On Fri, Jun 01, 2018 at 08:20:12PM +0300, Wladimir Mutel wrote:
> 
>   And still, when I do '/disks create ...' in gwcli, it says
>   that it wants 2 existing gateways. Probably this is related
>   to the created 2-TPG structure and I should look for more ways
>   to 'improve' that json config so that rbd-target-gw loads it
>   as I need on single host.

Well, I decided to bond my network interfaces and assign a single IP
to them (as mchristi@ suggested).
Also, I put 'minimum_gateways = 1' into /etc/ceph/iscsi-gateway.cfg and
got rid of the 'At least 2 gateways required' error in gwcli.
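
For reference, the relevant bit of /etc/ceph/iscsi-gateway.cfg might look
roughly like this (a single-gateway lab sketch; the options other than
minimum_gateways are common defaults and may differ in your setup):

    [config]
    cluster_name = ceph
    gateway_keyring = ceph.client.admin.keyring
    api_secure = false
    trusted_ip_list = 192.168.200.230
    minimum_gateways = 1

followed by a restart of the gateway daemons to pick it up:

    systemctl restart rbd-target-api rbd-target-gw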
But now I have one more stumble :

gwcli -d
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Refreshing disk information from the config object
- Scanning will use 8 scan threads
- rbd image scan complete: 0s
Refreshing gateway & client information
- checking iSCSI/API ports on p10s
Querying ceph for state information
Gathering pool stats for cluster 'ceph'

/disks> create pool=rbd image=win2016-3tb-1 size=2861589M 
CMD: /disks/ create pool=rbd image=win2016-3tb-1 size=2861589M count=1 
max_data_area_mb=None
pool 'rbd' is ok to use
Creating/mapping disk rbd/win2016-3tb-1
Issuing disk create request
Failed : disk create/update failed on p10s. LUN allocation failure

Surely I could investigate what is happening by studying gwcli sources,
but if anyone already knows how to fix that, I would appreciate your 
response.
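
Before digging into the sources, the gateway-side daemon logs are usually
the quickest place to look, e.g.:

    journalctl -u rbd-target-api -u rbd-target-gw -u tcmu-runner -f

together with the client-side 'gwcli -d' output shown above.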


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Mike Christie
On 06/01/2018 02:01 AM, Wladimir Mutel wrote:
> Dear all,
> 
> I am experimenting with Ceph setup. I set up a single node
> (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
> Ubuntu 18.04 Bionic, Ceph packages from
> http://download.ceph.com/debian-luminous/dists/xenial/
> and iscsi parts built manually per
> http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/)
> Also I changed 'chooseleaf ... host' into 'chooseleaf ... osd'
> in the CRUSH map to run with single host.
> 
> I have both its Ethernets connected to the same LAN,
> with different IPs in the same subnet
> (like, 192.168.200.230/24 and 192.168.200.231/24)
> mon_host in ceph.conf is set to 192.168.200.230,
> and ceph daemons (mgr, mon, osd) are listening to this IP.
> 
> What I would like to finally achieve, is to provide multipath
> iSCSI access through both these Ethernets to Ceph RBDs,
> and apparently, gwcli does not allow me to add a second
> gateway to the same target. It is going like this :
> 
> /iscsi-target> create iqn.2018-06.host.test:test
> ok
> /iscsi-target> cd iqn.2018-06.host.test:test/gateways
> /iscsi-target...test/gateways> create p10s 192.168.200.230 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> ok
> /iscsi-target...test/gateways> create p10s2 192.168.200.231 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> Failed : Gateway creation failed, gateway(s)
> unavailable:192.168.200.231(UNKNOWN state)
> 
> host names are defined in /etc/hosts as follows :
> 
> 192.168.200.230 p10s
> 192.168.200.231 p10s2
> 
> so I suppose that something does not listen on 192.168.200.231, but
> I have no idea what that thing is or how to make it listen there.
> Or how to achieve this goal (utilization of both Ethernets for iSCSI) in a
> different way. Should I aggregate Ethernets into a 'bond' interface with

There are multiple issues here:

1. LIO does not really support multiple IPs on the same subnet on the
same system out of the box. The network routing will kick in, and
sometimes, if the initiator sent something to .230, the target would
respond from .231; I think for operations like logins this will not go
as planned in the iSCSI target layer, as the code that manages
connections gets thrown off. On the initiator side it works when using
ifaces, because we use SO_BINDTODEVICE to tell the net layer to use
the specific netdev, but there is no code like that in the target. So on
the target, I think it just depends on the routing table setup and you
have to modify that. I think there might be a bug though.

In general I think different subnet is easiest and best for most cases.

2. Ceph-iscsi does not support multiple IPs on the same gw right now,
because you can hit the issue where a WRITE is sent down path1 and
gets stuck, then the initiator fails over to path2 and sends the STPG
there. The STPG goes down a different path, so the WRITE on path1 is
not flushed like we need. Because both paths are accessing the same rbd
client, the rbd locking/blacklisting would not kick in like it does when
this is done on different gws.

So for both you would/could just use network-level bonding.
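
As a rough sketch, an active-backup bond on Ubuntu 18.04 could be described
with netplan roughly like this (interface names are just examples):

    # /etc/netplan/01-bond0.yaml
    network:
      version: 2
      renderer: networkd
      ethernets:
        enp1s0: {}
        enp2s0: {}
      bonds:
        bond0:
          interfaces: [enp1s0, enp2s0]
          parameters:
            mode: active-backup
            primary: enp1s0
          addresses: [192.168.200.230/24]

and applied with 'netplan apply'.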

> single IP ? Should I build and use 'lrbd' tool instead of 'gwcli' ? Is

Or you can use lrbd but for that make sure you are using the SUSE kernel
as they have the special timeout code.

> it acceptable that I run kernel 4.15, not 4.16+ ?
> What other directions could you give me on this task ?
> Thanks in advance for your replies.



Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Jason Dillaman
Having one (1) TPG per gateway is expected behavior, since that is how ALUA
active/passive is configured.

On Fri, Jun 1, 2018 at 1:20 PM, Wladimir Mutel  wrote:
> Ok, I looked into Python sources of ceph-iscsi-{cli,config} and
> found that per-host configuration sections use short host name
> (returned by this_host() function) as their primary key.
> So I can't trick gwcli with alternative host name like p10s2
> which I put into /etc/hosts to denote my second IP,
> as this_host() calls gethostname() and further code
> disregards alternative host names at all.
> I added 192.168.201.231 into trusted_ip_list,
> but after 'create p10s2 192.168.201.231 skipchecks=true'
> I got KeyError 'p10s2' in gwcli/gateway.py line 571
>
> Fortunately, I found a way to edit Ceph iSCSI configuration
> as a text file (rados --pool rbd get gateway.conf gateway.conf)
> I added needed IP to the appropriate json lists
> (."gateways"."ip_list" and."gateways"."p10s"."gateway_ip_list"),
> put the file back into RADOS and restarted rbd-target-gw
> in the hope everything will go well
>
> Unfortunately, I found (by running 'targetcli ls')
> that now it creates 2 TPGs with single IP portal in each of them
> Also, it disables 1st TPG but enables 2nd one, like this :
>
>   o- iscsi  [Targets: 1]
>   | o- iqn.2018-06.domain.p10s:p10s [TPGs: 2]
>   |   o- tpg1   [disabled]
>   |   | o- portals  [Portals: 1]
>   |   |   o- 192.168.200.230:3260   [OK]
>   |   o- tpg2   [no-gen-acls, no-auth]
>   | o- portals  [Portals: 1]
>   |   o- 192.168.201.231:3260   [OK]
>
> And still, when I do '/disks create ...' in gwcli, it says
> that it wants 2 existing gateways. Probably this is related
> to the created 2-TPG structure and I should look for more ways
> to 'improve' that json config so that rbd-target-gw loads it
> as I need on single host.
>
>
>
> Wladimir Mutel wrote:
>>
>>  Well, ok, I moved second address into different subnet
>> (192.168.201.231/24) and also reflected that in 'hosts' file
>>
>>  But that did not help much :
>>
>> /iscsi-target...test/gateways> create p10s2 192.168.201.231
>> skipchecks=true
>> OS version/package checks have been bypassed
>> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
>> Failed : Gateway creation failed, gateway(s)
>> unavailable:192.168.201.231(UNKNOWN state)
>>
>> /disks> create pool=replicated image=win2016-3gb size=2861589M
>> Failed : at least 2 gateways must exist before disk operations are
>> permitted
>>
>>  I see this mentioned in Ceph-iSCSI-CLI GitHub issues
>> https://github.com/ceph/ceph-iscsi-cli/issues/54 and
>> https://github.com/ceph/ceph-iscsi-cli/issues/59
>>  but apparently without a solution
>>
>>  So, would anybody propose an idea
>>  on how can I start using iSCSI over Ceph acheap?
>>  With the single P10S host I have in my hands right now?
>>
>>  Additional host and 10GBE hardware would require additional
>>  funding, which would possible only in some future.
>>
>>  Thanks in advance for your responses
>>
>> Wladimir Mutel wrote:
>>
>>>  I have both its Ethernets connected to the same LAN,
>>>  with different IPs in the same subnet
>>>  (like, 192.168.200.230/24 and 192.168.200.231/24)
>>
>>
>>> 192.168.200.230 p10s
>>> 192.168.200.231 p10s2
>>
>>
>
>
>



-- 
Jason


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Wladimir Mutel

Ok, I looked into the Python sources of ceph-iscsi-{cli,config} and
found that the per-host configuration sections use the short host name
(returned by the this_host() function) as their primary key.
So I can't trick gwcli with an alternative host name like p10s2,
which I put into /etc/hosts to denote my second IP,
as this_host() calls gethostname() and the further code
disregards alternative host names altogether.
I added 192.168.201.231 into trusted_ip_list,
but after 'create p10s2 192.168.201.231 skipchecks=true'
I got a KeyError 'p10s2' in gwcli/gateway.py line 571.

Fortunately, I found a way to edit the Ceph iSCSI configuration
as a text file (rados --pool rbd get gateway.conf gateway.conf).
I added the needed IP to the appropriate JSON lists
("gateways"."ip_list" and "gateways"."p10s"."gateway_ip_list"),
put the file back into RADOS and restarted rbd-target-gw,
hoping everything would go well.
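
Spelled out, that round trip looks roughly like this (keep a backup copy of
the object before putting a modified one back):

    rados --pool rbd get gateway.conf gateway.conf
    cp gateway.conf gateway.conf.bak
    $EDITOR gateway.conf      # adjust "ip_list" / "gateway_ip_list"
    rados --pool rbd put gateway.conf gateway.conf
    systemctl restart rbd-target-gw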

Unfortunately, I found (by running 'targetcli ls')
that it now creates 2 TPGs with a single IP portal in each of them.
Also, it disables the 1st TPG but enables the 2nd one, like this :

  o- iscsi  [Targets: 1]
  | o- iqn.2018-06.domain.p10s:p10s [TPGs: 2]
  |   o- tpg1   [disabled]
  |   | o- portals  [Portals: 1]
  |   |   o- 192.168.200.230:3260   [OK]
  |   o- tpg2   [no-gen-acls, no-auth]
  | o- portals  [Portals: 1]
  |   o- 192.168.201.231:3260   [OK]

And still, when I do '/disks create ...' in gwcli, it says
that it wants 2 existing gateways. Probably this is related
to the created 2-TPG structure, and I should look for more ways
to 'improve' that JSON config so that rbd-target-gw loads it
as I need on a single host.


Wladimir Mutel wrote:
 Well, ok, I moved second address into different subnet 
(192.168.201.231/24) and also reflected that in 'hosts' file


 But that did not help much :

/iscsi-target...test/gateways> create p10s2 192.168.201.231 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed, gateway(s) 
unavailable:192.168.201.231(UNKNOWN state)


/disks> create pool=replicated image=win2016-3gb size=2861589M
Failed : at least 2 gateways must exist before disk operations are 
permitted


 I see this mentioned in Ceph-iSCSI-CLI GitHub issues
https://github.com/ceph/ceph-iscsi-cli/issues/54 and
https://github.com/ceph/ceph-iscsi-cli/issues/59
 but apparently without a solution

 So, would anybody propose an idea
 on how can I start using iSCSI over Ceph acheap?
 With the single P10S host I have in my hands right now?

 Additional host and 10GBE hardware would require additional
 funding, which would possible only in some future.

 Thanks in advance for your responses

Wladimir Mutel wrote:


 I have both its Ethernets connected to the same LAN,
 with different IPs in the same subnet
 (like, 192.168.200.230/24 and 192.168.200.231/24)



192.168.200.230 p10s
192.168.200.231 p10s2







Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Jason Dillaman
Are your firewall ports open for rbd-target-api? Is the process
running on the other host? If you run "gwcli -d" and try to add the
second gateway, what messages do you see?
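
A quick sanity check on the gateway side would be something like this (5000
is the default rbd-target-api port unless api_port was changed in
/etc/ceph/iscsi-gateway.cfg; 3260 is the iSCSI port itself):

    systemctl status rbd-target-api rbd-target-gw
    ss -tlnp | grep -E '5000|3260'
    # and make sure the firewall permits TCP 5000 and 3260 between the gateways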

On Fri, Jun 1, 2018 at 8:15 AM, Wladimir Mutel  wrote:
> Well, ok, I moved second address into different subnet
> (192.168.201.231/24) and also reflected that in 'hosts' file
>
> But that did not help much :
>
> /iscsi-target...test/gateways> create p10s2 192.168.201.231 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> Failed : Gateway creation failed, gateway(s)
> unavailable:192.168.201.231(UNKNOWN state)
>
> /disks> create pool=replicated image=win2016-3gb size=2861589M
> Failed : at least 2 gateways must exist before disk operations are permitted
>
> I see this mentioned in Ceph-iSCSI-CLI GitHub issues
> https://github.com/ceph/ceph-iscsi-cli/issues/54 and
> https://github.com/ceph/ceph-iscsi-cli/issues/59
> but apparently without a solution
>
> So, would anybody propose an idea
> on how can I start using iSCSI over Ceph acheap?
> With the single P10S host I have in my hands right now?
>
> Additional host and 10GBE hardware would require additional
> funding, which would possible only in some future.
>
> Thanks in advance for your responses
>
> Wladimir Mutel wrote:
>
>>  I have both its Ethernets connected to the same LAN,
>>  with different IPs in the same subnet
>>  (like, 192.168.200.230/24 and 192.168.200.231/24)
>
>
>> 192.168.200.230 p10s
>> 192.168.200.231 p10s2
>
>



-- 
Jason


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Wladimir Mutel
	Well, ok, I moved the second address into a different subnet
(192.168.201.231/24) and also reflected that in the 'hosts' file.


But that did not help much :

/iscsi-target...test/gateways> create p10s2 192.168.201.231 skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed, gateway(s) 
unavailable:192.168.201.231(UNKNOWN state)


/disks> create pool=replicated image=win2016-3gb size=2861589M
Failed : at least 2 gateways must exist before disk operations are permitted

I see this mentioned in Ceph-iSCSI-CLI GitHub issues
https://github.com/ceph/ceph-iscsi-cli/issues/54 and
https://github.com/ceph/ceph-iscsi-cli/issues/59
but apparently without a solution

So, would anybody propose an idea
on how I could start using iSCSI over Ceph on the cheap?
With the single P10S host I have in my hands right now?

An additional host and 10GbE hardware would require additional
funding, which will be possible only at some point in the future.

Thanks in advance for your responses

Wladimir Mutel wrote:


 I have both its Ethernets connected to the same LAN,
 with different IPs in the same subnet
 (like, 192.168.200.230/24 and 192.168.200.231/24)



192.168.200.230 p10s
192.168.200.231 p10s2




Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread John Hearns
It is worth asking - why do you want to have two interfaces?
If you have 1Gbps interfaces and this is a bandwidth requirement then
10Gbps cards and switches are very cheap these days.

On 1 June 2018 at 10:37, Panayiotis Gotsis  wrote:

> Hello
>
> Bonding and iscsi are not a best practice architecture. Multipath is,
> however I can attest to problems with the multipathd and debian.
>
> In any case, what you should try to do and check is:
>
> 1) Use two vlans, one for each ethernet port, with different ip
> address space. Your initiators on the hosts will then be able to
> discover two iscsi targets.
> 2) You should ensure that ping between host interfaces and iscsi
> targets is working. You should ensure that the iscsi target daemon is
> up (through the use of netstat for example) for each one of the two
> ip addresses/ethernet interfaces
> 3) Check multipath configuration
>
>
> On 18-06-01 05:08 +0200, Marc Roos wrote:
>
>>
>>
>> Indeed, you have to add routes and rules to routing table. Just bond
>> them.
>>
>>
>> -Original Message-
>> From: John Hearns [mailto:hear...@googlemail.com]
>> Sent: vrijdag 1 juni 2018 10:00
>> To: ceph-users
>> Subject: Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters -
>> how to ?
>>
>> Errr   is this very wise ?
>>
>> I have both its Ethernets connected to the same LAN,
>>with different IPs in the same subnet
>>(like, 192.168.200.230/24 and 192.168.200.231/24)
>>
>>
>> In my experience setting up to interfaces on the same subnet means that
>> your ssystem doesnt know which one to route traffic through...
>>
>>
>>
>>
>>
>>
>>
>> On 1 June 2018 at 09:01, Wladimir Mutel  wrote:
>>
>>
>> Dear all,
>>
>> I am experimenting with Ceph setup. I set up a single node
>> (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA
>> HDDs,
>> Ubuntu 18.04 Bionic, Ceph packages from
>> http://download.ceph.com/debian-luminous/dists/xenial/
>> and iscsi parts built manually per
>> http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ )
>> Also i changed 'chooseleaf ... host' into 'chooseleaf ...
>> osd'
>> in the CRUSH map to run with single host.
>>
>> I have both its Ethernets connected to the same LAN,
>> with different IPs in the same subnet
>> (like, 192.168.200.230/24 and 192.168.200.231/24)
>> mon_host in ceph.conf is set to 192.168.200.230,
>> and ceph daemons (mgr, mon, osd) are listening to this IP.
>>
>> What I would like to finally achieve, is to provide
>> multipath
>> iSCSI access through both these Ethernets to Ceph RBDs,
>> and apparently, gwcli does not allow me to add a second
>> gateway to the same target. It is going like this :
>>
>> /iscsi-target> create iqn.2018-06.host.test:test
>> ok
>> /iscsi-target> cd iqn.2018-06.host.test:test/gateways
>> /iscsi-target...test/gateways> create p10s 192.168.200.230
>> skipchecks=true
>> OS version/package checks have been bypassed
>> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
>> ok
>> /iscsi-target...test/gateways> create p10s2 192.168.200.231
>> skipchecks=true
>> OS version/package checks have been bypassed
>> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
>> Failed : Gateway creation failed, gateway(s)
>> unavailable:192.168.200.231(UNKNOWN state)
>>
>> host names are defined in /etc/hosts as follows :
>>
>> 192.168.200.230 p10s
>> 192.168.200.231 p10s2
>>
>> so I suppose that something does not listen on
>> 192.168.200.231, but I don't have an idea what is that thing and how to
>> make it listen there. Or how to achieve this goal (utilization of both
>> Ethernets for iSCSI) in different way. Shoud I aggregate Ethernets into
>> a 'bond' interface with single IP ? Should I build and use 'lrbd' tool
>> instead of 'gwcli' ? Is it acceptable that I run kernel 

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Panayiotis Gotsis

Hello

Bonding and iSCSI are not a best-practice architecture. Multipath is;
however, I can attest to problems with multipathd on Debian.

In any case, what you should try to do and check is:

1) Use two VLANs, one for each ethernet port, with different IP
address space. Your initiators on the hosts will then be able to
discover two iSCSI targets.
2) You should ensure that ping between the host interfaces and the iSCSI
targets is working. You should ensure that the iSCSI target daemon is
up (through the use of netstat, for example) for each one of the two
IP addresses/ethernet interfaces.
3) Check the multipath configuration (a sketch follows below).
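
For Linux initiators, a multipath.conf sketch along the lines of what the
upstream ceph-iscsi documentation describes for these LIO/TCMU-backed LUNs
(treat it as a starting point and check the current docs; Windows initiators
use MPIO instead, as discussed elsewhere in this thread):

    devices {
        device {
            vendor                 "LIO-ORG"
            hardware_handler       "1 alua"
            path_grouping_policy   "failover"
            path_selector          "queue-length 0"
            path_checker           tur
            prio                   alua
            prio_args              exclusive_pref_bit
            failback               60
            fast_io_fail_tmo       25
            no_path_retry          queue
        }
    }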

On 18-06-01 05:08 +0200, Marc Roos wrote:



Indeed, you have to add routes and rules to routing table. Just bond
them.


-Original Message-
From: John Hearns [mailto:hear...@googlemail.com]
Sent: vrijdag 1 juni 2018 10:00
To: ceph-users
Subject: Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters -
how to ?

Errr   is this very wise ?

I have both its Ethernets connected to the same LAN,
   with different IPs in the same subnet
   (like, 192.168.200.230/24 and 192.168.200.231/24)


In my experience setting up to interfaces on the same subnet means that
your ssystem doesnt know which one to route traffic through...







On 1 June 2018 at 09:01, Wladimir Mutel  wrote:


Dear all,

I am experimenting with Ceph setup. I set up a single node
(Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA
HDDs,
Ubuntu 18.04 Bionic, Ceph packages from
http://download.ceph.com/debian-luminous/dists/xenial/
and iscsi parts built manually per
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ )
Also i changed 'chooseleaf ... host' into 'chooseleaf ... osd'
in the CRUSH map to run with single host.

I have both its Ethernets connected to the same LAN,
with different IPs in the same subnet
(like, 192.168.200.230/24 and 192.168.200.231/24)
mon_host in ceph.conf is set to 192.168.200.230,
and ceph daemons (mgr, mon, osd) are listening to this IP.

What I would like to finally achieve, is to provide
multipath
iSCSI access through both these Ethernets to Ceph RBDs,
and apparently, gwcli does not allow me to add a second
gateway to the same target. It is going like this :

/iscsi-target> create iqn.2018-06.host.test:test
ok
/iscsi-target> cd iqn.2018-06.host.test:test/gateways
/iscsi-target...test/gateways> create p10s 192.168.200.230
skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...test/gateways> create p10s2 192.168.200.231
skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed, gateway(s)
unavailable:192.168.200.231(UNKNOWN state)

host names are defined in /etc/hosts as follows :

192.168.200.230 p10s
192.168.200.231 p10s2

so I suppose that something does not listen on
192.168.200.231, but I don't have an idea what is that thing and how to
make it listen there. Or how to achieve this goal (utilization of both
Ethernets for iSCSI) in different way. Shoud I aggregate Ethernets into
a 'bond' interface with single IP ? Should I build and use 'lrbd' tool
instead of 'gwcli' ? Is it acceptable that I run kernel 4.15, not 4.16+
?
What other directions could you give me on this task ?
Thanks in advance for your replies.






--
--
Panayiotis Gotsis
Systems & Services Engineer
Network Operations Center
GRNET - Networking Research and Education
7, Kifisias Av., 115 23, Athens
t: +30 210 7471091 | f: +30 210 7474490

Follow us: www.grnet.gr
Twitter: @grnet_gr |Facebook: @grnet.gr
LinkedIn: grnet |YouTube: GRNET EDET


Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Marc Roos
 

Indeed, you have to add routes and rules to the routing table. Just bond
them.
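
For completeness, the routes-and-rules variant would look roughly like this
(interface name and routing table name/number are only examples, and the
rules are not persistent across reboots):

    echo "100 iscsi2" >> /etc/iproute2/rt_tables
    ip route add 192.168.200.0/24 dev enp2s0 src 192.168.200.231 table iscsi2
    ip rule add from 192.168.200.231/32 table iscsi2

Bonding, as suggested, avoids all of that.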


-Original Message-
From: John Hearns [mailto:hear...@googlemail.com] 
Sent: vrijdag 1 juni 2018 10:00
To: ceph-users
Subject: Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - 
how to ?

Errr   is this very wise ?

I have both its Ethernets connected to the same LAN,
with different IPs in the same subnet
(like, 192.168.200.230/24 and 192.168.200.231/24)
 

In my experience setting up to interfaces on the same subnet means that 
your ssystem doesnt know which one to route traffic through...







On 1 June 2018 at 09:01, Wladimir Mutel  wrote:


Dear all,

I am experimenting with Ceph setup. I set up a single node
(Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA 
HDDs,
Ubuntu 18.04 Bionic, Ceph packages from
http://download.ceph.com/debian-luminous/dists/xenial/
and iscsi parts built manually per
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ )
Also i changed 'chooseleaf ... host' into 'chooseleaf ... osd'
in the CRUSH map to run with single host.

I have both its Ethernets connected to the same LAN,
with different IPs in the same subnet
(like, 192.168.200.230/24 and 192.168.200.231/24)
mon_host in ceph.conf is set to 192.168.200.230,
and ceph daemons (mgr, mon, osd) are listening to this IP.

What I would like to finally achieve, is to provide 
multipath
iSCSI access through both these Ethernets to Ceph RBDs,
and apparently, gwcli does not allow me to add a second
gateway to the same target. It is going like this :

/iscsi-target> create iqn.2018-06.host.test:test
ok
/iscsi-target> cd iqn.2018-06.host.test:test/gateways
/iscsi-target...test/gateways> create p10s 192.168.200.230 
skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
ok
/iscsi-target...test/gateways> create p10s2 192.168.200.231 
skipchecks=true
OS version/package checks have been bypassed
Adding gateway, sync'ing 0 disk(s) and 0 client(s)
Failed : Gateway creation failed, gateway(s) 
unavailable:192.168.200.231(UNKNOWN state)

host names are defined in /etc/hosts as follows :

192.168.200.230 p10s
192.168.200.231 p10s2

so I suppose that something does not listen on 
192.168.200.231, but I don't have an idea what is that thing and how to 
make it listen there. Or how to achieve this goal (utilization of both 
Ethernets for iSCSI) in different way. Shoud I aggregate Ethernets into 
a 'bond' interface with single IP ? Should I build and use 'lrbd' tool 
instead of 'gwcli' ? Is it acceptable that I run kernel 4.15, not 4.16+ 
?
What other directions could you give me on this task ?
Thanks in advance for your replies.






Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread John Hearns
Errr   is this very wise ?

I have both its Ethernets connected to the same LAN,
with different IPs in the same subnet
(like, 192.168.200.230/24 and 192.168.200.231/24)


In my experience, setting up two interfaces on the same subnet means that
your system doesn't know which one to route traffic through...






On 1 June 2018 at 09:01, Wladimir Mutel  wrote:

> Dear all,
>
> I am experimenting with Ceph setup. I set up a single node
> (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs,
> Ubuntu 18.04 Bionic, Ceph packages from
> http://download.ceph.com/debian-luminous/dists/xenial/
> and iscsi parts built manually per
> http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/)
> Also I changed 'chooseleaf ... host' into 'chooseleaf ... osd'
> in the CRUSH map to run with single host.
>
> I have both its Ethernets connected to the same LAN,
> with different IPs in the same subnet
> (like, 192.168.200.230/24 and 192.168.200.231/24)
> mon_host in ceph.conf is set to 192.168.200.230,
> and ceph daemons (mgr, mon, osd) are listening to this IP.
>
> What I would like to finally achieve, is to provide multipath
> iSCSI access through both these Ethernets to Ceph RBDs,
> and apparently, gwcli does not allow me to add a second
> gateway to the same target. It is going like this :
>
> /iscsi-target> create iqn.2018-06.host.test:test
> ok
> /iscsi-target> cd iqn.2018-06.host.test:test/gateways
> /iscsi-target...test/gateways> create p10s 192.168.200.230 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> ok
> /iscsi-target...test/gateways> create p10s2 192.168.200.231 skipchecks=true
> OS version/package checks have been bypassed
> Adding gateway, sync'ing 0 disk(s) and 0 client(s)
> Failed : Gateway creation failed, gateway(s) 
> unavailable:192.168.200.231(UNKNOWN
> state)
>
> host names are defined in /etc/hosts as follows :
>
> 192.168.200.230 p10s
> 192.168.200.231 p10s2
>
> so I suppose that something does not listen on 192.168.200.231,
> but I have no idea what that thing is and how to make it listen
> there. Or how to achieve this goal (utilization of both Ethernets for
> iSCSI) in a different way. Should I aggregate Ethernets into a 'bond'
> interface with a single IP ? Should I build and use the 'lrbd' tool instead
> of 'gwcli' ? Is it acceptable that I run kernel 4.15, not 4.16+ ?
> What other directions could you give me on this task ?
> Thanks in advance for your replies.
>