[Openstack-operators] Question about path in disk file

2018-11-07 Thread Michael Stang
Hi,

I have a question about the "disk" file of the instances; I searched for it
but didn't find any information.

At the beginning of the file there is a path to the base image which was used to
create the instance, and I want to know what this is for.

The background is that we changed the path to the image cache, and some
instances refused to start afterwards because they could not find the base image
anymore. The other instances didn't have this problem at all. I also
noticed that the info in the disk file is not updated when the instance is
migrated to another host.

So my question: is this normal? What happens when I migrate the server to another host
with different storage and a different path to the image cache?

We still use Mitaka at the moment; maybe this has already changed in a newer
version?
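(For context: in a default libvirt/QEMU setup this path is the qcow2 "backing file" recorded in the
disk image's header; it can be inspected and, if the image cache has moved, rewritten in place. A
rough sketch with generic QEMU commands; the instance and base-image paths below are placeholders,
not taken from our setup:

# show the backing file recorded in the instance's disk file
qemu-img info /var/lib/nova/instances/<instance-uuid>/disk

# point the disk at the new image-cache location without copying any data
# (-u only rewrites the header; the base image content must be identical)
qemu-img rebase -u -b /new/base/path/<base-image> /var/lib/nova/instances/<instance-uuid>/disk
)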

Thank you :)

Kind regards,

Michael



Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de/


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Connecting 2 OpenStack clouds

2016-09-30 Thread Michael Stang
Thank you for the links, I will also have a look at these ones :-)
 
It also seems very advanced for me to implement, but it looks very
interesting; maybe a good friendship with the members of the "Large Deployment
Team" would be a really good idea ;-)
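 
(For reference, the cells v1 setup Tom describes below boils down to an extra "parent" nova-api with a
[cells] section in nova.conf, plus a [cells] section on each existing installation; a rough sketch
based on the Mitaka cells documentation, where the names are placeholders and the cell registration
via nova-manage is not shown:

# on the new top-level ("parent") nova-api
[cells]
enable = True
name = api
cell_type = api

# on each existing installation ("child" cell)
[cells]
enable = True
name = site-a
cell_type = compute
)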
 
Thank you and have a nice weekend.
 
Kind regards,
Michael

> Tom Fifield <t...@openstack.org> wrote on 30 September 2016 at 09:52:
>
>
> On 30/09/16 14:06, Michael Stang wrote:
> > Hello all,
> >
> > I have a question: is it possible to connect two OpenStack clouds to use
> > the resources together?
> >
> > We have a Mitaka installation at our site, and we have colleagues who
> > also have a Mitaka installation at their site; both are independent
> > at the moment. We want to create a site-to-site VPN tunnel between our two
> > management networks (with OpenVPN) so that both installations can see
> > each other, and we are now looking for a way to connect the two.
> >
> > Is there already some way to connect both controllers so that users of
> > one site can also use the resources of the other site and start
> > instances on the other controller from their own controller?
> >
> > How is this done in large installations when two clouds should be
> > connected to each other? Is this even possible?
>
> Quick overview of various ways to do related things here:
>
> http://docs.openstack.org/ops-guide/arch-scaling.html#segregating-your-cloud
>
>
> In short, you can run up another nova-api running nova-cells to be a
> "parent" of the two installations. Then your users (and horizon) can
> connect to that and see all the resources.
>
> http://docs.openstack.org/mitaka/config-reference/compute/cells.html
>
>
> Pretty advanced usage though, so you might want to become fast friends
> with the "Large Deployment Team", many of whom run this configuration :)
>
>
> https://wiki.openstack.org/wiki/Large_Deployment_Team
>
> _______
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Kind regards,

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Connecting 2 OpenStack clouds

2016-09-30 Thread Michael Stang
Hello Saverio,
 
thank you for the links. I think federation is what we want to do (if
possible), because we have two independent clouds that we want to share with each
other, so federation might be the way to go.
 
I will have a look at the documents about it.
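 
(For completeness, the multi-region route Saverio mentions below as the easier option mostly comes
down to registering the second cloud's endpoints under their own region in a shared Keystone; a
sketch with placeholder host names, shown here only for the image service:

openstack endpoint create --region RegionTwo image public http://site-b-controller:9292
openstack endpoint create --region RegionTwo image internal http://site-b-controller:9292
openstack endpoint create --region RegionTwo image admin http://site-b-controller:9292
)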
 
Thank you and have a nice weekend.
 
Kind regards,
Michael

> Saverio Proto <ziopr...@gmail.com> wrote on 30 September 2016 at 09:27:
>
>
> Hello,
>
> If your setup has a single user database, so all users are under the
> same administrative domain, what you describe is like having different
> Openstack Regions, or different Nova Cells.
> I would suggest looking into Multi Region, which is the easier one to implement.
> http://docs.openstack.org/arch-design/multi-site-architecture.html
>
> If your setup has users in the two clouds that are managed under two
> different administrative domains, then what you are talking about is
> Cloud Federation.
>
> We had a discussion about it in Manchester:
> https://etherpad.openstack.org/p/MAN-ops-Keystone-and-Federation
>
> If your university is interested in Federation you should be aware of
> this workshop:
> https://eventr.geant.org/events/2527
>
> Cheers,
>
> Saverio
>
>
>
> 2016-09-30 8:06 GMT+02:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> > Hello all,
> >
> > I have a question: is it possible to connect two OpenStack clouds to use the
> > resources together?
> >
> > We have a Mitaka installation at our site and we have colleagues who also
> > have a Mitaka installation at their site; both are independent at the
> > moment. We want to create a site-to-site VPN tunnel between our two management
> > networks (with OpenVPN) so that both installations can see each other, and
> > we are now looking for a way to connect the two.
> >
> > Is there already some way to connect both controllers so that users of the
> > one site can also use the resources of the other site and start instances
> > on the other controller from their own controller?
> >
> > How is this done in large installations when two clouds should be connected
> > to each other? Is this even possible?
> >
> > Thank you and kind regards,
> > Michael
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
Kind regards,

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Connecting 2 OpenStack clouds

2016-09-30 Thread Michael Stang
Hello all,
 
I have a question: is it possible to connect two OpenStack clouds to use the
resources together?
 
We have a Mitaka installation at our site and we have colleagues who also have a
Mitaka installation at their site; both are independent at the moment. We
want to create a site-to-site VPN tunnel between our two management networks (with
OpenVPN) so that both installations can see each other, and we are now looking
for a way to connect the two.
 
Is there already some way to connect both controllers so that users of one
site can also use the resources of the other site and start instances on the
other controller from their own controller?
 
How is this done in large installations when two clouds should be connected to
each other? Is this even possible?
 
Thank you and kind regards,
Michael
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Mitaka live snapshot of instances not working

2016-08-26 Thread Michael Stang
Hi,
 
thank you for the link. I tried this method and got the following result:
 
 


virsh dumpxml --inactive instance-0367 > /var/tmp/instance-0367.xml

virsh blockjob instance-0367 vda --abort
error: Requested operation is not valid: No active operation on device:
drive-virtio-disk0

virsh blockcopy --domain instance-0367 vda /var/tmp/instance-0367-copy.qcow2 --wait --verbose
error: internal error: unable to execute QEMU command 'drive-mirror': Could not
create file: Permission denied

virsh blockjob instance-0367 vda --abort
error: Requested operation is not valid: No active operation on device:
drive-virtio-disk0

virsh define /var/tmp/instance-0367.xml
Domain instance-0367 defined from /var/tmp/instance-0367.xml

 


The last command I could not do, because I got no image from the 3rd command. I
tried it as a normal user and also as root, and also tried different directories to write the
image to (/var/tmp/, /tmp, ~/).

 

Kind regards,

Michael

> kostiantyn.volenbovs...@swisscom.com wrote on 25 August 2016 at 14:27:
> 
> 
>  Hi,
> 
>   
> 
>  In my previous mail I indicated a link to Kashyap's website that
> provides the sequence for a cold snapshot, not a live snapshot.
> 
>   
> 
>  Could you try the sequence specified in comment ‘Kashyap Chamarthy (kashyapc)
> https://launchpad.net/~kashyapc wrote on 2014-06-27’ in [1] ?
> 
>  Libvirt API equivalent of virsh managedsave is not something that is used in
> live snapshot according to that (I haven’t checked source code myself)
> 
>   
> 
>  BR,
> 
>  Konstantin
> 
>  [1] https://bugs.launchpad.net/nova/+bug/1334398
> 
>   
> 
>   
> 
>  From: Michael Stang [mailto:michael.st...@dhbw-mannheim.de]
>  Sent: Thursday, August 25, 2016 8:48 AM
>  To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
> <kostiantyn.volenbovs...@swisscom.com>; Saverio Proto <ziopr...@gmail.com>
>  Cc: openstack-operators@lists.openstack.org
>  Subject: RE: [Openstack-operators] Mitaka live snapshot of instances not
> working
> 
>   
> 
>  Hi Konstantin, hi Saverio
> 
>   
> 
>  thank you for your answers.
> 
>   
> 
>  I checked the version, these are
> 
>   
> 
>  libvirt 1.3.1-1ubuntu10.1~cloud0
> 
>  qemu  1:2.5+dfsg-5ubuntu10.2~cloud0
> 
>   
> 
>  at our installation, system is Ubuntu 14.04.
> 
>   
> 
>   
> 
>  I tried also the following from [2]
> 
>   
> 
>  Command: 
> 
>  nova image-create test "snap_of_test" --poll
> 
>  Result: Server snapshotting... 25% complete
>  ERROR (NotFound): Image not found. (HTTP 404)
> 
>   
> 
>   
> 
>  Then I started trying step by step as in [2] but failed at the first step
> already:
> 
>   
> 
>  Command: 
> 
>  virsh managedsave instance-0367
> 
>  Result:
> 
>  error: Failed to save domain instance-0367 state
>  error: internal error: unable to execute QEMU command 'migrate': Migration
> disabled: failed to allocate shared memory
> 
>   
> 
>  I also checked on the compute nodes the directories:
> 
>  /var/lib/libvirt/qemu/save/
>  /var/lib/nova/instances/snapshots/
> 
>  there is 257G free space and the instance only has 1GB root disk, so I think
> its not missing space.
> 
>   
> 
>  So is this maybe a problem with qemu? How can i enable 'migrate' and why is
> it disabled?
> 
>   
> 
>  Thank you for your help.
> 
>   
> 
>  Kind regards,
>  Michael
> 
>   
> 
>   
> 
>   
> 
>   
> 
> 
>  > kostiantyn.volenbovs...@swisscom.com wrote on 24 August 2016 at 14:51:
>  >
>  >
>  > Hi,
>  > extract from [1] (side note: I couldn't find that in the config reference for
>  > Mitaka) is:
>  > "disable_libvirt_livesnapshot = True
>  > (BoolOpt) When using libvirt 1.2.2 live snapshots fail intermittently under
>  > load. This config option provides a mechanism to enable live snapshot while
>  > this is resolved. See https://bugs.launchpad.net/nova/+bug/1334398"
>  >
>  > I am not sure if Nova behaves like that in case you have
>  > disable_libvirt_livesnapshot=True (default in Liberty and Mitaka
>  > apparently...)
>  > In case it is not about that, then I would try to do it manually using
>  > something like [2] as guideline to see if it succeeds using Libvirt/QEMU
>  > without Nova.
>  >
>  > BR,
>  > Konstantin
>  > [1]
>  > 
> http://docs.openstack.org/liberty/config-reference/content/list-of-compute-con

Re: [Openstack-operators] Mitaka live snapshot of instances not working

2016-08-25 Thread Michael Stang
Hi Konstantin, hi Saverio
 
thank you for your answers.
 
I checked the versions; they are
 
libvirt 1.3.1-1ubuntu10.1~cloud0
qemu  1:2.5+dfsg-5ubuntu10.2~cloud0
 
at our installation; the system is Ubuntu 14.04.
 
 
I also tried the following from [2]:
 
Command: 

nova image-create test "snap_of_test" --poll

Result: Server snapshotting... 25% complete
ERROR (NotFound): Image not found. (HTTP 404)
 
 
Then I started trying step by step as in [2] but failed at the first step
already:
 
Command: 

virsh managedsave instance-0367

Result:

error: Failed to save domain instance-0367 state
error: internal error: unable to execute QEMU command 'migrate': Migration
disabled: failed to allocate shared memory

 

I also checked on the compute nodes the directories:

/var/lib/libvirt/qemu/save/
/var/lib/nova/instances/snapshots/

there are 257 GB of free space and the instance only has a 1 GB root disk, so I
don't think it is a lack of space.

 

So is this maybe a problem with QEMU? How can I enable 'migrate', and why is it
disabled?
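
(Side note: the live-snapshot switch Konstantin mentioned below sits in nova.conf on the compute
nodes; a minimal sketch, assuming the option kept its Liberty name and its [workarounds] section in
Mitaka:

[workarounds]
# default is True, i.e. libvirt live snapshots stay disabled;
# set to False to let Nova attempt live snapshots again
disable_libvirt_livesnapshot = False
)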

 

Thank you for your help.

 

Kind regards,
Michael

 

 

 
 

> kostiantyn.volenbovs...@swisscom.com wrote on 24 August 2016 at 14:51:
>
>
> Hi,
> extract from [1] (side note: I couldn't find that in the config reference for
> Mitaka) is:
> "disable_libvirt_livesnapshot = True
> (BoolOpt) When using libvirt 1.2.2 live snapshots fail intermittently under
> load. This config option provides a mechanism to enable live snapshot while
> this is resolved. See https://bugs.launchpad.net/nova/+bug/1334398"
>
> I am not sure if Nova behaves like that in case you have
> disable_libvirt_livesnapshot=True (default in Liberty and Mitaka
> apparently...)
> In case it is not about that, then I would try to do it manually using
> something like [2] as guideline to see if it succeeds using Libvirt/QEMU
> without Nova.
>
> BR,
> Konstantin
> [1]
> http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
> [2]
> https://kashyapc.com/2013/03/11/openstack-nova-image-create-under-the-hood/
>
>
>
>
>
> From: Michael Stang [mailto:michael.st...@dhbw-mannheim.de]
> Sent: Wednesday, August 24, 2016 9:55 AM
> To: openstack-operators <openstack-operators@lists.openstack.org>
> Subject: [Openstack-operators] Mitaka live snapshot of instances not working
>
> Hi all,
>  
> we have a problem in our new Mitaka installation: it seems that it is not
> possible to take a snapshot of a running instance (a normal instance without an
> attached volume). When we try to take a snapshot we get a success message, but
> the snapshot only shows up briefly in the image list with status "deleted". If we
> shut off the instance and then take a snapshot, it works without problems.
>  
> When we use a Cinder volume as root disk instead of an ephemeral root disk, a
> volume snapshot can be made without problems while the instance is running.
>  
> We see the same behaviour on another Mitaka installation run by our
> colleagues.
>  
> Is this behaviour normal in Mitaka, or is this maybe a bug? In Juno we
> could take snapshots of running instances without problems.
>  
>  
> Regards,
> Michael
>  
>
>
> Michael Stang
> Laboringenieur, Dipl. Inf. (FH)
>
> Duale Hochschule Baden-Württemberg Mannheim
> Baden-Wuerttemberg Cooperative State University Mannheim
> ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
> Fachbereich Informatik, Fakultät Technik
> Coblitzallee 1-9
> 68163 Mannheim
>
>
> michael.st...@dhbw-mannheim.de
> http://www.dhbw-mannheim.de
>
>
>
>
Kind regards,

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Mitaka live snapshot of instances not working

2016-08-24 Thread Michael Stang
Hi all,
 
we have a problem in our new Mitaka installation: it seems that it is not
possible to take a snapshot of a running instance (a normal instance without an
attached volume). When we try to take a snapshot we get a success message, but
the snapshot only shows up briefly in the image list with status "deleted". If we
shut off the instance and then take a snapshot, it works without problems.
 
When we use a Cinder volume as root disk instead of an ephemeral root disk, a volume
snapshot can be made without problems while the instance is running.
 
We see the same behaviour on another Mitaka installation run by our
colleagues.
 
Is this behaviour normal in Mitaka, or is this maybe a bug? In Juno we
could take snapshots of running instances without problems.
 
 
Regards,
Michael
 


Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim


michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Buffer I/O error on cinder volumes

2016-07-27 Thread Michael Stang
Hi all,
 
we found the problem; it seems to be an issue in the kernel we had (Ubuntu 14.04.3,
3.19.x) with iSCSI or multipath, more likely on the iSCSI side, maybe only in our
specific configuration(?). It seems that kernel 4.0.x also suffers from
this; 4.2.x I have not tested. We have now gone back to 3.16.x, the errors are
gone, and everything works as it should.
 
We used 14.04.3 because later versions (14.04.4 and 16.04.0(1)) suffer from a
kernel bug for our hardware SCSI controller (HP BL465c G8).
 
So the newest OS/kernel version is not always the best, I think ;-)
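 
(For anyone running into the same thing, the running kernel and the state of the multipath/iSCSI
paths can be checked with the usual generic tools; device names will of course differ:

uname -r                  # running kernel version
multipath -ll             # per-LUN view of the four paths and their state
iscsiadm -m session       # active iSCSI sessions
)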
 
Kind regards,
Michael
 

> Michael Stang <michael.st...@dhbw-mannheim.de> wrote on 26 July 2016 at 18:28:
> 
>  Hi all,
>   
>  we have a strange problem on our new Mitaka installation. We see these
> messages in the syslog on the block storage node:
>   
> 
>  Jul 25 09:10:33 block1 tgtd: device_mgmt(246) sz:69
> params:path=/dev/cinder-volumes/volume-41d6c674-1d0d-471d-ad7d-07e9fab5c90d
>  Jul 25 09:10:33 block1 tgtd: bs_thread_open(412) 16
>  Jul 25 09:10:55 block1 kernel: [1471887.006569] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.006585] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.006589] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.006590] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.006593] Write(16): 8a 00 00 00 00 00
> 00 1c d0 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.006603] blk_update_request: critical
> target error, dev sdc, sector 1888256
>  Jul 25 09:10:55 block1 kernel: [1471887.025141] blk_update_request: critical
> target error, dev dm-0, sector 1888256
>  Jul 25 09:10:55 block1 kernel: [1471887.043979] buffer_io_error: 6695
> callbacks suppressed
>  Jul 25 09:10:55 block1 kernel: [1471887.043981] Buffer I/O error on dev dm-1,
> logical block 235776, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.063894] Buffer I/O error on dev dm-1,
> logical block 235777, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.082592] Buffer I/O error on dev dm-1,
> logical block 235778, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.100903] Buffer I/O error on dev dm-1,
> logical block 235779, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.119625] Buffer I/O error on dev dm-1,
> logical block 235780, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.138360] Buffer I/O error on dev dm-1,
> logical block 235781, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.157247] Buffer I/O error on dev dm-1,
> logical block 235782, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.175086] Buffer I/O error on dev dm-1,
> logical block 235783, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.193637] Buffer I/O error on dev dm-1,
> logical block 235784, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.212358] Buffer I/O error on dev dm-1,
> logical block 235785, lost async page write
>  Jul 25 09:10:55 block1 kernel: [1471887.232830] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.232833] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.232836] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.232837] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.232839] Write(16): 8a 00 00 00 00 00
> 00 1d 10 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.232847] blk_update_request: critical
> target error, dev sdc, sector 1904640
>  Jul 25 09:10:55 block1 kernel: [1471887.251046] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.251049] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.251052] sd 5:0:0:0: [sdc] Add. Sense:
> Invalid field in cdb
>  Jul 25 09:10:55 block1 kernel: [1471887.251053] sd 5:0:0:0: [sdc] CDB:
>  Jul 25 09:10:55 block1 kernel: [1471887.251054] Write(16): 8a 00 00 00 00 00
> 00 1d 50 00 00 00 40 00 00 00
>  Jul 25 09:10:55 block1 kernel: [1471887.251062] blk_update_request: critical
> target error, dev sdc, sector 1921024
>  Jul 25 09:10:55 block1 kernel: [1471887.269726] sd 5:0:0:0: [sdc] FAILED
> Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
>  Jul 25 09:10:55 block1 kernel: [1471887.269729] sd 5:0:0:0: [sdc] Sense Key :
> Illegal Request [current]
>  Jul 25 09:10:55 block1 kernel: [1471887.269732] sd 5:0:0:0: [sdc] Add. 

[Openstack-operators] Buffer I/O error on cinder volumes

2016-07-26 Thread Michael Stang
Hi all,
 
we have a strange problem on our new Mitaka installation. We see these messages
in the syslog on the block storage node:
 

Jul 25 09:10:33 block1 tgtd: device_mgmt(246) sz:69
params:path=/dev/cinder-volumes/volume-41d6c674-1d0d-471d-ad7d-07e9fab5c90d
Jul 25 09:10:33 block1 tgtd: bs_thread_open(412) 16
Jul 25 09:10:55 block1 kernel: [1471887.006569] sd 5:0:0:0: [sdc] FAILED Result:
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jul 25 09:10:55 block1 kernel: [1471887.006585] sd 5:0:0:0: [sdc] Sense Key :
Illegal Request [current]
Jul 25 09:10:55 block1 kernel: [1471887.006589] sd 5:0:0:0: [sdc] Add. Sense:
Invalid field in cdb
Jul 25 09:10:55 block1 kernel: [1471887.006590] sd 5:0:0:0: [sdc] CDB:
Jul 25 09:10:55 block1 kernel: [1471887.006593] Write(16): 8a 00 00 00 00 00 00
1c d0 00 00 00 40 00 00 00
Jul 25 09:10:55 block1 kernel: [1471887.006603] blk_update_request: critical
target error, dev sdc, sector 1888256
Jul 25 09:10:55 block1 kernel: [1471887.025141] blk_update_request: critical
target error, dev dm-0, sector 1888256
Jul 25 09:10:55 block1 kernel: [1471887.043979] buffer_io_error: 6695 callbacks
suppressed
Jul 25 09:10:55 block1 kernel: [1471887.043981] Buffer I/O error on dev dm-1,
logical block 235776, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.063894] Buffer I/O error on dev dm-1,
logical block 235777, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.082592] Buffer I/O error on dev dm-1,
logical block 235778, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.100903] Buffer I/O error on dev dm-1,
logical block 235779, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.119625] Buffer I/O error on dev dm-1,
logical block 235780, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.138360] Buffer I/O error on dev dm-1,
logical block 235781, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.157247] Buffer I/O error on dev dm-1,
logical block 235782, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.175086] Buffer I/O error on dev dm-1,
logical block 235783, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.193637] Buffer I/O error on dev dm-1,
logical block 235784, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.212358] Buffer I/O error on dev dm-1,
logical block 235785, lost async page write
Jul 25 09:10:55 block1 kernel: [1471887.232830] sd 5:0:0:0: [sdc] FAILED Result:
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jul 25 09:10:55 block1 kernel: [1471887.232833] sd 5:0:0:0: [sdc] Sense Key :
Illegal Request [current]
Jul 25 09:10:55 block1 kernel: [1471887.232836] sd 5:0:0:0: [sdc] Add. Sense:
Invalid field in cdb
Jul 25 09:10:55 block1 kernel: [1471887.232837] sd 5:0:0:0: [sdc] CDB:
Jul 25 09:10:55 block1 kernel: [1471887.232839] Write(16): 8a 00 00 00 00 00 00
1d 10 00 00 00 40 00 00 00
Jul 25 09:10:55 block1 kernel: [1471887.232847] blk_update_request: critical
target error, dev sdc, sector 1904640
Jul 25 09:10:55 block1 kernel: [1471887.251046] sd 5:0:0:0: [sdc] FAILED Result:
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jul 25 09:10:55 block1 kernel: [1471887.251049] sd 5:0:0:0: [sdc] Sense Key :
Illegal Request [current]
Jul 25 09:10:55 block1 kernel: [1471887.251052] sd 5:0:0:0: [sdc] Add. Sense:
Invalid field in cdb
Jul 25 09:10:55 block1 kernel: [1471887.251053] sd 5:0:0:0: [sdc] CDB:
Jul 25 09:10:55 block1 kernel: [1471887.251054] Write(16): 8a 00 00 00 00 00 00
1d 50 00 00 00 40 00 00 00
Jul 25 09:10:55 block1 kernel: [1471887.251062] blk_update_request: critical
target error, dev sdc, sector 1921024
Jul 25 09:10:55 block1 kernel: [1471887.269726] sd 5:0:0:0: [sdc] FAILED Result:
hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jul 25 09:10:55 block1 kernel: [1471887.269729] sd 5:0:0:0: [sdc] Sense Key :
Illegal Request [current]
Jul 25 09:10:55 block1 kernel: [1471887.269732] sd 5:0:0:0: [sdc] Add. Sense:
Invalid field in cdb
Jul 25 09:10:55 block1 kernel: [1471887.269733] sd 5:0:0:0: [sdc] CDB:
Jul 25 09:10:55 block1 kernel: [1471887.269735] Write(16): 8a 00 00 00 00 00 00
1d 90 00 00 00 11 88 00 00
Jul 25 09:10:55 block1 kernel: [1471887.269744] blk_update_request: critical
target error, dev sdc, sector 1937408
Jul 25 09:10:55 block1 kernel: [1471887.287739] blk_update_request: critical
target error, dev dm-0, sector 1904640
Jul 25 09:10:55 block1 kernel: [1471887.309002] blk_update_request: critical
target error, dev dm-0, sector 1921024
Jul 25 09:10:55 block1 kernel: [1471887.330162] blk_update_request: critical
target error, dev dm-0, sector 1937408
Jul 25 09:10:55 block1 tgtd: bs_rdwr_request(370) io error 0x9a87c0 35 0 0 0,
Input/output error
Jul 25 09:11:50 block1 tgtd: conn_close(103) connection closed, 0x9a7dc0 1
Jul 25 09:11:50 block1 tgtd: conn_close(109) session 0x9a72c0 1

 

A bit earlier we had these messages also on the compute and the object nodes.

We use iSCSI storage (HP MSA 2040) over multipathd (4 paths: sdb, sdc, sdd, sde)
on all nodes; on the compute nodes we have OCFS2

Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-20 Thread Michael Stang
Hi Lucas,
 
glad it worked for you :-)
 
The problem with the images which I couldn't delete is solved now. I had this
line hardcoded in the configuration file because I previously got the error
that it could not find the swift endpoint:
 
swift_store_endpoint = http://controller:8080/v1/
 
but this was related to it not being able to authenticate because of the missing
IDs. The configuration with this option worked for storing images, but I
think the delete needs another endpoint (maybe this one?
http://controller:8080/v1/AUTH_%(tenant_id)s ). After I removed the line
everything works fine now, and the line is not needed anymore because it gets the
right endpoint itself once it can authenticate ;-)
 
Thanks and kind regards,
Michael
 

> Lucas Di Paola <ldipaola.despe...@gmail.com> wrote on 20 July 2016 at 16:46:
> 
>  Hi Michael, 
>   
>  Adjusting some config options solved the problem. Thank you, I really
> appreciated it. 
>   
>  What roles do the users have? The ones you are trying to delete the images
> from glance? 
>   
>  Regards, 
>   
>  Lucas.-
> 
>  2016-07-20 3:01 GMT-03:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> > Hi Lucas,
> > 
> > yes, I used this ID from the command for both, and I also have the proxy at 8080
> > on the controller. Here is the final configuration I made; maybe it helps
> > you?
> > 
> >In the glance-api.conf:
> > 
> > 
> >[glance_store]
> >stores = file,http,swift
> >default_store = swift
> >filesystem_store_datadir = /var/lib/glance/images/
> >swift_store_create_container_on_put = True
> >swift_store_container = glance
> >swift_store_large_object_size = 20480
> >swift_store_large_object_chunk_size = 200
> >swift_enable_snet = False
> >default_swift_reference = ref1
> >swift_store_config_file = /etc/glance/glance-swift-store.conf
> > 
> > 
> >and in the glance-swift-store.conf:
> > 
> > 
> >[ref1]
> >user_domain_id = 
> >project_domain_id = 
> >auth_version = 3
> >auth_address = http://controller:5000/v3
> >user = service:glance
> >key = 
> > 
> > 
> > 
> > > > > 
> > >> > 
> >Kind regards,
> > 
> >Michael
> > 
> > 
> > 
> > 
> > 
> > > > > Lucas Di Paola <ldipaola.despe...@gmail.com> wrote on 19 July 2016 at 22:49:
> > > Hi Michael, 
> > >  
> > > I am having exactly the same issue, getting the error that you
> > > mentioned in your first email. I tried replacing project_domain_id´s and
> > > user_domain´s value using the , but I had no luck, still
> > > getting the same error. 
> > > Did you add the ID obtained from running the following openstack
> > > command? "openstack domain list". In that case, for both parameters, did
> > > you use the same ID? 
> > >  
> > > +----+---------+---------+----------------+
> > > | ID | Name    | Enabled | Description    |
> > > +----+---------+---------+----------------+
> > > |    | default | True    | Default Domain |
> > > +----+---------+---------+----------------+
> > >  
> > > Regarding your issue, do you have the Proxy Server listening on the
> > > 8080 port on the controller node?
> > >  
> > > 
> > > 2016-07-19 11:21 GMT-03:00 Michael Stang
> > > <michael.st...@dhbw-mannheim.de mailto:michael.st...@dhbw-mannheim.de >:
> > >   > > > >   Hi Sam, Hi all,
> > > >
> > > >   fixed the problem.  I had used
> > > >
> > > >   project_domain_id = default
> > > >   user_domain_id = default
> > > >
> > > >   instead I had to use
> > > >    
> > > >   project_domain_id = 
> > > >   user_domain_id = 
> > > >
> > > >   to make it work. Now I can store images over glance in swift, the
> > > > only problem I now have is

Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-20 Thread Michael Stang
Hi Lucas,
 
yes, I used this ID from the command for both, and I also have the proxy at 8080 on the
controller. Here is the final configuration I made; maybe it helps you?
 
In the glance-api.conf:
 

[glance_store]
stores = file,http,swift
default_store = swift
filesystem_store_datadir = /var/lib/glance/images/
swift_store_create_container_on_put = True
swift_store_container = glance
swift_store_large_object_size = 20480
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift-store.conf

 
and in the glance-swift-store.conf:
 

[ref1]
user_domain_id = 
project_domain_id = 
auth_version = 3
auth_address = http://controller:5000/v3
user = service:glance
key = 



> 

Kind regards,

Michael

 

 

> Lucas Di Paola <ldipaola.despe...@gmail.com> wrote on 19 July 2016 at 22:49:
>  Hi Michael, 
>   
>  I am having exactly the same issue, getting the error that you mentioned in
> your first email. I tried replacing project_domain_id´s and user_domain´s
> value using the , but I had no luck, still getting the same error. 
>  Did you add the ID obtained from running the following openstack command?
> "openstack domain list". In that case, for both parameters, did you use the
> same ID? 
>   
>  +----+---------+---------+----------------+
>  | ID | Name    | Enabled | Description    |
>  +----+---------+---------+----------------+
>  |    | default | True    | Default Domain |
>  +----+---------+---------+----------------+
>   
>  Regarding your issue, do you have the Proxy Server listening on the 8080
> port on the controller node?
>   
> 
>  2016-07-19 11:21 GMT-03:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> > Hi Sam, hi all,
> > 
> >fixed the problem.  I had used
> > 
> >project_domain_id = default
> >user_domain_id = default
> > 
> >instead I had to use
> > 
> >project_domain_id = 
> >user_domain_id = 
> > 
> >to make it work. Now I can store images over glance in swift, the only
> > problem I now have is that every user can upload images but only "glance"
> > can delete the images, when I try this with another user in the project then
> > in the glance-api.log I get
> > 
> > ClientException: Object HEAD failed:
> > http://controller:8080/v1/glance/  403 Forbidden
> > 
> > 
> > don't know if something is still wrong or if this might be a bug?
> > 
> > 
> >Kind regards,
> >Michael
> > 
> > 
> > 
> > 
> > 
> > > > > Michael Stang <michael.st...@dhbw-mannheim.de> wrote on 18 July 2016 at 08:44:
> > > 
> > > 
> > > Hi Sam,
> > >  
> > > thank you for your answer.
> > >  
> > > I had a look, the swift store endpoint ist listet 3 times  in the
> > > keystone, publicurl admin and internal endpoint. To try, I also set it in
> > > the glance-api.conf:
> > >  
> > > swift_store_endpoint = http://controller:8080/v1/
> > >  
> > > I also tried
> > >  
> > > swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
> > >  
> > > but both gave me the same result as bevor. Is this the right endpoint
> > > url for swift? In which config file and with what option do I have to
> > > enter it in the glance configuration?
> > >  
> > >  
> > > Thank you and kind regards,
> > > Michael
> > >  
> > >  
> > > 
>  > > > Sam Morrison <sorri...@gmail.com> wrote on 18 July 2016 at 01:41:
> > > > 
> > > >  Hi Michael,
> > > >   
> > > >  This would indicate that glance can’t find the swift endpoint in
> > > > the keystone catalog.
> > > >   
> > > >  You can either add it to the catalog or specify the swift url in
> > > > the config.
> > > >   
> > > >  Cheers,
> > > >  Sam
> > > >   
> > > > 
> > > > 
> 

Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-19 Thread Michael Stang
Hi Sam, Hi all,
 
fixed the problem.  I had used
 
project_domain_id = default
user_domain_id = default
 
instead I had to use
 
project_domain_id = 
user_domain_id = 
 
to make it work. Now I can store images over glance in swift. The only problem I
now have is that every user can upload images, but only "glance" can delete the
images; when I try this with another user in the project, I get this in the
glance-api.log:
 
ClientException: Object HEAD failed:
http://controller:8080/v1/glance/  403 Forbidden
 
I don't know if something is still wrong or if this might be a bug?
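
(For reference, the domain IDs that go into project_domain_id / user_domain_id can be read from the
openstack CLI, e.g.:

openstack domain list
openstack domain show default -f value -c id
)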
 
 
Kind regards,
Michael
 
 
 
 

> Michael Stang <michael.st...@dhbw-mannheim.de> wrote on 18 July 2016 at 08:44:
> 
>  Hi Sam,
>   
>  thank you for your answer.
>   
>  I had a look; the swift store endpoint is listed 3 times in keystone:
> publicurl, admin and internal endpoint. To try, I also set it in the
> glance-api.conf:
>   
>  swift_store_endpoint = http://controller:8080/v1/
>   
>  I also tried
>   
>  swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
>   
>  but both gave me the same result as before. Is this the right endpoint URL for
> swift? In which config file and with what option do I have to enter it in the
> glance configuration?
>   
>   
>  Thank you and kind regards,
>  Michael
>   
>   
> 
>   > > Sam Morrison <sorri...@gmail.com> wrote on 18 July 2016 at 01:41:
> > 
> >   Hi Michael,
> >
> >   This would indicate that glance can’t find the swift endpoint in the
> > keystone catalog.
> >
> >   You can either add it to the catalog or specify the swift url in the
> > config.
> >
> >   Cheers,
> >   Sam
> >
> > 
> > 
> >   > > >   On 15 Jul 2016, at 9:07 PM, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > > 
> > >   Hi everyone,
> > >
> > >   I tried to setup swift as backend for glance in our new mitaka
> > > installation. I used this in the glance-api.conf
> > >
> > > 
> > >   [glance_store]
> > >   stores = swift
> > >   default_store = swift
> > >   swift_store_create_container_on_put = True
> > >   swift_store_region = RegionOne
> > >   default_swift_reference = ref1
> > >   swift_store_config_file = /etc/glance/glance-swift-store.conf
> > > 
> > >
> > > 
> > >   and in the glance-swift-store.conf this
> > > 
> > >   [ref1]
> > >   auth_version = 3
> > >   project_domain_id = default
> > >   user_domain_id = default
> > >   auth_address = http://controller:35357/
> > >   user = services:swift
> > >   key = x
> > > 
> > >   When I trie now to upload an image it gets the status "killed" and
> > > this is in the glance-api.log
> > > 
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > [req-0ec16fa5-a605-47f3-99e9-9ab231116f04 de9463239010412d948df4020e9be277
> > > 669e037b13874b6c871
> > >   2b1fd10c219f0 - - -] Failed to upload image
> > > 6de45d08-b420-477b-a665-791faa232379
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > Traceback (most recent call last):
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line
> > > 110, in upload_d
> > >   ata_to_store
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > context=req.context)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 344, in
> > > store_add_to_b
> > >   ackend
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > verifier=verifier)
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > > "/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 226,
> > > in op_checke
> > >   r
> > >   2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > > return store_op_fun(store, *args, **kwargs)
> > >   2016-07-15 12:21:44.3

Re: [Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-18 Thread Michael Stang
Hi Sam,
 
thank you for your answer.
 
I had a look; the swift store endpoint is listed 3 times in keystone:
publicurl, admin and internal endpoint. To try, I also set it in the
glance-api.conf:
 
swift_store_endpoint = http://controller:8080/v1/
 
I also tried
 
swift_store_endpoint = http://controller:8080/v1/AUTH_%(tenant_id)s
 
but both gave me the same result as before. Is this the right endpoint URL for
swift? In which config file and with which option do I have to enter it in the
glance configuration?
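 
(The endpoint glance actually resolves can be cross-checked against the Keystone catalog; generic
commands, output omitted:

openstack catalog list
openstack catalog show object-store
)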
 
 
Thank you and kind regards,
Michael
 
 

> Sam Morrison <sorri...@gmail.com> wrote on 18 July 2016 at 01:41:
> 
>  Hi Michael,
>   
>  This would indicate that glance can’t find the swift endpoint in the keystone
> catalog.
>   
>  You can either add it to the catalog or specify the swift url in the config.
>   
>  Cheers,
>  Sam
>   
> 
> 
>  > >  On 15 Jul 2016, at 9:07 PM, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > 
> >  Hi everyone,
> >   
> >  I tried to setup swift as backend for glance in our new mitaka
> > installation. I used this in the glance-api.conf
> >   
> > 
> >  [glance_store]
> >  stores = swift
> >  default_store = swift
> >  swift_store_create_container_on_put = True
> >  swift_store_region = RegionOne
> >  default_swift_reference = ref1
> >  swift_store_config_file = /etc/glance/glance-swift-store.conf
> > 
> >   
> > 
> >  and in the glance-swift-store.conf this
> > 
> >  [ref1]
> >  auth_version = 3
> >  project_domain_id = default
> >  user_domain_id = default
> >  auth_address = http://controller:35357/
> >  user = services:swift
> >  key = x
> > 
> >  When I now try to upload an image it gets the status "killed" and this
> > is in the glance-api.log
> > 
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > [req-0ec16fa5-a605-47f3-99e9-9ab231116f04 de9463239010412d948df4020e9be277
> > 669e037b13874b6c871
> >  2b1fd10c219f0 - - -] Failed to upload image
> > 6de45d08-b420-477b-a665-791faa232379
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > Traceback (most recent call last):
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 110,
> > in upload_d
> >  ata_to_store
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > context=req.context)
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 344, in
> > store_add_to_b
> >  ackend
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > verifier=verifier)
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 226,
> > in op_checke
> >  r
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils return
> > store_op_fun(store, *args, **kwargs)
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> > line 532, in a
> >  dd
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > allow_reauth=need_chunks) as manager:
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py",
> > line 1170, in
> >  get_manager_for_store
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils store,
> > store_location, context, allow_reauth)
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/connection_manager.py",
> > l
> >  ine 64, in __init__
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
> > self.storage_url = self._get_storage_url()
> >  2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
> > "/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/connection_manager.py",
> > l
> >  ine 160, in _get_storage_url
> >  2016-07-15 12:21:44.379 14230 ER

[Openstack-operators] OpenStack mitaka using swift as backend for glance

2016-07-15 Thread Michael Stang
Hi everyone,
 
I tried to set up swift as the backend for glance in our new Mitaka installation. I
used this in the glance-api.conf:
 

[glance_store]
stores = swift
default_store = swift
swift_store_create_container_on_put = True
swift_store_region = RegionOne
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift-store.conf

 

and in the glance-swift-store.conf this

[ref1]
auth_version = 3
project_domain_id = default
user_domain_id = default
auth_address = http://controller:35357
user = services:swift
key = x

When I now try to upload an image it gets the status "killed", and this is in
the glance-api.log:

2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
[req-0ec16fa5-a605-47f3-99e9-9ab231116f04 de9463239010412d948df4020e9be277
669e037b13874b6c871
2b1fd10c219f0 - - -] Failed to upload image 6de45d08-b420-477b-a665-791faa232379
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils Traceback (most
recent call last):
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 110, in
upload_d
ata_to_store
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
context=req.context)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/backend.py", line 344, in
store_add_to_b
ackend
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
verifier=verifier)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/capabilities.py", line 226, in
op_checke
r
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils return
store_op_fun(store, *args, **kwargs)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py", line
532, in a
dd
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils
allow_reauth=need_chunks) as manager:
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py", line
1170, in
get_manager_for_store
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils store,
store_location, context, allow_reauth)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/connection_manager.py",
l
ine 64, in __init__
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils self.storage_url
= self._get_storage_url()
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils File
"/usr/lib/python2.7/dist-packages/glance_store/_drivers/swift/connection_manager.py",
l
ine 160, in _get_storage_url
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils raise
exceptions.BackendException(msg)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils BackendException:
Cannot find swift service endpoint : The resource could not be found. (HTTP
404)
2016-07-15 12:21:44.379 14230 ERROR glance.api.v1.upload_utils

 

Does anyone have an idea what I'm missing in the config file, or what might be the
problem?

Thanks and kind regards,
Michael
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Instance no bootable device

2016-07-14 Thread Michael Stang
Hi all,
 
solved it; somehow it was not working because the OCFS2 we use as
cluster storage for the instances on the compute nodes threw errors in the
syslog. It was apparently because it was formatted with a 4k block size; I reformatted it
now with a 1k block size and now it's working without problems... strange...
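 
(For reference, a rough sketch of the reformat; the block size was the option that mattered here,
while the device, label and node-slot count are placeholders:

mkfs.ocfs2 -b 1K -L instances -N 4 /dev/mapper/<shared-instances-volume>
)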
 
Another question has arisen: in our old Juno installation we used swift as the
backend for glance, but the old method is not working in Mitaka anymore. Does
anyone know of a guide for setting up swift as the glance backend in Mitaka?
 
Thanks and kind regards,
Michael
 
 

> Michael Stang <michael.st...@dhbw-mannheim.de> wrote on 13 July 2016 at 09:18:
> 
>  Hi all,
>   
>  I set up a new openstack environment with mitaka and was now to the point
> where I should be able to start an instance. When I do this the instance
> starts and shows running without error, but when I open the console I see
> this:
>   
> 
>  Booting from Hard Disk...
>  Boot failed: not a bootable disk
> 
>  No bootable device.
> 
>   
> 
>  I already looked through the logfiles, but so far I did not find the reason
> why the instance is not booting. Did anyone had this problem or know what
> might be the reason for this?
> 
>   
> 
>  Thank and kind regards,
> 
>  Michael
> 

 

> ___
>  OpenStack-operators mailing list
>  OpenStack-operators@lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Instance no bootable device

2016-07-13 Thread Michael Stang
Hi all,
 
I set up a new OpenStack environment with Mitaka and am now at the point where
I should be able to start an instance. When I do this the instance starts and
shows "running" without error, but when I open the console I see this:
 

Booting from Hard Disk...
Boot failed: not a bootable disk

No bootable device.

 

I already looked through the log files, but so far I have not found the reason why
the instance is not booting. Has anyone had this problem, or does anyone know what
might be the reason for this?

 

Thanks and kind regards,

Michael
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Migration Openstack Instances and cloud-init

2016-07-05 Thread Michael Stang
Hi Hamza,
 
thank you for your answer. I learned a lot about cloud-init in this process.
 
I have now found a way to configure this without manipulating the cloud.cfg on the
instance directly. I start the new instance from the copied snapshot with
user-data that I can inject via the "configuration" tab of a new OpenStack
instance. I use the following config for this:
 

#cloud-config
password: 
chpasswd: { expire: False }
ssh_pwauth: True
preserve_hostname: True



I think I could set some more options which may be useful, but this works fine for
me for now :-)
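 
(For completeness, the same user-data file can also be passed on the command line when booting from
the copied snapshot; all names below are placeholders:

openstack server create --image <snapshot-image> --flavor <flavor> \
  --key-name <keypair> --user-data user-data.yaml restored-instance
)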
 
Kind regards,
Michael
 

> Achi Hamza <h16m...@gmail.com> wrote on 5 July 2016 at 12:36:
> 
>  Hi Michael,
>   
>  Yes, you can comment out the modules that you don't need.
>   
>  I don't think it is a good idea to completely disable cloud-init
> functionality. If you do so, the instances wouldn't be able to automatically
> set hostnames, resize disks, inject SSH keys, and get other benefits.
>   
>  If you would like to use password for authentication you need to change the
> property lock_passwd: True to lock_passwd: False under the section system_info
> in the /etc/cloud/cloud.cfg config file.
>   
>  I do not actually know what process invokes cloud-init.
>   
>  I hope the above helps.
>   
>  Regards,
>  Hamza
> 
>  On 4 July 2016 at 08:31, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > Hi Hamza,
> > 
> >thank you for your answere. So I have to comment out the modules i don't
> > want to run when I understand this right?
> > 
> >Why are the changes made to the instance when I start a snapshot of the
> > instance and not every time when I start the instance itself? What event
> > triggers the cloud-init to make this changes?
> > 
> >What would happen when I remove the cloud-init package, will the instance
> > still be working and what disadventages would I have then?
> > 
> >Thank you and kind regards,
> >Michael
> > 
> > 
> > 
> > > > > Achi Hamza <h16m...@gmail.com> wrote on 2 July 2016 at 14:30:
> > > 
> > > 
> > > Hi Michael,
> > >  
> > > You can change this behavior in the /etc/cloud/cloud.cfg config file.
> > >  
> > > Regards,
> > > Hamza
> > > 
> > > On 1 July 2016 at 07:02, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > >   > > > >   Hi all,
> > > >
> > > >   I tried to copy instances (based on the ubuntu cloud image) from
> > > > our production cloud to our test cloud according to this description
> > > >
> > > > 
> > > >  
> > > > http://docs.openstack.org/user-guide/cli_use_snapshots_to_migrate_instances.html
> > > >
> > > >   but when I start the instance on the test cloud the root password
> > > > ist resetet and it seems that the cloud-init is invoked and change the
> > > > instance like it is the first start. Is there a way to prevent this
> > > > behaviour so the instance stays unchanged when I start it on the test
> > > > cloud?
> > > >
> > > >   Thanks and kind regards,
> > > >   Michael
> > > > 
> > > >   ___
> > > >   OpenStack-operators mailing list
> > > >   OpenStack-operators@lists.openstack.org
> > > > mailto:OpenStack-operators@lists.openstack.org
> > > > 
> > > >  
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > > > > > > 
> > >> > 
> > 
> > Kind regards,
> > 
> >Michael Stang
> >Laboringenieur, Dipl. Inf. (FH)
> > 
> >Duale Hochschule Baden-Württemberg Mannheim
> >Baden-Wuerttemberg Cooperative State University Mannheim
> >ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
> >Fachbereich Informatik, Fakultät Technik
> >Coblitzallee 1-9
> >68163 Mannheim
> > 
> >Tel.: +49 (0)621 4105 - 1367
> >michael.st...@dhbw-mannheim.de mailto:michael.st...@dhbw-mannheim.de
> >http://www.dhbw-mannheim.de
> > 
> > 
> > 
> > 
> > 
> >  > 

 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Migration Openstack Instances and cloud-init

2016-07-04 Thread Michael Stang
Hi Hamza,
 
thank you for your answer. So I have to comment out the modules I don't want to
run, if I understand this right?
 
Why are the changes made to the instance when I start a snapshot of the instance,
and not every time I start the instance itself? What event triggers
cloud-init to make these changes?
 
What would happen if I removed the cloud-init package? Would the instance still
work, and what disadvantages would I have then?
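 
(For reference, a sketch of the kind of edit meant in /etc/cloud/cloud.cfg; the exact keys are an
assumption based on the Ubuntu cloud image layout, where the default user is defined under
system_info:

system_info:
  default_user:
    name: ubuntu
    # allow password login instead of locking the account
    lock_passwd: false
)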
 
Thank you and kind regards,
Michael
 
 

> Achi Hamza <h16m...@gmail.com> wrote on 2 July 2016 at 14:30:
> 
>  Hi Michael,
>   
>  You can change this behavior in the /etc/cloud/cloud.cfg config file.
>   
>  Regards,
>  Hamza
> 
>  On 1 July 2016 at 07:02, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > Hi all,
> > 
> >I tried to copy instances (based on the ubuntu cloud image) from our
> > production cloud to our test cloud according to this description
> > 
> > 
> >   
> > http://docs.openstack.org/user-guide/cli_use_snapshots_to_migrate_instances.html
> > 
> >but when I start the instance on the test cloud the root password ist
> > resetet and it seems that the cloud-init is invoked and change the instance
> > like it is the first start. Is there a way to prevent this behaviour so the
> > instance stays unchanged when I start it on the test cloud?
> > 
> >Thanks and kind regards,
> >Michael
> > 
> >___
> >OpenStack-operators mailing list
> >OpenStack-operators@lists.openstack.org
> > mailto:OpenStack-operators@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >  > 

 
Kind regards,

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Migration Openstack Instances and cloud-init

2016-07-01 Thread Michael Stang
Hi all,
 
I tried to copy instances (based on the ubuntu cloud image) from our production
cloud to our test cloud according to this description
 
http://docs.openstack.org/user-guide/cli_use_snapshots_to_migrate_instances.html
 
but when I start the instance on the test cloud, the root password is reset and it
seems that cloud-init is invoked and changes the instance as if it were the first
start. Is there a way to prevent this behaviour so the instance stays unchanged
when I start it on the test cloud?
 
Thanks and kind regards,
Michael
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Data-Migration Juno -> Mitaka

2016-06-29 Thread Michael Stang
Hi Blair,
 
thank you for your answer. The tool Roland suggested is what we were looking
for; we want to migrate the end-user data from one cloud to another.
 
Your suggestion about the database transfer also sounds interesting, but if I
dump my Juno DB and import it into the Mitaka test DB, would this work? AFAIK the
DB schema also changes between OpenStack versions; is it possible to import an
"old" DB and get it to work in a newer version?
 
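To spell out what I think that would involve -- a rough sketch only; the database
names are the defaults and may differ, and as you say the schema migrations are
meant to be run one release at a time, so Kilo and Liberty would have to be
stepped through in between:
 
   # on the Juno controller, dump the service databases
   mysqldump --opt nova > nova.sql
   mysqldump --opt glance > glance.sql
   mysqldump --opt keystone > keystone.sql
 
   # on the test controller, import the dumps and run the schema migrations
   # of the installed release for each service
   mysql nova < nova.sql
   nova-manage db sync
   glance-manage db_sync
   keystone-manage db_sync
 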
Regards,
Michael
 
 
 

> Blair Bethwaite <blair.bethwa...@gmail.com> wrote on 29 June 2016 at 02:43:
>
>
> Hi Roland -
>
> GUTS looks cool! But I took Michael's question to be more about
> control plane data than end-user instances etc...?
>
> Michael - If that's the case then you probably want to start with
> dumping your present Juno DBs, importing into your Mitaka test DB and
> then attempting the migrations to get to Mitaka, if they work then you
> might be able to bring up a "clone cloud" (of course there is probably
> a whole lot of network specific config in there that won't work unless
> you are doing this in a separate/isolated name-and-address space,
> there's also all the config files...). Also, as others have noted on
> this list recently, live upgrades are only supported/tested(?) between
> successive versions.
>
> Cheers,
>
> On 29 June 2016 at 09:54, Roland Chan <rol...@aptira.com> wrote:
> > Hi Michael
> >
> > We built a tool called GUTS to migrate various assets between OpenStack
> > deployment (and other things as well). You can check it out at
> > https://github.com/aptira/guts. It can migrate Instances, Volumes, Networks,
> > Tenants, Users and Security Groups from one OpenStack to another.
> >
> > It's a work in progress, but we're always happy to accept input.
> >
> > Hope this helps, feel free to contact me if you need anything.
> >
> > Roland
> >
> >
> >
> > On 28 June 2016 at 16:07, Michael Stang <michael.st...@dhbw-mannheim.de>
> > wrote:
> >>
> >> Hello all,
> >>
> >>
> >>
> >> we setup a small test environment of Mitaka to learn about the
> >> installation and the new features. Before we try the Upgrade of out Juno
> >> production environment we want to migrate all it’s data to the Mitaka
> >> installation as a backup and also to make tests.
> >>
> >>
> >>
> >> Is there an easy way to migrate the data from the Juno environment to the
> >> mitaka environment or has this to be done manually piece by piece? I found
> >> already a tool named CloudFerry but the instructions to use it are not much
> >> and also there seems to be no support for mitaka by now, is there any other
> >> software/tool to help for migrating data?
> >>
> >>
> >>
> >> Thanks and kind regards,
> >>
> >> Michael
> >>
> >>
> >> _______
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
>
>
> --
> Cheers,
> ~Blairo
Kind regards

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Data-Migration Juno -> Mitaka

2016-06-28 Thread Michael Stang
Hello all,

 

we set up a small test environment of Mitaka to learn about the installation and
the new features. Before we try the upgrade of our Juno production environment,
we want to migrate all of its data to the Mitaka installation, as a backup and
also for testing.

 

Is there an easy way to migrate the data from the Juno environment to the
Mitaka environment, or does this have to be done manually piece by piece? I have
already found a tool named CloudFerry, but the instructions for using it are sparse
and there seems to be no Mitaka support yet. Is there any other
software/tool to help with migrating the data?

 

Thanks and kind regards,

Michael

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-28 Thread Michael Stang
Thank you for the information :-)

 I have never used Ceph RBD before, but we will have a look into it to consider
whether it is an option for us.

Regards,
Michael

-----Original Message-----
From: Jonathan D. Proulx [mailto:j...@csail.mit.edu]
Sent: Tuesday, 21 June 2016 16:51
To: Michael Stang
Cc: Matt Jarvis; OpenStack Operators
Subject: Re: [Openstack-operators] Shared Storage for compute nodes

On Tue, Jun 21, 2016 at 11:42:45AM +0200, Michael Stang wrote:
:I think I did not ask my question correctly; it is not about the cinder
:backend, I meant the shared storage for the instances which is shared by the
:compute nodes. Or can cinder also be used for this? Sorry if I ask stupid
:questions, OpenStack is still new for me ;-)


We use Ceph RBD for:

Nova ephemeral storage
Cinder Volume storage
Glance Image storage

(and ceph for object storage too)

/var/lib/nova, which holds the libvirt xml files that actually define instances,
lives on local node storage.

This is sufficient for us to do live migration.  However as of Kilo at least 
'vacating' a failed node doesn't work as it assumes /var/lib/nova is on shared 
storage if the ephemeral storage is shared even though the xml could be 
recreated from the database.  I don't know if Juno or Mitaka still have this 
issue or not.

If I were trying to solve that I'd probably go with NFS for /var/lib/nova as 
it's easy, storage is small (just text files) and load is light.  But we've 
been very happy with ceph rbd for ephemeral storage.
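 
A minimal sketch of what that looks like in the config files (pool and user
names here are just examples; see the ceph/openstack docs for the full set of
options):
 
   # nova.conf on the compute nodes
   [libvirt]
   images_type = rbd
   images_rbd_pool = vms
   rbd_user = cinder
   rbd_secret_uuid = <libvirt secret uuid>
 
   # cinder.conf
   [ceph]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   rbd_pool = volumes
   rbd_user = cinder
   rbd_secret_uuid = <libvirt secret uuid>
 
   # glance-api.conf
   [glance_store]
   stores = rbd
   default_store = rbd
   rbd_store_pool = images
   rbd_store_user = glance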

Our use case is private cloud w/ 80 hypervisors and about 1k running VMs 
supported by a team of two (each of whom has other responsibilites as well).  
Ceph is 3 monitors and 9 storage nodes with 370T raw storage ( with triple 
replication net storage is 1/3 of that.)

-Jon

: 
:Regards,
:Michael
: 
:
:> Matt Jarvis <matt.jar...@datacentred.co.uk> wrote on 21 June 2016 at 10:21:
:>
:>  If you look at the user survey (
:> https://www.openstack.org/user-survey/survey-2016-q1/landing ) you can see 
:> what the current landscape looks like in terms of deployments. Ceph is by far
:> the most commonly used storage backend for Cinder.
:>
:>  On 21 June 2016 at 08:27, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
:>> >Hi,
:> > 
:> >I wonder what is the recommendation for a shared storage for the compute
:> > nodes? At the moment we are using an iSCSI device which is served to all 
:> > compute nodes with multipath, the filesystem is OCFS2. But this makes it a 
:> > little unflexible in my opinion, because you have to decide how many 
compute :> > nodes you will have in the future.
:> > 
:> >So is there any suggestion which kind of shared storage to use for the
:> > compute nodes and what filesystem?
:> > 
:> >Thanky,
:> >Michael
:> > 
:> > 
:> >___
:> >OpenStack-operators mailing list
:> >OpenStack-operators@lists.openstack.org
:> > mailto:OpenStack-operators@lists.openstack.org
:> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
:> >  >
:>  DataCentred Limited registered in England and Wales no. 05611763
:
: 
:Kind regards
:
:Michael Stang
:Laboringenieur, Dipl. Inf. (FH)
:
:Duale Hochschule Baden-Württemberg Mannheim
:Baden-Wuerttemberg Cooperative State University Mannheim
:ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
:Fachbereich Informatik, Fakultät Technik
:Coblitzallee 1-9
:68163 Mannheim
:
:Tel.: +49 (0)621 4105 - 1367
:michael.st...@dhbw-mannheim.de
:http://www.dhbw-mannheim.de

:___
:OpenStack-operators mailing list
:OpenStack-operators@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


-- 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Sampath,
 
no, I haven't read this one yet; thank you, I will go through it.
 
Regards,
Michael

> Sam P <sam47pr...@gmail.com> wrote on 21 June 2016 at 09:55:
>
>
> Hi,
>
> Hope you have already gone through this document... if not FYI
> http://docs.openstack.org/ops-guide/arch_storage.html
>
> As Saverio said, Ceph is widely adopted solution.
> For small clouds, we found that NFS is a much more affordable solution in
> terms of cost and complexity.
>
> --- Regards,
> Sampath
>
>
>
> On Tue, Jun 21, 2016 at 4:42 PM, Saverio Proto <ziopr...@gmail.com> wrote:
> > Hello Michael,
> >
> > a very widely adopted solution is to use Ceph with rbd volumes.
> >
> > http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
> > http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> >
> > you find more options here under Volume drivers:
> > http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
> >
> > Saverio
> >
> >
> > 2016-06-21 9:27 GMT+02:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> >> Hi,
> >>
> >> I wonder what is the recommendation for a shared storage for the compute
> >> nodes? At the moment we are using an iSCSI device which is served to all
> >> compute nodes with multipath, the filesystem is OCFS2. But this makes it a
> >> little unflexible in my opinion, because you have to decide how many
> >> compute
> >> nodes you will have in the future.
> >>
> >> So is there any suggestion which kind of shared storage to use for the
> >> compute nodes and what filesystem?
> >>
> >> Thanky,
> >> Michael
> >>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Kind regards

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Saverio,
 
thank you, I will have a look at these documents.
 
Michael

> Saverio Proto <ziopr...@gmail.com> wrote on 21 June 2016 at 09:42:
>
>
> Hello Michael,
>
> a very widely adopted solution is to use Ceph with rbd volumes.
>
> http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> you find more options here under Volume drivers:
> http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
>
> Saverio
>
>
> 2016-06-21 9:27 GMT+02:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> > Hi,
> >
> > I wonder what is the recommendation for a shared storage for the compute
> > nodes? At the moment we are using an iSCSI device which is served to all
> > compute nodes with multipath, the filesystem is OCFS2. But this makes it a
> > little unflexible in my opinion, because you have to decide how many compute
> > nodes you will have in the future.
> >
> > So is there any suggestion which kind of shared storage to use for the
> > compute nodes and what filesystem?
> >
> > Thanky,
> > Michael
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
Kind regards

Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

Tel.: +49 (0)621 4105 - 1367
michael.st...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hi,
 
I wonder what the recommendation is for a shared storage for the compute nodes?
At the moment we are using an iSCSI device which is served to all compute nodes
with multipath; the filesystem is OCFS2. But this makes it a little inflexible
in my opinion, because you have to decide in advance how many compute nodes you
will have in the future.
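 
For reference, the current layout on each compute node is roughly the following
(device name and mount point are placeholders):
 
   # /etc/fstab on each compute node (sketch)
   /dev/mapper/mpatha  /var/lib/nova/instances  ocfs2  _netdev,defaults  0 0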
 
So is there any suggestion which kind of shared storage to use for the compute
nodes and what filesystem?
 
Thanks,
Michael
 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-19 Thread Michael Stang
I think we will give it a try then, thank you :-)
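 
Roughly what we expect to repeat for each step (Juno -> Kilo -> Liberty ->
Mitaka), based on the ops guide -- a sketch only, with Ubuntu cloud archive
packages assumed:
 
   # on each node, per release step
   add-apt-repository cloud-archive:kilo     # then liberty, then mitaka
   apt-get update && apt-get dist-upgrade
 
   # on the controller, run the schema migrations for each service, e.g.
   nova-manage db sync
   glance-manage db_sync
   keystone-manage db_sync
   cinder-manage db sync
 
   # then restart the services and verify before moving on to the next release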
 
Kind regards,
Michael

> Simon Leinen <simon.lei...@switch.ch> wrote on 18 June 2016 at 16:21:
>
>
> Michael Stang writes:
> > Is this the current guide for upgrades, and is it valid for every
> > upgrade or only for specific versions?:
> > http://docs.openstack.org/ops-guide/ops_upgrades.html
>
> Yes, that's part of the official Operations Guide. It is not
> version-specific. The examples are based on Ubuntu as the underlying OS
> distribution. But the approach and recommendations are general.
> --
> Simon.
 ___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Upgrade OpenStack Juno to Mitaka

2016-06-15 Thread Michael Stang
Hi,
 
my name is Michael, I am working at the Baden-Wuerttemberg Cooperative State
University Mannheim.
 
We have an OpenStack installation with around 14 nodes of 32 cores each (1
controller (glance, keystone, etc.), 1 neutron node, 2 object stores, 1 block
store, 9 compute nodes). The block store, the object store and the compute nodes
use a storage node over iSCSI with multipath to store the data, virtual machines,
etc. The glance image service uses the object store as storage for the images.
 
At the moment we are running the Juno release and we want to upgrade the
installation to Mitaka without losing any data (users, images, volumes, virtual
machines, etc.). I have already tried to find documentation on how such an upgrade
should be performed, but I didn't find anything that describes this well.
 
So the following questions have arisen:
 
What is the best way to perform an upgrade from Juno to Mitaka?
Is the best way to upgrade from Juno -> Kilo -> Liberty -> Mitaka, or is it
possible to migrate directly to Mitaka?
Is it better to perform an in-place upgrade, or is it better to set up a
new environment?
What is the best way to save the existing data so it can be imported into a new
environment?
Is there any well-described how-to or best-practice guide for such an upgrade?
 
Any ideas or help would be welcome :-)
 
Kind regards,
Michael



Michael Stang
Laboringenieur, Dipl. Inf. (FH)

Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
ZeMath Zentrum für mathematisch-naturwissenschaftliches Basiswissen
Fachbereich Informatik, Fakultät Technik
Coblitzallee 1-9
68163 Mannheim

zem...@dhbw-mannheim.de
http://www.dhbw-mannheim.de
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators