On Apr 1, 2017 4:31 PM, "Jorge Luiz Corrêa" wrote:
There are some researchers that already have some docker images with
workflows. So I would like to run docker images.
:)
I'm thinking that that wiki is not up to date.
The nova-docker driver is a relic from long ago, and
Excerpts from Jorge Luiz Corrêa's message of 2017-04-01 17:17:29 -0300:
> There are some researchers that already have some docker images with
> workflows. So I would like to run docker images.
>
> :)
>
> I'm thinking that that wiki is not up to date.
Docker is not suitable for the
For anyone out there facing similar issues my problem was due to the
following line in /etc/sysconfig/iptables
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
As soon as all forwarded traffic was permitted, my problem was solved.
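For anyone wanting the concrete change, it was roughly this (stock CentOS/RHEL firewall file; a narrower rule scoped to your tenant networks would be safer than a blanket accept):

```
# /etc/sysconfig/iptables
# The offending rule -- remove or comment it out:
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# Permit forwarded traffic instead:
-A FORWARD -j ACCEPT
```

followed by a `systemctl restart iptables` (or `service iptables restart`) to reload the rules.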
Best regards,
G.
I have installed on Ubuntu, so I
Hi all,
Please, could anyone help me with the problem shown in the following
email?
Regards
Ignazio
-- Forwarded message --
From: "Ignazio Cassano"
Date: 31/Mar/2017 16:52
Subject: newton heat stack glance error
To: "OpenStack Operators"
Just circling back on this for posterity, in case it helps someone else with a
similar issue:
We found that this issue is a bug in the XIO cinder driver and XIO management
server code related to their Glance image caching implementation. Cinder
volumes that were created as snapshots behind
Hi!
I understand that OpenStack allocates selfservice IP addresses to VMs
from a DHCP server.
Is there a way to instruct this DHCP server to always allocate the same
internal (selfservice network) IP address
to a specific VM based on its MAC address for example?
Regards,
G.
Adding "heat" to the subject line. Heat people, I've responded to
Ignazio on the operators' list [0], but you may have some helpful info.
[0] (would link here, but can't find the April 2017 page in pipermail)
Thread starts here:
For people dealing with the same problem: I was able to overcome it
by installing the "openstack-ec2-api" package from the
centos-openstack-ocata repository.
Although the binaries were exactly the same as mine (I did a checksum),
installing the package revealed a much more detailed
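For reference, the fix on my side boiled down to the following (assuming the centos-openstack-ocata repo is already enabled on the host; the service name may differ slightly depending on packaging):

```
# Pull in the packaged EC2 API and restart it so the packaged
# configuration takes effect:
yum install -y openstack-ec2-api
systemctl restart openstack-ec2-api
```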
you can use the fixed-ip option for a specific vm
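As a sketch of what that looks like with the CLI (the network, subnet, address, and instance names below are just examples for your environment): pre-create a port with a fixed IP on the selfservice network, then boot the VM on that port so DHCP always hands it that address:

```
# Reserve a specific fixed IP by pre-creating the port:
openstack port create --network selfservice \
    --fixed-ip subnet=selfservice-subnet,ip-address=192.168.1.50 \
    myvm-port

# Boot the instance attached to that port:
openstack server create --flavor m1.small --image cirros \
    --nic port-id=myvm-port myvm
```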
Regards,
> On Apr 2, 2017, at 12:11 AM, Georgios Dimitrakakis wrote:
>
> Hi!
>
> I understand that OpenStack allocates selfservice IP addresses to VMs from a
> DHCP server.
> Is there a way to instruct this DHCP server to always
Hi,
On 2017-04-01 17:11, Georgios Dimitrakakis wrote:
Hi!
I understand that OpenStack allocates selfservice IP addresses to VMs
from a DHCP server.
Is there a way to instruct this DHCP server to always allocate the
same internal (selfservice network) IP address
to a specific VM based on its
Good to hear, Mike,
we did a project where we tried to use XIO but it was not working well.
About time they start putting some time into pushing some updates.
Remo
> On Apr 1, 2017, at 07:50, Mike Smith wrote:
>
> Just circling back on this for posterity, in case it
There are some researchers that already have some docker images with workflows.
So I would like to run docker images.
:)
I'm thinking that that wiki is not up to date.
Tks
Sent from my iPhone
> On 31 Mar 2017, at 19:27, Martinx - ジェームズ wrote:
>
> Why not LXD
Dear OpenStackers,
With Pike development having been underway for some time now, it is a great
time to have a meeting to coordinate our efforts!
That is why we will have an IRC meeting on Tuesday the 11th of April at 12:30 UTC
on #cloudkitty.
I hope to see many of you there.
All the best
Christophe
On Sat, Apr 1, 2017 at 5:21 PM, Matt Riedemann wrote:
> On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
>
>> Hi all,
>>
>> The below was suggested for a Forum session but we don't yet have a
>> submission or name to chair/moderate. I, for one, would certainly be
>> interested in
On 4/1/2017 8:36 AM, Blair Bethwaite wrote:
Hi all,
The below was suggested for a Forum session but we don't yet have a
submission or name to chair/moderate. I, for one, would certainly be
interested in providing input. Do we have any owners out there?
Resource reservation requirements:
==
The
At our site, we've seen bugs in idempotence break our system too.
In one case, it was an edge case of the master server going uncontactable at
just the wrong time for a few seconds, causing the code to (wrongly) believe
that keys didn't exist and needed to be recreated, then network
On 4/1/2017 12:17 PM, Jay Bryant wrote:
Matt,
I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, and I believe
all the pieces were in place for that. The missing part (sticking point) was
doing a rescan of the SCSI bus in
On 3/30/2017 10:28 PM, Tom Fifield wrote:
Hi all,
Forum topic submission closes in 2 days (Sunday 23:59 UTC).
One of the types of topics you could consider submitting is a user/dev
feedback session for your project. I see Swift, Keystone and Kolla have
already done this - thanks!
From
On 4/1/2017 4:07 PM, Matt Riedemann wrote:
On 4/1/2017 12:17 PM, Jay Bryant wrote:
Matt,
I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, and I believe
all the pieces were in place for that. The missing part (sticking
Great idea, happy to try it out for Trove. We love o.m.rpc :) But it needs to
be secure; another comment has been posted in review. I'm doing a talk about o.m
use by Trove in Boston anyway; maybe we can get Melissa to join me for that?
-amrith
-Original Message-
From: Deja, Dawid
Thanks for the reminder!
2017-04-01 2:28 GMT+08:00 Andreas Jaeger :
> Since this morning, part of logs.openstack.org is corrupt due to a
> downtime of one of the backing stores. The infra admins are currently
> running fsck and then will take everything back in use.
>
> Right now,
There was a time when this feature was proposed in both Cinder [1] and
Nova [2], but unfortunately no one (correct me if I am wrong) is going to
handle it during Pike. We do think extending an online volume is
a beneficial feature that is mostly supported by vendors. We really don't
On Tue, Mar 28, 2017, at 05:14 PM, Jeremy Stanley wrote:
> The Mailman listserv on lists.openstack.org will be offline for an
> upgrade-related maintenance for up to 3 hours (but hopefully much
> less) starting at 20:00 UTC March 31, this coming Friday. This
> activity is scheduled for a
Hey folks,
I wanted to raise awareness of the concept of idempotence[0] and how
it affects deployment(s). In the Puppet world, we consider this very
important, since Puppet is all about ensuring a desired state
(i.e. a system with config files + services). That being said, I feel
that it
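To make the property concrete, here is a toy shell sketch (paths are only for the example): each step checks state before changing it, so running the whole block twice leaves the system unchanged.

```shell
# Idempotent "ensure desired state" steps: safe to re-run any number of times.
STATE_DIR="${TMPDIR:-/tmp}/idem-demo"
CONF="$STATE_DIR/app.conf"

mkdir -p "$STATE_DIR"          # no-op (and no error) if the directory exists
touch "$CONF"                  # ensure the file exists without truncating it

# Append the setting only if it is not already present -- a blind >> here
# would duplicate the line on every run, which is the non-idempotent bug.
grep -qx 'debug = false' "$CONF" || echo 'debug = false' >> "$CONF"
```

Configuration management tools like Puppet build this check-then-change pattern into every resource type.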
Andreas,
looks like we are past the POST_FAILUREs. However, the log links seem
to redirect to a 404.
example in https://review.openstack.org/#/c/451964/, the log link is:
http://logs.openstack.org/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/
which
Never mind :) looks like it was a transient issue from last night.
logs show up correctly with a recheck
Thanks,
Dims
On Sat, Apr 1, 2017 at 9:28 AM, Davanum Srinivas wrote:
> Andreas,
>
> looks like we are past the POST_FAILURE's. However the log links seem
> to redirect to
Here comes the all-green again ;)
On 2017-03-31 20:28, Andreas Jaeger wrote:
> Since this morning, part of logs.openstack.org is corrupt due to a
> downtime of one of the backing stores. The infra admins are currently
> running fsck and then will take everything back in use.
>
> Right now, we
Hi Brian, when I enable the v1 and v2 APIs in glance-api.conf, using the same
heat template I get HTTP 500.
Now in my glance-api.conf I have the same configuration used on Ubuntu 16
(where heat works fine):
#enable_v1_api = false
#enable_v1_registry = true
#enable_v2_api = true
#enable_v2_registry =
Hi Brian,
I found where the problem is.
In glance-api.conf I must set the registry_host entry, and it must be
equal to the bind_host entry in the file glance-registry.conf.
So, with the v1 API enabled and the above parameters set, heat works.
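Concretely, the working combination was of this shape (hostnames here are placeholders; the key point is that the two values match):

```
# glance-api.conf
[DEFAULT]
enable_v1_api = true
registry_host = controller-1      # must equal bind_host below

# glance-registry.conf (on the same controller)
[DEFAULT]
bind_host = controller-1
```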
My problem is that I am using 3 controllers and the bind_host in the
Hello Ignazio. A few things:
(1) You can run both glance v1 and v2 simultaneously. They share the
same database and storage backend. The difference is that you, as a
client, interact with the APIs in different ways (v1 does a lot of
info-passing in http headers, v2 does everything in JSON; v2
Hi all,
The below was suggested for a Forum session but we don't yet have a
submission or name to chair/moderate. I, for one, would certainly be
interested in providing input. Do we have any owners out there?
Resource reservation requirements:
==
The Blazar project
Hi, thanks for your answer.
Heat Newton on Ubuntu 16.04 works fine even if images are under /v2.
On CentOS 7, with the same configuration, it does not work.
2017-03-31 20:42 GMT+02:00 Basil Baby :
> Default seems to v1
> https://github.com/openstack/heat/blob/stable/newton/heat/
Hi again, I hope the configuration in the previous message will work when I
enable the cluster.
I am going to test it.
Many thanks for your help.
Ignazio
On 01/Apr/2017 15:22, "Ignazio Cassano"
wrote:
> Hi Brian,
> I found where is the problem
> In glance-api.conf
On 2017-04-01 15:28, Davanum Srinivas wrote:
> Andreas,
>
> looks like we are past the POST_FAILURE's. However the log links seem
> to redirect to a 404.
>
> example in https://review.openstack.org/#/c/451964/, the log link is:
>
On 3/31/2017 8:55 PM, TommyLike Hu wrote:
There was a time when this feature was proposed in both Cinder [1]
and Nova [2], but unfortunately no one (correct me if I am wrong) is
going to handle this feature during Pike. We do think extending an
online volume is a beneficial and mostly
Matt,
I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, and I believe
all the pieces were in place for that. The missing part (sticking point) was
doing a rescan of the SCSI bus in the node that had the extended volume
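For the record, the rescan in question is the host-side SCSI size refresh, something along these lines (the H:C:T:L address and map name are illustrative, and the exact procedure depends on the transport and on multipath):

```
# Ask the kernel to re-read the capacity of an already-attached SCSI device:
echo 1 > /sys/class/scsi_device/2:0:0:1/device/rescan

# For iSCSI-backed volumes, the initiator can rescan its sessions:
iscsiadm -m node -R

# With multipath on top, the map also needs resizing:
multipathd -k"resize map mpatha"
```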
Now the system is up. Just push a recheck. It should be fine.
Thanks,
Trinath Somanchi | HSDC, GSD, DN | NXP – Hyderabad – INDIA.
From: ChangBo Guo [mailto:glongw...@gmail.com]
Sent: Saturday, April 01, 2017 7:17 AM
To: OpenStack Development Mailing List (not for usage questions)
I know we've talked about this over and over and another bug [1]
reminded me of it. We have long talked about removing the ability to
specify a block device name when creating a server or attaching a volume
because we can't honor the requested device name anyway and trying to do
so just causes
Tested the configuration in the cluster. Heat now works as expected.
The registry_host address must be the host address, not the cluster VIP.
Regards
Ignazio
On 01/Apr/2017 16:40, "Ignazio Cassano"
wrote:
> Hi again, I hope the configuration in previous message will work