Jimmy,
While it's not a clash within the forum, there are two sessions for Ironic
scheduled at the same time on Tuesday at 14h20, each of which has Julia as a
speaker.
Tim
-Original Message-
From: Jimmy McArthur
Reply-To: "OpenStack Development Mailing List (not for usage
Doug,
Thanks for raising this. I'd like to highlight the goal "Finish moving legacy
python-*client CLIs to python-openstackclient" from the etherpad and propose
this for a T/U series goal.
To give it some context and the motivation:
At CERN, we have more than 3000 users of the OpenStack
Found the previous discussion at
http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html
from 2016.
Tim
-Original Message-
From: Tim Bell
Date: Saturday, 15 September 2018 at 14:38
To: "OpenStack Development Mailing List (not for usage ques
One extra user motivation that came up during past forums was to have a
different quota for shelved instances (or remove them from the project quota
all together). Currently, I believe that a shelved instance still counts
towards the instances/cores quota thus the reduction of usage by the user
So +1
Tim
From: Lance Bragstad
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, 12 September 2018 at 20:43
To: "OpenStack Development Mailing List (not for usage questions)"
, OpenStack Operators
Subject: [openstack-dev] [all] Consistent policy
Saverio,
And thanks for all your hard work with the OpenStack community, especially the
Swiss OpenStack user group (https://www.meetup.com/openstack-ch/).
Hope to have a chance to work again together in the future.
Tim
From: Jimmy McArthur
Date: Thursday, 6 September 2018 at 18:06
To: Amy
such as preserving IP
addresses etc.
Sounds like a good topic for PTG/Forum?
Tim
-Original Message-
From: Jay Pipes
Date: Wednesday, 29 August 2018 at 22:12
To: Dan Smith , Tim Bell
Cc: "openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] [nova][cinder][neut
I've not followed all the arguments here regarding internals but CERN's
background usage of Cells v2 (and thoughts on impact of cross cell migration)
is below. Some background at
https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern.
Some rough parameters with
You may also need something like pre-emptible instances to arrange the clean up
of opportunistic VMs when the owner needs his resources back. Some details on
the early implementation at
http://openstack-in-production.blogspot.fr/2018/02/maximizing-resource-utilization-with.html.
If you're in
Has anyone experience of working with local disks or volumes with
physical/logical block sizes of 4K rather than 512?
There seems to be KVM support for this
(http://fibrevillage.com/sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks)
but I could not see how to get the appropriate
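For reference, a hypothetical libvirt-level sketch (device path, target name and domain name are placeholders; as far as I can see there is no Nova flavor or image property that exposes this directly): the <blockio> element on the disk definition sets the sector sizes the guest sees.

```shell
# Write a disk definition presenting 4K logical/physical sectors to the
# guest; /dev/sdb and the target device are illustrative only.
cat > disk-4k.xml <<'EOF'
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
  <blockio logical_block_size='4096' physical_block_size='4096'/>
</disk>
EOF
# Attach to a running domain (domain name is a placeholder):
# virsh attach-device some-domain disk-4k.xml --persistent
```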
Allison,
In the past, there has been some confusion on the ML2 driver since many of the
drivers are both ML2 based and have specific drivers. Had you an approach in
mind for this time?
It does mean that the results won’t be directly comparable but cleaning up this
confusion would seem worth
Interesting debate, thanks for raising it.
Would we still need the same style of summit forum if we have the OpenStack
Community Working Gathering? One thing I have found with the forum running all
week throughout the summit is that it tends to draw audience away from other
talks so maybe we
Deleting all snapshots would seem dangerous though...
1. I want to reset my instance to how it was before
2. I'll just do a snapshot in case I need any data in the future
3. rebuild
4. oops
Tim
-Original Message-
From: Ben Nemec
Reply-To: "OpenStack Development
We’re using a combination of cASO (https://caso.readthedocs.io/en/stable/) and
some low level libvirt fabric monitoring. The showback accounting reports are
generated with merging with other compute/storage usage across various systems
(HTCondor, SLURM, ...)
It would seem that those who needed
Matt,
To add another scenario and make things even more difficult (sorry!), if the
original volume has snapshots, I don't think you can delete it.
Tim
-Original Message-
From: Matt Riedemann
Reply-To: "OpenStack Development Mailing List (not for usage
Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified
limit library
This is certainly a feature that will make Public Cloud providers very happy :)
On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
Sorry, I remember more detail
Labels can be one approach where you mount by disk label rather than device
Creating the filesystem with the label
# mkfs -t ext4 -L testvol /dev/vdb
/etc/fstab then contains
LABEL=testvol /mnt ext4 noatime,nodiratime,user_xattr 0 0
You still need to be careful to not attach data disks
If you want to hide the VM signature, you can use the img_hide_hypervisor_id
property
(https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html)
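A sketch of setting it (the image name is a placeholder):

```shell
openstack image set --property img_hide_hypervisor_id=true my-image
# Confirm the property is recorded on the image:
openstack image show my-image -c properties
```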
Tim
-Original Message-
From: jon
Date: Tuesday, 16 January 2018 at 21:14
To: openstack-operators
BTW, this is also an end user visible change as the VMs would see the disk move
from /dev/vda to /dev/sda. Depending on how the VMs are configured, this may
cause issues also for the end user.
Tim
From: Jean-Philippe Méthot
Date: Thursday, 11 January 2018 at 08:37
We use Magnum at CERN to provide Kubernetes, Mesos and Docker Swarm on demand.
We’re running over 100 clusters currently using Atomic.
More details at
https://cds.cern.ch/record/2258301/files/openstack-france-magnum.pdf
Tim
From: Sergio Morales Acuña
Date: Wednesday, 22
Tom,
All the best for the future. I will happily share a beverage or two in Sydney,
reflect on the early days and toast the growth of the community that you have
been a major contributor to.
Tim
-Original Message-
From: Tom Fifield
Date: Wednesday, 4 October 2017
We use rebuild when reverting with snapshots. Keeping the same IP and hostname
avoids some issues with Active Directory and Kerberos.
Tim
-Original Message-
From: Clint Byrum
Date: Tuesday, 3 October 2017 at 19:17
To: openstack-operators
Has anyone had experience setting up a cluster of VM guests running Pacemaker /
Corosync? Any recommendations?
Tim
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
+1 for Boris’ suggestion. Many of us use Rally to probe our clouds and have
significant tooling behind it to integrate with local availability reporting
and trouble ticketing systems. It would be much easier to deploy new
functionality such as you propose if it was integrated into an existing
Allison,
Great to see in so many languages (although an incorrect flag seems to have
been used for entering English ☺)
When I get to the deployments, I’m registered with 0 currently. In the past,
there was some ‘carry forward’ from previous surveys. I’m fine to put the data
in again if this
One scenario would be to change the default and allow the exceptions to opt out
(e.g. mysql-pymysql (
Tim
On 23.05.17, 19:08, "Matt Riedemann" wrote:
On 5/23/2017 11:38 AM, Sean McGinnis wrote:
>>
>> This sounds like something we could fix completely by
The Large Deployment Team meeting for ‘Plan the Week’
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18404/large-deployment-team-planning-the-week)
seems to be on Wednesday at 11h00 and the Recapping the week is the next slot
at 11h50
I think there will be quite a few ops folk… I can promise at least one ☺
Blair and I can also do a little publicity in
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18751/future-of-hypervisor-performance-tuning-and-benchmarking
which is on Tuesday.
Tim
From: Rochelle
:48, "Thierry Carrez" <thie...@openstack.org> wrote:
Tim Bell wrote:
> Do you know if the Forum sessions will be video’d?
As far as I know they won't (same as old Design/Ops summit sessions).
It's difficult to produce, with people all over the room and not
Tom,
Do you know if the Forum sessions will be video’d?
Tim
On 13.04.17, 05:52, "Tom Fifield" wrote:
Hello all,
The schedule for our the Forum is online:
https://www.openstack.org/summit/boston-2017/summit-schedule/#track=146
==> Session
That looks great.
Do we have dates for the Ops meetup?
Tim
From: Chris Morgan
Date: Tuesday, 11 April 2017 at 17:53
To: openstack-operators
Subject: [Openstack-operators] openstack operators meetups team meeting
2017-4-11
Today's
Some combination of spot/OPIE and Blazar would seem doable as long as the
resource provider reserves capacity appropriately (i.e. spot resources >> Blazar
committed along with no non-spot requests for the same aggregate).
Is this feasible?
Tim
On 04.04.17, 19:21, "Jay Pipes"
For those that are interested in nested quotas, there is proposal on how to
address this forming in openstack-dev (and any comments on the review should be
made to https://review.openstack.org/#/c/363765).
This proposal has the benefits (if I can summarise) that
- Quota limits will be
> On 7 Mar 2017, at 11:52, Sean Dague wrote:
>
> One of the things that came out of the PTG was perhaps a new path
> forward on hierarchical limits that involves storing of limits in
> keystone doing counting on the projects. Members of the developer
> community are working
+1 for the WG summary and sharing priorities.
Equally, exploring how we can make use of common collaboration tools for all WG
would be beneficial.
There is much work to do to get the needs translated to code/doc/tools and it
would be a pity if we are not sharing to the full across WGs due to
I’m getting a too many redirect error on the user survey link.
Tim
On 01.02.17, 20:21, "Tom Fifield" wrote:
Hi all,
If you run OpenStack, please take a few minutes to respond to the latest
User Survey or pass it along to your friends.
This is the
bloomberg.net>
Date: Friday, 27 January 2017 at 16:49
To: openstack-operators <openstack-operators@lists.openstack.org>, Tim Bell
<tim.b...@cern.ch>
Subject: Re: [Openstack-operators] Delegating quota management for all projects
to a user without the admin role?
I did some deep ex
I think you could do something with policy.json to define which operations a
given role would have access to. We have used this to provide the centre
operator with abilities such as stop/start. The technical details are described
at
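As a minimal sketch (the rule names below follow Nova's policy targets of that era, but the custom "operator" role name is an assumption; check the defaults for your release):

```shell
# Grant a custom "operator" role start/stop on instances via a policy.json
# fragment, to be merged into the deployed policy file.
cat > policy-fragment.json <<'EOF'
{
    "os_compute_api:servers:start": "rule:admin_or_owner or role:operator",
    "os_compute_api:servers:stop": "rule:admin_or_owner or role:operator"
}
EOF
# Sanity-check that the fragment is valid JSON before merging it in:
python3 -c 'import json; json.load(open("policy-fragment.json"))'
```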
20:24
To: Tim Bell <tim.b...@cern.ch>
Cc: "m...@mattjarvis.org.uk" <m...@mattjarvis.org.uk>, openstack-operators
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] What would you like in Pike?
Hi Tim,
We did wonder in last week's meeting whether qu
we usually manage to aggregate
them in reporting, but being able to show usage by a parent and
all its children would be very useful
3) Quota improvements -- this is important but we've learned to deal
with it
-Jon
On Sat, Jan 14, 2017 at 10:10:40AM +, Tim Bell wrote:
:The
> On 16 Jan 2017, at 18:00, Matt Riedemann <mrie...@linux.vnet.ibm.com> wrote:
>
> On 1/14/2017 4:10 AM, Tim Bell wrote:
>>
>> Manual interventions are often required to sync the current usage
>> with the OpenStack view
>
> See this spec in nova in
There are a couple of items which have not been able to make it to the top
priority for recent releases which would greatly simplify our day to day work
with the users and make the cloud more flexible. The background use cases are
described in
Can we also include a credit for the company that hosted the mid-cycle?
Tim
From: Lauren Sell
Date: Monday, 2 January 2017 at 15:01
To: Melvin Hillsman
Cc: openstack-operators
Subject: Re:
On 4 Nov 2016, at 06:31, Sam Morrison
> wrote:
On 4 Nov. 2016, at 1:33 pm, Emilien Macchi
> wrote:
On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison
> wrote:
> On 12 Oct 2016, at 07:00, Matt Riedemann wrote:
>
> The current form of the nova os-diagnostics API is hypervisor-specific, which
> makes it pretty unusable in any generic way, which is why Tempest doesn't
> test it.
>
> Way back when the v3 API was a thing for
It is a difficult calculation for me to make:
- Shared services like Facilities/Hardware repair/Network…
- Expectation for new functions since upstream has made them available
- The user support effort has increased significantly as more users
come online
-
Thanks. How’s the storage handled ?
We’re seeing slow I/O on local storage (which is also limited on space) and
latencies with Ceph for block storage.
Tim
From: <medbe...@gmail.com> on behalf of David Medberry <openst...@medberry.net>
Date: Friday 2 September 2016 at 22:18
To: Tim
Has anyone had experience running ElasticSearch on top of OpenStack VMs ?
Are there any tuning recommendations ?
Thanks
Tim
On 26 Aug 2016, at 17:44, Andrew Laski
> wrote:
On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:
On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski
> wrote:
On Fri, Aug 26, 2016, at 03:44
ded but from the user
perspective, the expectation would be that the quota is available once the user
request is completed (i.e. shelved). However, the resources are still being
used at this point until the offload time is reached.
Tim
On Fri, Aug 19, 2016 at 3:50 AM, Tim Bell
<t
x" <j...@csail.mit.edu> wrote:
On Thu, Aug 18, 2016 at 03:24:28PM +, Tim Bell wrote:
:
:We’re having a look at VM shelving for the CERN community and struggling
to find a motivation for a private cloud user to shelve their instances (and
free up resources they may be only us
We’re having a look at VM shelving for the CERN community and struggling to
find a motivation for a private cloud user to shelve their instances (and free
up resources they may be only using infrequently).
The problem is that shelved instances seem to still be included in the user’s
quota.
Has anyone a good recipe for improving I/O performance when the hypervisor has
SSDs ?
The configuration is CentOS 7 for guest and hypervisor with KVM.
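One libvirt-driver knob that seems relevant (a sketch, not a recommendation; the values need benchmarking on the actual hardware) is the disk cache mode in nova.conf:

```shell
# "none" bypasses the host page cache (O_DIRECT), which often behaves best
# on local SSDs; "writeback" can look faster but risks data loss on power
# failure. This writes an illustrative snippet, not a live nova.conf.
cat > nova-io-snippet.conf <<'EOF'
[libvirt]
disk_cachemodes = file=none,block=none
EOF
```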
Tim
When I last looked, Blazar allows you to reserve instances for a given time. An
example would be
- We are organizing a user training session for 100 physicists from Monday to
Friday
- We need that each user is able to create 2 VMs within a single shared project
(as the images etc. are set up
There are also some tools in the OSOps repository (Nova for example has
https://github.com/openstack/osops-tools-generic/tree/master/nova)
Tim
From: Abel Lopez
Date: Thursday 23 June 2016 at 00:03
To: Gilles Mocellin
Cc: openstack-operators
On 14/06/16 18:28, "Edgar Magana" wrote:
>Second that one! Feels like one of the best options, we are moving towards
>that direction.
>
>Edgar
>
For completeness, Rackspace had a project called Repose which did rate
limiting. Core is at
On 14/06/16 18:00, "Matt Riedemann" wrote:
>On 6/14/2016 10:14 AM, Kris G. Lindgren wrote:
>> Cern is running ceilometer at scale with many thousands of compute
>> nodes. I think their blog goes into some detail about it [1], but I
>> don’t have a direct link to it.
Sorry I was not able to make it. To clarify on Blazar for CERN
(http://eavesdrop.openstack.org/meetings/scientific_wg/2016/scientific_wg.2016-06-08-06.59.log.html),
we’re interested in the overall problem of how to maximize the utilization of
the resources in the private cloud rather than a
Slight concern on how to deploy on a RHEL system base as software collections
are non-trivial.
If we can keep the client to be still python 2.X compatible, that would be a
significant help.
Getting good development productivity/deployments should probably outweigh
these concerns though…
Tim
On 25/05/16 17:36, "Sean Dague" <s...@dague.net> wrote:
>On 05/23/2016 10:24 AM, Tim Bell wrote:
>>
>>
>> Quick warning for those who are dependent on the "user_id:%(user_id)s"
>> syntax for limiting actions by user. According to
>>
On 23/05/16 17:02, "Sean Dague" <s...@dague.net> wrote:
>On 05/23/2016 10:24 AM, Tim Bell wrote:
>>
>>
>> Quick warning for those who are dependent on the "user_id:%(user_id)s"
>> syntax for limiting actions by user. According to
>>
Quick warning for those who are dependent on the "user_id:%(user_id)s" syntax
for limiting actions by user. According to
https://bugs.launchpad.net/nova/+bug/1539351, this behavior was apparently not
intended according to the bug report feedback. The behavior has changed from v2
to v2.1 and
On 13/05/16 19:48, "Joshua Harlow" wrote:
>Matthew Thode wrote:
>> On 05/12/2016 04:04 PM, Joshua Harlow wrote:
>>> Hi there all-ye-operators,
>>>
>>> I am investigating how to help move godaddy from rpms to a
>>> container-like solution (virtualenvs, lxc, or docker...)
On 12/05/16 06:22, "Stig Telfer" wrote:
>Hi All -
>
>Jim Rollenhagen from the Ironic project has just posted a great summit report
>of Ironic team activities on the openstack-devs mailing list[1], which
>included this item which will be of interest to the
Does anyone see a good way to fix this to report KVM or QEMU/KVM ?
I guess the worry is whether this would count as a bug fix or an incompatible
change.
Tim
On 11/05/16 17:51, "Kashyap Chamarthy" wrote:
>On Tue, May 03, 2016 at 02:27:00PM -0500, Sergio Cuellar Valdes
Following the discussions last week, I have put down a blog on how CERN does
its resource management for the accounting team on the Scientific Working
Group. The areas we looked at were:
1. Lustre-as-a-Service in Manila
2. Bare metal management
3. Accounting
4. User Stories and
The overall requirements are being reviewed in
https://etherpad.openstack.org/p/AUS-ops-Nova-maint. A future tool may make its
way in OSOps but I think we should keep the requirements discussion distinct
from the available community tools and their tool repository.
Tim
From: Joseph Bajin
Looking at projects like Barbican in the
https://www.openstack.org/software/releases/liberty/components/barbican, they
seem to have met some conditions which are currently marked as ‘no’.
What is the mechanism to query these settings (either manual or automatic) ?
As an example, there is
On 21/03/16 17:24, "Markus Zoeller" wrote:
>Hello dear ops,
>
>I'd like to make you aware of discussion [1] on the openstack-dev ML.
>I'm in the role of maintaining the bug list in Nova and was looking
>for a way to gain an overview again over our ~950 open bug reports.
>My
From: joe >
Date: Monday 7 March 2016 at 07:53
To: openstack-operators
>
Subject: Re: [Openstack-operators] RAID / stripe block storage volumes
We ($work) have been
On 04/03/16 16:40, "gordon chung" wrote:
>> One part of the documentation set that we were missing was a guide to how to
>> migrate from ceilometer to a ceilometer/gnocchi combination (which I
>> understand is the ultimate architecture). We would like to migrate the
>>
There is a better explanation in the OpenStack docs :-) with an example of how
to get 50% slower (not something my users ask for often).
cpu_quota & cpu_period seems to give it according to
http://docs.openstack.org/admin-guide-cloud/compute-flavors.html
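Applied as flavor extra specs, that looks roughly like this (the flavor name is a placeholder; quota/period = 10000/20000 gives the 50% cap):

```shell
nova flavor-key m1.capped set quota:cpu_quota=10000
nova flavor-key m1.capped set quota:cpu_period=20000
```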
Tim
On 04/03/16 15:59, "Jonathan
From: Matt Joyce >
Date: Thursday 3 March 2016 at 18:35
To: Robert Starmer >
Cc: openstack-operators
>,
We’re starting to have a look at gnocchi in order to address the large storage
volumes. We plan on using the Ceph backend for storage.
One part of the documentation set that we were missing was a guide to how to
migrate from ceilometer to a ceilometer/gnocchi combination (which I understand
Just to check, does OpenStack run on a Raspberry Pi ? Could cause some negative
comments if it was
not compatible/sized for a basic configuration.
Tim
On 01/03/16 20:41, "Thomas Goirand" wrote:
>On 03/01/2016 11:30 PM, Tom Fifield wrote:
>> Excellent, excellent.
>>
>>
We’ve generally done staged upgrades (process documented in
http://openstack-in-production.blogspot.com). For a smoother migration, we have
split the changes to the service catalog to the latest API versions (e.g.
Keystone V2 to V3) from the code upgrade. This allows you to have an easier
"How fast" can be measured in a variety of ways:
- How quickly a VM can be spawned and become available ?
- How quickly it runs once it is available ? How many X workunits/second can be
achieved with N cores ?
Rally will help with the 1st case. For the second case, it is important to
choose an
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 24 November 2015 17:56
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] Operational Director?
>
> Edgar Magana wrote:
> > Is the Foundation aware of that? I mean you
Multiple meetups in parallel does make it more difficult to get the PTLs and
product working group involved. There have been many benefits from their work
with operators and defining the roadmaps.
It may be that not everyone can attend but there is also the opportunity for
those who have
On 25/10/15 23:57, "Daniel P. Berrange" <berra...@redhat.com> wrote:
>On Sat, Oct 24, 2015 at 07:42:53AM +, Tim Bell wrote:
>> Always having ephemeral disks as raw might make sense given that there
>> is going to be little overlap with others (and
> -Original Message-
> From: Kashyap Chamarthy [mailto:kcham...@redhat.com]
> Sent: 23 October 2015 12:37
> To: Tim Bell <tim.b...@cern.ch>
> Cc: Marc Heckmann <marc.heckm...@ubisoft.com>; openstack-
> operat...@lists.openstack.org
> Subject: Re: [Openstack-operato
Has anyone had experience with setting up Nova with KVM so it has raw
ephemeral disks but qcow2 images for the VMs ? We've got very large
ephemeral disks and could benefit from the performance of raw volumes for
this.
Tim
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 07 October 2015 13:02
> To: Sean Dague
> Cc: OpenStack Development Mailing List (not for usage questions)
> ; openstack-
> operat...@lists.openstack.org
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 07 October 2015 13:25
> To: Tim Bell <tim.b...@cern.ch>
> Cc: Sean Dague <s...@dague.net>; OpenStack Development Mailing List
> (not for usage questions) <openstack-..
CERN do the same…. The memcache functions on keystone are very useful for
scaling it up.
Tim
From: Matt Fischer [mailto:m...@mattfischer.com]
Sent: 28 September 2015 18:51
To: Curtis
Cc: openstack-operators@lists.openstack.org; Jonathan Proulx
There seems to be a lot of overlap with the user survey which has just
finished.
Feel free to get in touch on the user-commit...@lists.openstack.org if you
have questions to suggest to the survey or would like specific queries to be
run on the anonymised data.
There is a significant risk of
Piet,
There seems to be a lot of overlap with the user survey which is currently
running. This also includes questions for those running nova network and
these are also reviewed in the operators summit sessions and mid-cycle
meetups.
There is a significant risk of over surveying the operator
I’d like Joe there too :) Can we re-schedule that one ?
Tim
From: Joe Topjian [mailto:j...@topjian.net]
Sent: 23 September 2015 06:03
To: Tom Fifield
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Tokyo Summit Ops Design Summit Tracks -
The instance action list is a very useful tool for our end users to see what
has gone on with their VMs. Working with people in teams means that
sometimes one person does not get to hear what another one has done and so
raises helpdesk cases like "one of the VMs was rebooted".
Are there
We could probably find one or more of the CERN operations team to come along
too from the 3 OpenStack clouds we’ve got here.
Tim
From: Matt Jarvis [mailto:matt.jar...@datacentred.co.uk]
Sent: 17 September 2015 15:09
To: Neil Jerram
Cc:
Would an OpenStack cinder volume meet your needs ?
Tim
From: David Arroyo [mailto:dr...@aqwari.net]
Sent: 29 August 2015 17:17
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Passing entire disks to instances
Hello,
I would like to pass entire disks to my openstack
Improving the QA test suite in multi-region would allow us to catch cases where
a change has been made without considering the implications of multi-region
support.
It would not however identify cases such as missing command line options. Given
that the test suite identifies a set of
FYI, a few suggestions on tuning CPU bound workloads with KVM at
http://openstack-in-production.blogspot.fr/2015/08/kvm-and-hyper-v-comparison-for-high.html.
The Kilo enhancements looks to be a great help.
Tim
Feel free to give input on the Mitaka proposal.
Tim
-Original Message-
From: Jonathan Bryce [mailto:jbr...@jbryce.com]
Sent: 09 July 2015 20:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack] Rescinding the M name decision
Following on from the HPC Ops meetup in Vancouver
(https://etherpad.openstack.org/p/YVR-ops-hpc), Alvaro has been working on the
spot market proposal for the nova backlog.
Comments are welcome on https://review.openstack.org/#/c/104883/
Tim
-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 05 June 2015 21:06
To: openstack-operators
Subject: Re: [Openstack-operators] [Tags] Tags Team Repo our first tags!
Excerpts from Tim Bell's message of 2015-06-05 11:34:02 -0700:
But if there is one package
But if there is one package out of all of the OS options, does that make it true
or false ? Or do we have a rule that says a 1 means that at least CentOS and
Ubuntu are packaged ?
I remain to be convinced that a 0 or 1 can be achieved within the constraints
that we need something which is useful
I had understood that CentOS 7.1 qemu-kvm has RBD support built-in. It was not
there on 7.0 but http://tracker.ceph.com/issues/10480 implies it is in 7.1.
You could check on the centos mailing lists to be sure.
Tim
From: Cynthia Lopes [mailto:clsacrame...@gmail.com]
Sent: 02 June 2015 10:57
-Original Message-
From: Christian Schwede [mailto:cschw...@redhat.com]
Sent: 28 May 2015 20:03
To: George Shuklin; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] 100% CPU and hangs if syslog is restarted
On 28.05.15 18:56, George Shuklin wrote:
Today
Joe,
Thanks for the notes.
We had a productive discussion with the Glance folk on how to share images
across clouds
(https://libertydesignsummit.sched.org/event/6b4a5dbd177cde2aad7a9927a82534d0#.VWDLPpOqqko)
and we’ll be working on that spec.
We also had some forward looking discussions with