Hello folks,
Can digest delivery please be enabled on the new openstack list? I was so
pleased to hear this was getting moved, specifically because of the missing
digest functionality on launchpad, but now Mailman is thwarting my attempts
to select digest delivery...
The list administrator has
Great, thank you.
On 06/08/2013 11:20 AM, Stefano Maffulli stef...@openstack.org wrote:
On Mon 05 Aug 2013 05:59:15 PM PDT, Blair Bethwaite wrote:
However, I think it still needs some tweaking... With digest mode
enabled I'm getting 10+ digests per day, often with just a few messages in
them
I can confirm https://github.com/cloudbase/cloudbase-init works with Server
2012 (haven't tried anything else yet) on a Havana cloud. Randomly
generates a password on first boot, uses the public key to encrypt it, PUTS
it back to the metadata service, can be queried and decrypted by the nova
On 30 June 2016 at 05:17, Gustavo Randich wrote:
>
> - other?
FWIW, the other approach that might be suitable (depending on your
project/tenant isolation requirements) is simply using a flat provider
network (or networks, i.e., VLAN per project) within your existing
On 29 March 2017 at 18:10, Tom Fifield <t...@openstack.org> wrote:
> On 29/03/17 14:10, Blair Bethwaite wrote:
>> Just to confirm - the Forum will run Monday through Thursday, and
>> presumably the session scheduling will be flexible to meet the needs of
>> the leads/
Hi Melvin,
Just to confirm - the Forum will run Monday through Thursday, and
presumably the session scheduling will be flexible to meet the needs of the
leads/facilitators?
Cheers,
b1airo
On 21 Mar. 2017 6:56 am, "Melvin Hillsman" wrote:
> Hey everyone!
>
> We have made
On 20 November 2014 05:25, openstack-dev-requ...@lists.openstack.org wrote:
--
Message: 24
Date: Wed, 19 Nov 2014 10:57:17 -0500
From: Doug Hellmann d...@doughellmann.com
To: OpenStack Development Mailing List (not for usage questions)
Hi all,
I've just been doing some user consultation and pondering a case for
use of the Qemu Guest Agent in order to get quiesced backups.
In doing so I found myself wondering why on earth I need to set an
image property in Glance (hw_qemu_guest_agent) to toggle such a thing
for any particular
Hi there,
We've been investigating some guest filesystem issues recently and noticed
what looks like a slight inconsistency in base image handling in
block-migration. We're on Grizzly from the associated Ubuntu cloud archive
and using qcow on local storage.
What we've noticed is that after
Hi all,
I'm trying to wrap my head around whether it's possible with the existing
scheduler filters, to put a limit per host on the number of instances per
instance-type/flavor? I don't think this is possible with the existing
filters or weights, but it seems like a fairly common requirement.
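The closest thing I'm aware of in stock Nova is NumInstancesFilter, which caps the *total* instance count per host rather than per flavor - not quite what you want, but worth knowing about. A minimal nova.conf sketch (option names from roughly the Grizzly/Havana era; check your release):

```ini
[DEFAULT]
# cap every host at 10 instances regardless of flavor
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,NumInstancesFilter
max_instances_per_host = 10
```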
Greetings fellow operators,
I'm excited to be moderating the Vancouver ops summit session on hypervisor
tuning (etherpad over here:
https://etherpad.openstack.org/p/YVR-ops-hypervisor-tuning). I hope we can
gather some useful new information for the ops guide and perhaps even share
a few
Hi all,
Question up-front:
Do the performance characteristics of modern PCIe-attached SSDs
invalidate/challenge the old "don't overcommit memory with KVM" wisdom
(recently discussed on this list and at meetups and summits)? Has
anyone out there tried to test this?
Long-form:
I'm currently
, but that it needed testing.
I'm happy to help test this out. Sounds like the results could be part of a
Tokyo talk :P
Warren
On Mon, Jun 29, 2015 at 9:36 AM, Blair Bethwaite blair.bethwa...@gmail.com
wrote:
Hi all,
Question up-front:
Do the performance characteristics of modern PCIe
Hi all,
Quick reminder that the scientific-wg IRC weekly meeting is on in <6
hours in #openstack-meeting! Details below.
We're planning to talk about status and use-cases for Blazar, plus
continued discussion/brain-storming on use-cases pertaining to our
focus areas. On the latter point this
Finally circled back to this thread...
Joe - those are great notes!
On 12 May 2016 at 02:51, Joe Topjian wrote:
> * I found that I didn't have to use EFI-based images. I wonder why that is?
Yeah, we've never run into this as a requirement either.
Peter - can you clarify?
Hi Nikhil,
2000UTC might catch a few kiwis, but it's 6am everywhere on the east
coast of Australia, and even earlier out west. 0800UTC, on the other
hand, would be more sociable.
On 26 May 2016 at 15:30, Nikhil Komawar wrote:
> Thanks Sam. We purposefully chose that time
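The timezone arithmetic is easy to check with Python's zoneinfo (the date below is just an illustrative mid-year meeting day):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 2000 UTC is 6am the next morning on Australia's east coast
# (AEST, UTC+10, no DST in June)
meeting = datetime(2016, 6, 2, 20, 0, tzinfo=timezone.utc)
local = meeting.astimezone(ZoneInfo("Australia/Sydney"))
print(local.strftime("%H:%M"))  # → 06:00
```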
Hi Tim,
Firstly, thank-you for reading the logs and following up - it's great
to have further discussion generated!
On 2 June 2016 at 03:07, Tim Randles wrote:
> Sorry I wasn't able to make yesterday's Scientific Working Group IRC
> meeting. The discussion looked very
Hi all,
Currently the scientific-wg is meeting weekly on irc with alternating times
week to week - 2100 UTC Tues this week and 0700 UTC Weds next week. The
basic idea being to have both US and non-US friendly times.
The former time is pretty well attended but the latter is somewhat hit and
miss
Hi all,
numad provides dynamic and advisory NUMA tuning. It monitors NUMA
topology and cpu/memory usage within a system and dynamically tunes
NUMA and CPU affinity and/or provides process pre-placement advice to
management tools like libvirt. If you've never tried it, it works very
well and can
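If you want libvirt to consult numad per guest, placement='auto' in the domain XML is the hook - libvirt queries numad for placement advice when the guest starts. Sketch of just the relevant elements:

```xml
<!-- domain XML excerpt: 'auto' placement asks numad where to pin -->
<vcpu placement='auto'>8</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
```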
Hi all,
Apologies if you receive multiple copies. I've BCC'd everyone who left
addresses in the etherpads.
Firstly, thank-you for attending the inaugural scientific-wg meeting!
The turnout and discussion was excellent. If you're interested in
being an active member or observer of the
On 1 August 2016 at 13:30, Marcus Furlong wrote:
> Looks like there is a bug open which suggests that it should be using
> RPC calls, rather than commands executed over ssh:
>
> https://bugs.launchpad.net/nova/+bug/1459782
I agree, no operator in their right mind wants to
ng whether anyone else has been down this path yet?
Cheers,
On 20 July 2016 at 12:57, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> Thanks for the confirmation Joe!
>
> On 20 July 2016 at 12:19, Joe Topjian <j...@topjian.net> wrote:
>> Hi Blair,
>>
>>
Sounds like a recipe for confusion?
On 1 August 2016 at 10:23, Steven Dake (stdake) wrote:
>
>
> On 7/31/16, 7:13 AM, "Jay Pipes" wrote:
>
>>On 07/29/2016 11:35 PM, Steven Dake (stdake) wrote:
>>> Hey folks,
>>>
>>> In Kolla we have a significant bug in
On 4 August 2016 at 12:48, Sam Morrison wrote:
>
>> On 4 Aug 2016, at 3:12 AM, Kris G. Lindgren wrote:
>>
>> We do something similar. We give everyone in the company an account on the
>> internal cloud. By default they have a user- project. We have
le currently being replicated). Since 2.7, Swift takes care of that:
> https://github.com/openstack/swift/blob/master/CHANGELOG#L226
>
>
>
> Le Mercredi 20 Juillet 2016 10:17 CEST, Blair Bethwaite
> <blair.bethwa...@gmail.com> a écrit:
>
>> Hi all,
>>
>> As
w and have had
> several users report success.
>
> Thanks,
> Joe
>
> On Tue, Jul 19, 2016 at 5:06 PM, Blair Bethwaite <blair.bethwa...@gmail.com>
> wrote:
>>
>> Hilariously (or not!) we finally hit the same issue last week once
>> folks actually started trying
Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
> :Jon,
> :
> :Awesome, thanks for sharing. We've just run into an issue with SRIOV
> :VF passthrough that sounds like it might be the same problem (device
> :disappearing after a reboot), but haven't yet investigated deeply
Hi all,
As per the subject, wondering where these files come from, e.g.,:
root@stor010:/srv/node/sdc1/objects# ls -la
./109794/359/6b389b24749b7046344ffd2a42aab359
total 1195784
drwxr-xr-x 2 swift swift 4096 Jun 8 04:11 .
drwxr-xr-x 3 swift swift 53 May 22 05:05 ..
-rw--- 1
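Those directories are the object ring's on-disk layout: /srv/node/<dev>/objects/<partition>/<hash-suffix>/<md5-of-name>/, where the last component is the (salted) md5 of /account/container/object - note your 359 directory is just the last three hex digits of 6b...aab359. A rough sketch of the scheme (the real code is swift.common.utils.hash_path and the ring; the suffix salt and partition power below are made-up placeholders):

```python
import hashlib
import struct

def on_disk_location(account, container, obj,
                     hash_path_suffix=b"changeme", part_power=18):
    """Sketch of how Swift derives objects/<part>/<suffix>/<hash>/ for an
    object; hash_path_suffix and part_power are per-cluster settings."""
    name_md5 = hashlib.md5(
        b"/" + b"/".join(x.encode() for x in (account, container, obj))
        + hash_path_suffix).hexdigest()
    # partition = top part_power bits of the first 4 bytes of the hash
    part = struct.unpack_from(">I", bytes.fromhex(name_md5))[0] >> (32 - part_power)
    return "objects/%d/%s/%s" % (part, name_md5[-3:], name_md5)

print(on_disk_location("AUTH_test", "photos", "cat.jpg"))
```

On a live storage node, `swift-object-info <path-to-.data-file>` will map a file back to its account/container/object for you.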
Hi all,
Just pondering summit talk submissions and wondering if anyone else
out there is interested in participating in a HPFS panel session...?
Assuming we have at least one person already who can cover direct
mounting of Lustre into OpenStack guests then it'd be nice to find
folks who have
Hi all,
Scientific-WG regular meeting is on soon, draft agenda below and at
https://wiki.openstack.org/wiki/Scientific_working_group.
2016-07-12 2100 UTC in channel #openstack-meeting
# Review of Activity Areas and opportunities for progress
## Bare metal
### Networking
Hi Roland -
GUTS looks cool! But I took Michael's question to be more about
control plane data than end-user instances etc...?
Michael - If that's the case then you probably want to start with
dumping your present Juno DBs, importing into your Mitaka test DB and
then attempting the migrations to
Hi Álvaro, hi David -
NB: adding os-ops.
David, we have some real-time Lustre war stories we can share and
hopefully provide some positive conclusions to come Barcelona. I've
given an overview of what we're doing below. Are there any specifics
you were interested in when you raised Lustre in the
Jon,
Awesome, thanks for sharing. We've just run into an issue with SRIOV
VF passthrough that sounds like it might be the same problem (device
disappearing after a reboot), but haven't yet investigated deeply -
this will help with somewhere to start!
By the way, the nouveau mention was because
Hi Jon,
Do you have the nouveau driver/module loaded in the host by any
chance? If so, blacklist, reboot, repeat.
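For the record, the blacklist bit is just a modprobe drop-in (then rebuild the initramfs and reboot):

```ini
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
```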
Whilst we're talking about this. Has anyone had any luck doing this
with hosts having a PCI-e switch across multiple GPUs?
Cheers,
On 6 July 2016 at 23:27, Jonathan D. Proulx
Wellcome Trust Genome Campus, Hinxton, Cambridge, CB10 1SD, UK
>>> Email: da...@ebi.ac.uk
>>>
>>>> On 28 Jun 2016, at 13:42, <alexander.di...@stfc.ac.uk>
>>>> <alexander.di...@stfc.ac.uk> wrote:
>>>>
>>>> 0900 would work bette
Hi all -
We have a meeting coming up in about 12 hours:
2017-02-15 0900 UTC in channel #openstack-meeting
This is substantively a repeat of last week's agenda for alternate timezones
- Boston Declaration update from Martial
- Hypervisor tuning update from Blair
- Blair's experiences with RoCE
Hi Tim,
We did wonder in last week's meeting whether quota management and nested
project support (particularly which flows are most important) would be a
good session for the Boston Forum...? Would you be willing to lead such a
discussion?
Cheers,
On 19 January 2017 at 19:59, Tim Bell
We discussed Blazar fairly extensively in a couple of recent
scientific-wg meetings. I'm having trouble searching out the right irc
log to support this but IIRC the problem with Blazar as is for the
typical virtualised cloud (non-Ironic) use-case is that it uses an
old/deprecated Nova API
Following on from Edmund's issues... People talking about doing this
typically seem to cite cgroups as the way to avoid CPU and memory
related contention - has anyone been successful in e.g. setting up
cgroups on a nova qemu+kvm hypervisor to limit how much of the machine
nova uses?
On 1
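On systemd hosts libvirt already puts guests under machine.slice, so one approach that avoids hand-rolled cgroups is a slice drop-in (the limits here are purely illustrative - size them for your hardware):

```ini
# /etc/systemd/system/machine.slice.d/50-nova-limits.conf
[Slice]
# leave headroom for the host OS and OpenStack agents
CPUQuota=1400%
MemoryLimit=200G
```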
s session or two. Also I think it could be discussed in the
> Nova section. As a stretch, we could cover on lightning talks. There is also
> the Friday work sessions. So I think plenty of options. We also have at
> least three placeholder sessions.
>
>
> On Oct 4, 2016 11:28 PM,
Nice! But I'm curious, why the need to migrate?
On 5 October 2016 at 13:29, Xav Paice wrote:
> On Wed, 2016-10-05 at 13:28 +1300, Xav Paice wrote:
>> On Tue, 2016-10-04 at 17:48 -0600, Curtis wrote:
>> > Maybe you have someone on staff who loves writing lua (for haproxy)? :)
> SOC = Path traverses a socket-level link (e.g. QPI)
> PHB = Path traverses a PCIe host bridge
> PXB = Path traverses multiple PCIe internal switches
> PIX = Path traverses a PCIe internal switch
>
>
> Cheers,
> Andrew
>
>
> Andrew J. Younge
> School of Informa
Hi all,
I've just had a look at this with a view to adding 10 minutes
somewhere on what to do with the hypervisor tuning guide, but I see
the free-form notes on the etherpad have been marked as "Old", so
figure it's better to discuss here first... Could maybe fit under
ops-nova or ops-hardware?
Hi Matt,
At considerable risk of heading down a rabbit hole... how are you defining
"public" cloud for these purposes?
Cheers,
Blair
On 21 September 2016 at 18:14, Matt Jarvis
wrote:
> Given there are quite a few public cloud operators in Europe now, is there
>
Hi Stig,
When you say IB are you specifically talking about link-layer, or more the
RDMA capability and IB semantics supported by the drivers and APIs (so both
native IB and RoCE)?
Cheers,
On 17 Aug 2016 2:28 AM, "Stig Telfer" wrote:
> Hi All -
>
> I’m looking for
> going with this that some of the science clouds share some of the
> attributes above ?
>
> Matt
>
> On 22 September 2016 at 00:40, Blair Bethwaite <blair.bethwa...@gmail.com>
> wrote:
>
>> Hi Matt,
>>
>> At considerable risk of heading down a rabbit ho
sample fails when
checking their ability to communicate with each other. Is there some
magic config I might be missing, did you need to make any PCI-ACS
changes?
Best regards,
Blair
On 16 March 2016 at 07:57, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
>
> Hi Andrew,
>
> On 1
that allows reset of the
guest, is not desirable.
On 13 October 2016 at 04:37, Blair Bethwaite <blair.bethwa...@gmail.com>
wrote:
> Hi all,
>
> Does anyone know whether there is a way to disable the novnc console on a
> per instance basis?
>
> Cheers,
> Bla
Devil's advocate - what is "full enough"? Surely another channel is
essentially free and having flexibility in available timing is of utmost
importance?
On 8 Nov 2016 5:37 PM, "Tony Breeds" wrote:
> On Mon, Nov 07, 2016 at 05:52:43PM +0100, lebre.adr...@free.fr wrote:
>
On 10 Nov 2016 8:56 PM, "Thierry Carrez" wrote:
>
> The issue with this solution is that any "monthly" slot ends up being
> exactly the same as a "weekly" slot: you can't schedule any
> weekly/biweekly meetings at the same time and location. Monthly meetings
> are therefore
Hi Adam,
I agree somewhat, capacity management and growth at scale is something
of a pain. Ceph gives you a hugely powerful and flexible way to manage
data-placement through crush but there is very little quality info
about, or examples of, non-naive crushmap configurations.
I think I understand
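Agreed on the lack of examples. As one non-naive data point, a rule that spreads replicas across racks instead of hosts looks like this in the legacy crushmap syntax (assumes you've defined rack buckets in your hierarchy):

```
rule replicated_racks {
    ruleset 1
    type replicated
    min_size 2
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
```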
or your institution have been implementing some
> bright ideas that take OpenStack into new territory for research computing
> use cases, lets hear it!
>
> Please follow up to me and Blair (Scientific WG co-chairs) if you’re
> interested in speaking and would like to bag a slot.
>
Hi folks,
There's a superuser blog live now detailing OpenStack-related
goings-on at SC this week:
http://superuser.openstack.org/articles/openstack-supercomputing-2016/
Cheers,
--
Blair Bethwaite
Senior HPC Consultant
Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk
Hi all -
In the fashion of JITS (just in time scheduling), we OpenStack+HPC folk are
planning to converge at P.F. Chang's (https://goo.gl/maps/YN8v26CBctp
- West 300 South) at 8.30pm following on from the technical programme
reception.
Hope to see you there!
Cheers,
Blair
<t...@bakeyournoodle.com> wrote:
> On Fri, Nov 11, 2016 at 03:10:04PM +0100, Thierry Carrez wrote:
>> Blair Bethwaite wrote:
>> > On 10 Nov 2016 8:56 PM, "Thierry Carrez" <thie...@openstack.org
>> > <mailto:thie...@openstack.org>> wrote:
>> >>
On 27 October 2016 at 16:02, Jonathan D. Proulx wrote:
> don't put a getty on the TTY :)
Do you know how to do that with Windows? ...you can see the desire for
sandboxing now :-).
--
Cheers,
~Blairo
___
OpenStack-operators
Hi George,
On 27 October 2016 at 16:15, George Mihaiescu wrote:
> Did you try playing with Nova's policy file and limit the scope for
> "compute_extension:console_output": "" ?
No, interesting idea though... I suspect it's actually the
get_*_console policies we'd need to
Lol! I don't mind - Microsoft do support and produce some pretty good
research, I just wish they'd fix licensing!
On 27 October 2016 at 16:11, Jonathan D. Proulx <j...@csail.mit.edu> wrote:
> On Thu, Oct 27, 2016 at 04:08:26PM +0200, Blair Bethwaite wrote:
> :On 27 October 2016 at 16:
Hi all,
Does anyone know whether there is a way to disable the novnc console on a
per instance basis?
Cheers,
Blair
On 5 January 2017 at 19:47, Rui Chen wrote:
> Ah, Adam, got your point, I found two related Nova blueprints that were
> similar with your idea,
> but there are not any activities about them from 2014, I hadn't dive deep
> into these comments,
> you might get some
Hi Adam,
On 5 January 2017 at 08:48, Adam Lawson wrote:
> Just a friendly bump. To clarify, the ideas being tossed around are to host
> QCOW images on each Compute node so the provisioning is faster (i.e. less
> dependency on network connectivity to a shared back-end). I need
Hi Conrad,
On 20 December 2016 at 09:24, Kimball, Conrad wrote:
> · Dedicated instances: an OpenStack tenant can deploy VM instances
> that are guaranteed to not share a compute host with any other tenant (for
> example, as the tenant I want physical
Hi all,
Does anyone have any recommendations for good tools to perform
file-system/tree backups and restores to/from a (Ceph RGW-based)
object store (Swift or S3 APIs)? Happy to hear about both FOSS and
commercial options please.
I'm interested in:
1) tools known to work or not work at all for a
Could just avoid Glance snapshots and indeed Nova ephemeral storage
altogether by exclusively booting from volume with your ITAR volume type or
AZ. I don't know what other ITAR regulations there might be, but if it's
just what JM mentioned earlier then doing so would let you have ITAR and
non-ITAR
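For the boot-from-volume route, a dedicated backend plus a volume type pinned to it is enough to keep ITAR data on separate storage. Backend names, driver, and pool below are placeholders:

```ini
# cinder.conf (illustrative)
[DEFAULT]
enabled_backends = itar-rbd,general-rbd

[itar-rbd]
volume_backend_name = ITAR
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = itar-volumes
```

Then something like `cinder type-create itar && cinder type-key itar set volume_backend_name=ITAR` gives users the type to boot from.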
bug:
> http://tracker.ceph.com/issues/19056
>
> is anyone else hitting this ?
>
> Saverio
>
> 2017-03-27 22:11 GMT+02:00 John Dickinson <m...@not.mn>:
> >
> >
> > On 27 Mar 2017, at 4:39, Blair Bethwaite wrote:
> >
> >> Hi all,
> >>
Hi all -
We have a Scientific WG IRC meeting coming up in a few hours (Wednesday at
0900 UTC) in channel #openstack-meeting. All welcome.
The agenda has one simple goal:
Follow-up on and finalise Boston Forum proposals and assign leaders to
submit.
Cheers,
--
Blair Bethwaite
Senior HPC
Hi Chris,
On 17 Mar. 2017 15:24, "Chris Friesen" <chris.frie...@windriver.com> wrote:
On 03/16/2017 07:06 PM, Blair Bethwaite wrote:
Statement: breaks bin packing / have to match flavor dimensions to hardware
> dimensions.
> Comment: neither of these ring true to me giv
Dims, it might be overkill to introduce multi-Keystone + federation (I just
quickly skimmed the PDF so apologies if I have the wrong end of it)?
Jon, you could just have multiple cinder-volume services and backends. We
do this in the Nectar cloud - each site has cinder AZs matching nova AZs.
By
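Concretely, each site's cinder-volume just sets its AZ to match the local nova AZ (zone name below is an example):

```ini
# cinder.conf on the site's cinder-volume host
[DEFAULT]
storage_availability_zone = site-a
```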
On 22 March 2017 at 13:33, Jonathan Mills wrote:
>
> To what extent is it possible to “lock” a tenant to an availability zone,
> to guarantee that nova scheduler doesn’t land an ITAR VM (and possibly the
> wrong glance/cinder) into a non-ITAR space (and vice versa)…
>
Yes,
There have been previous proposals (and if memory serves, even some
blueprints) for API extensions to allow this but they have apparently
stagnated. On the face of it I think OpenStack should support this (more
choice = win!) - doesn't mean that every cloud needs to use the feature. Is
it worth
of
resources (volumes, floating IPs, etc.). Software licenses can be
another type.
==
(https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)
Cheers,
--
Blair Bethwaite
Senior HPC Consultant
Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk, Clayton Campus
Clayton
Hi Jay,
On 5 April 2017 at 03:21, Jay Pipes <jaypi...@gmail.com> wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
requests for the same
> aggregate).
>
> Is this feasible?
>
> Tim
>
> On 04.04.17, 19:21, "Jay Pipes" <jaypi...@gmail.com> wrote:
>
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> > Hi Jay,
> >
> > On 4 April 2017 at 00:20, Jay
1st_2017
> [2] http://eavesdrop.openstack.org/#Scientific_Working_Group
>
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
--
Blair Bethwaite
Sen
Hi Jay,
On 4 April 2017 at 00:20, Jay Pipes wrote:
> However, implementing the above in any useful fashion requires that Blazar
> be placed *above* Nova and essentially that the cloud operator turns off
> access to Nova's POST /servers API call for regular users. Because if
7 at 07:55, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> Hi all,
>
> I've been (very slowly) working on some docs detailing how to setup an
> OpenStack Nova Libvirt+QEMU-KVM deployment to provide GPU-accelerated
> instances. In Boston I hope to chat to some of
Hi Alex,
I just managed to take a half hour to look at this and have a few
questions/comments towards making a plan for how to proceed with
moving the Ops Guide content to the wiki...
1) Need to define wiki location and structure. Curiously at the moment
there is already meta content at
Please don't make these 400s - it should not be a client error to be
unaware of the service status ahead of time.
On 12 July 2017 at 11:18, Matt Riedemann wrote:
> I'm looking for some broader input on something being discussed in this
> change:
>
>
On 27 June 2017 at 23:47, Thierry Carrez wrote:
> Setting up a common ML for common discussions (openstack-sigs) will
> really help, even if there will be some pain setting them up and getting
> the right readership to them :)
It's worth a try! I agree it will probably
On 27 June 2017 at 23:47, Sean Dague wrote:
> I still think I've missed, or not grasped, during this thread how a SIG
> functions differently than a WG, besides name. Both in theory and practice.
I think for the most part SIG is just a more fitting moniker for some
of these
Resend for openstack-dev with proper list perms...
-- Forwarded message --
From: Blair Bethwaite <blair.bethwa...@monash.edu>
Date: 27 June 2017 at 23:24
Subject: [scientific] IRC Meeting (Tues 2100 UTC): Science app catalogues,
network security of research computing on Ope
> Stig
>
>
> On 21 Jun 2017, at 09:13, Blair Bethwaite <blair.bethwa...@monash.edu>
> wrote:
>
> Thanks Pierre. That's also my preference.
>
> Just to be clear, today's 0900 UTC meeting (45 mins from now) is going
> ahead at the usual time.
>
> On 21 Jun. 2
We at Nectar are in the same boat as Mike. Our use-case is a little
bit more about geo-distributed operations though - our Cells are in
different States around the country, so the local glance-apis are
particularly important for caching popular images close to the
nova-computes. We consider these
Hi all,
A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).
It would be great to see a good representation from both the Nova
Hey all,
Hopefully you've all noticed this by now, the timing of the WG
sessions (lightning talks, meeting, BoF) has changed a little since
first published. I've just updated the etherpad to reflect that now:
https://etherpad.openstack.org/p/Scientific-WG-boston
Tues 11:15am - 11:55am -
Morning all -
Apologies for the shotgun email. But looks like we still have one or
two spots available for lightning talks if anyone has work they want
to share and/or discuss:
https://etherpad.openstack.org/p/Scientific-WG-Boston-Lightning
Best regards,
On 28 April 2017 at 06:19, George
Hi all,
The Scientific-WG's 0900 UTC meeting time (it's the non-US friendly time)
is increasingly difficult for me to make. A couple of meetings back we
discussed changing it and had general agreement. The purpose here is to get
a straw poll of preferences for -2 or +2 to the current time, i.e.,
Thanks Pierre. That's also my preference.
Just to be clear, today's 0900 UTC meeting (45 mins from now) is going
ahead at the usual time.
On 21 Jun. 2017 5:21 pm, "Pierre Riteau" <prit...@uchicago.edu> wrote:
Hi Blair,
I strongly prefer 1100 UTC.
Pierre
> On 21 Jun 20
Hi Alex,
On 2 June 2017 at 23:13, Alexandra Settle wrote:
> O I like your thinking – I’m a pandoc fan, so, I’d be interested in
> moving this along using any tools to make it easier.
I can't realistically offer much time on this but I would be happy to
help (ad-hoc)
On 23 May 2017 at 05:33, Dan Smith wrote:
> Sure, the diaper exception is rescheduled currently. That should
> basically be things like misconfiguration type things. Rescheduling
> papers over those issues, which I don't like, but in the room it surely
> seemed like operators
Thanks Jay,
I wonder whether there is an easy-ish way to collect stats about the
sorts of errors deployers see in that catchall, so that when this
comes back around in a release or two there might be some less
anecdotal data available...?
Cheers,
On 24 May 2017 at 06:43, Jay Pipes
Hi Alex,
Likewise for option 3. If I recall correctly from the summit session
that was also the main preference in the room?
On 2 June 2017 at 11:15, George Mihaiescu wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle wrote: