'2013.1' is the official release version; it reflects that it is the
first release made during the year 2013.
I hope this helps; if I'm mistaken, someone will correct me.
--
Best regards,
Oleg Gelbukh
Sr. IT Engineer
Mirantis, Inc.
On Wed, Mar 27, 2013 at 8:07 AM, heckj wrote:
>
Hello, Ning,
On the second question: Keystone 'tenant' maps to 'account' in Swift.
Keystone 'user' directly corresponds to Swift 'user'.
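This mapping is typically implemented by Swift's keystoneauth middleware, which combines a reseller prefix (by default 'AUTH_') with the Keystone tenant ID to form the Swift account name. A minimal sketch of that derivation (names here are illustrative, not from the thread):

```python
# Sketch of how Swift's keystoneauth middleware derives the account
# name from a Keystone tenant ID; "AUTH_" is the default
# reseller_prefix, and the tenant ID below is made up.
def swift_account_for_tenant(tenant_id, reseller_prefix="AUTH_"):
    """Map a Keystone tenant ID to the Swift account string."""
    return reseller_prefix + tenant_id

print(swift_account_for_tenant("a1b2c3d4"))  # AUTH_a1b2c3d4
```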
--
Best regards,
Oleg Gelbukh
Mirantis IT
On Tue, Nov 13, 2012 at 12:19 PM, ning2008wisc wrote:
> Thanks Alex!
gestions for replication
optimizations and possibly get some feedback about our approach.
--
Best regards,
Oleg Gelbukh
Mirantis, Inc.
On Thu, Nov 1, 2012 at 7:55 PM, John Dickinson wrote:
> This is already supported in Swift with the concept of availability zones.
> Swift will place each rep
Tim,
It's possible that a SAN appliance used to provide storage to VMs under
Cinder management will need to plug a logical port directly into the tenant
network. In that case, it seems it should actually be Quantum performing
the plug, probably through some specialized agent.
--
Best regards,
Oleg
Hello,
Is it possible that, during snapshotting, libvirt simply tears down the
virtual interface at some point and then re-creates it, with hairpin_mode
disabled again?
This bugfix [https://bugs.launchpad.net/nova/+bug/933640] implies that the fix
works on spawn of an instance. This means that upon resume aft
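The hairpin setting in question can be inspected directly through sysfs. A small sketch (the bridge and port names are hypothetical; substitute the ones libvirt created for your instance):

```shell
# Inspect the hairpin setting of a bridge port. br100/vnet0 are
# placeholder names, not taken from the thread.
port=vnet0
bridge=br100
f="/sys/class/net/$bridge/brif/$port/hairpin_mode"
if [ -r "$f" ]; then
    state=$(cat "$f")      # 1 = hairpin enabled, 0 = disabled
else
    state="port not found"
fi
echo "$state"
```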
in glance-api.conf
files on all compute nodes.
--
Best regards,
Oleg Gelbukh
Mirantis Inc
On Thu, Aug 16, 2012 at 10:42 PM, Lars Kellogg-Stedman <
l...@seas.harvard.edu> wrote:
> Assuming some sort of shared filesystem, can I run multiple glance
> indexes in order to distribute the i/
Eugene,
I suggest just adding an option 'rabbit_servers' that overrides the
'rabbit_host'/'rabbit_port' pair when present. This won't break anything,
as far as I understand.
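The proposed fallback could look something like the sketch below (the 'rabbit_servers' option and its comma-separated host:port format are the suggestion from this mail, not an existing OpenStack option):

```python
# Sketch of the proposed precedence: a hypothetical 'rabbit_servers'
# option (comma-separated host:port list) wins over the legacy
# 'rabbit_host'/'rabbit_port' pair when it is set.
def rabbit_endpoints(conf):
    servers = conf.get("rabbit_servers")
    if servers:
        # new-style option: parse each "host:port" entry
        return [tuple(s.split(":")) for s in servers.split(",")]
    # fall back to the existing single-host pair
    return [(conf.get("rabbit_host", "localhost"),
             conf.get("rabbit_port", "5672"))]

print(rabbit_endpoints({"rabbit_servers": "mq1:5672,mq2:5672"}))
# [('mq1', '5672'), ('mq2', '5672')]
```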
--
Best regards,
Oleg Gelbukh
Mirantis, Inc.
On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov
Gabriel,
There is a folsom-targeted blueprint for #2 at least:
https://blueprints.launchpad.net/nova/+spec/auto-create-boot-volumes
--
Oleg
On Sun, May 27, 2012 at 10:51 PM, Gabriel Hurley
wrote:
> To the best of my understanding there are two parts to this, neither of
> which is fully where i
Gentlemen,
We have a feature for the swift3 middleware that we'd like to propose for
merge. How can we do this now that it has been split out into an associated
project? How has the procedure changed?
--
Best regards,
Oleg Gelbukh
Mirantis Inc.
On Mon, May 21, 2012 at 8:01 PM, Chmouel Boudjnah wrote:
Hello,
You need to do exactly what is written in the last paragraph of your mail:
use -k with curl to turn off certificate verification.
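For scripted clients, the equivalent of curl's -k flag in Python's standard library is an SSL context with verification disabled; a minimal sketch (the Keystone endpoint is illustrative, and this is for testing only, never for production):

```python
# Equivalent of `curl -k`: an SSL context that skips certificate
# verification. check_hostname must be disabled before verify_mode
# is set to CERT_NONE.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Usage (hypothetical endpoint):
# urllib.request.urlopen("https://keystone.example.com:5000/v2.0/",
#                        context=ctx)
```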
--
Best regards,
Oleg
On Thu, May 3, 2012 at 1:46 PM, khabou imen wrote:
> hi everybody ,
> when trying to upload images using keystone for authentification I go
Hello, Eric
Swift is actually an object store rather than a volume store. It is used for
storing any type of object as files in the underlying file system. These
files can be anything, including binary images of block volumes. HTTP is
used for transporting objects to and from the store.
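Concretely, storing an object is just an HTTP PUT against the object's URL. A sketch of such a request built with the standard library (the URL, account, container, and token are illustrative; nothing is actually sent here):

```python
# Shape of a Swift-style object upload: PUT to
# /v1/<account>/<container>/<object> with an auth token header.
# All names below are placeholders, not from the thread.
from urllib.request import Request

obj_url = "http://swift.example.com/v1/AUTH_demo/backups/volume1.img"
req = Request(obj_url,
              data=b"<binary volume image bytes>",
              method="PUT",
              headers={"X-Auth-Token": "<token>"})

print(req.get_method(), req.full_url)
```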
Nova-volume ser
Hello, Julien
Thanks for the insightful numbers!
It really does seem that having one node per zone in a Swift cluster is
inefficient, and that it may be better to have more storage nodes that are
less CPU-packed, with fewer devices per node, than a few high-performance
nodes with disk shelves.
--
Best regards,
Oleg
On Wed
Alex,
Thank you for the important point and the interesting information on
large-scale Swift performance!
Could you please explain what these times stand for? Is this the runtime of
a single process, the time needed for the cluster to converge after a
device failure, or something else?
--
Best regards,
Ol
Hello,
We are interested in participating, and look forward to talking with all
the Nova block storage developers.
--
Best regards,
Oleg
On Tue, Feb 14, 2012 at 2:31 AM, John Griffith
wrote:
> Hi Bob,
> Just pop into IRC: #openstack-meeting
>
> John
>
> On Mon, Feb 13, 2012 at 3:17 PM, Bob Van Zant wrot
Jesse,
Thank you for the quick answer and the interesting information. Personally,
I like the idea of multiple projects forming an ecosystem around the
OpenStack core.
--
Best regards,
Oleg Gelbukh
Mirantis Inc.
On Mon, Feb 6, 2012 at 10:13 PM, Jesse Andrews wrote:
> Hi Oleg,
>
> NOTE: this is my opini
to the
projects incubator, or are there other LBaaS projects out there?
Thanks in advance,
--
Oleg Gelbukh
Mirantis Inc.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https
k-end. NexentaStor is based on
OpenSolaris and uses ZFS file system for storage. Volumes are exported via
iSCSI.
--
Best regards,
Oleg Gelbukh
Sr. Engineer
Mirantis Inc.
ck storage
service, or was it decided to focus on the integrated service first and
then expand it into a dedicated project later? Or does Lunr development
continue somewhere?
--
Best regards,
Oleg Gelbukh
Sr. Engineer
Mirantis Inc.
/O system
than that based on disk image file.
--
Oleg Gelbukh,
Sr. IT Engineer
Mirantis Inc.
On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh wrote:
> I've recently had inquiries about High Performance Computing (HPC) on
> Openstack. As opposed to the Service Provider (SP) model, HPC is inte
Hello everyone,
A bit of follow-up
<http://mirantis.blogspot.com/2011/06/clustered-lvm-on-drbd-resource-in.html>
on the subject, concerning clustered locking for the LVM-on-DRBD resource.
On Mon, May 30, 2011 at 11:04 AM, Oleg Gelbukh wrote:
> The current OpenStack paradigm seems to
Diego Parrilla
> >CEO
> >[1]www.stackops.com | [2]diego.parri...@stackops.com | +34 649 94 43 29 |
> >skype:diegoparrilla
> >
> >On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh <[3]
> ogelb...@mirantis.com>
> >wrote:
>
nterested I will be
> happy to be in a call and explain what we are doing.
>
> Nelson Nahum
> CTO
> nel...@zadarastorage.com
>
>
>
> 2011/5/27 Oleg Gelbukh
>
>> Hi
>> Our approach was defined by need to combine storage and compute on the
>> same host
| +34 649 94 43 29 |
> skype:diegoparrilla
> <http://www.stackops.com>
>
>
>
> On Thu, May 26, 2011 at 1:29 PM, Oleg Gelbukh wrote:
>
>> Hi,
>> We were researching Openstack for our private cloud, and want to share
>> experience and get tips from com
Hi,
We were researching OpenStack for our private cloud, and want to share our
experience and get tips from the community as we go.
We have settled on DRBD as the shared storage platform for our installation.
LVM is used over the DRBD device to manage logical volumes. An OCFS2 file
system is created on one of
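The stack described above can be sketched as the following setup sequence. This is a rough outline under stated assumptions, not the exact commands from the installation: the DRBD resource name, volume group name, and sizes are all hypothetical, and it should not be run as-is.

```shell
# Rough sketch of the DRBD + LVM + OCFS2 stack (names are made up).
drbdadm up r0                        # bring up the DRBD resource
drbdadm primary r0                   # promote it on this node
pvcreate /dev/drbd0                  # LVM physical volume on the DRBD device
vgcreate nova-volumes /dev/drbd0     # volume group over DRBD
lvcreate -L 10G -n vol1 nova-volumes # carve out a logical volume
mkfs.ocfs2 /dev/nova-volumes/vol1    # cluster file system on one of the LVs
```

In a dual-primary DRBD setup like this, a cluster-aware file system such as OCFS2 (with its distributed lock manager) is what allows both nodes to mount the volume safely at the same time.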