Hi,
I am using the below command to boot the cirros-0.3.2-x86_64-uec image that's
present in devstack
by default...
nova boot --flavor m1.nano --image cirros-0.3.2-x86_64-uec --key_name
mykey --security_group default myvm_nano
nova list -> shows the instance as ACTIVE/Running
Taking the VNC console, I
The below issue was resolved (thanks to akerr on IRC).
It seems called_once_with is not a real function of mock and doesn't work
properly.
I need to use assertTrue(mock_func.called) instead, and that's working for me.
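For reference, a minimal sketch of the pitfall using unittest.mock (the standalone mock library behaves the same way); do_work is a hypothetical stand-in for the driver code:

```python
from unittest import mock

def do_work(logger):
    # Hypothetical driver code that logs a warning.
    logger.warning("share mount failed")

mock_func = mock.Mock()
do_work(mock_func)

# Mock auto-creates attributes, so a misspelled assertion such as
# mock_func.warning.called_once_with(...) just returns a new Mock and
# never raises -- it silently "passes". Checking .called is reliable:
assert mock_func.warning.called
assert not mock_func.debug.called
```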
thanx,
deepak
On Tue, Jun 3, 2014 at 9:46 PM, Deepak Shetty wrote:
> Hi, whats the r
something needs to be refactored).
>
> LOG statements, and calls should be expected to move/be removed *often*
> so testing functionality in tests with them seems like the wrong approach.
>
> My 2 cents.
>
> From: Deepak Shetty
> Reply-To: "OpenStack Deve
Hi, what's the right way to mock the LOG variable inside the
driver? I am using mock.patch.object(glusterfs, 'LOG') as mock_logger
and then doing...
mock_logger.warning.assert_called_once() - which passes and is
expected to pass per my code
but
mock_logger.debug.assert_called_once() - should f
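A runnable sketch of patching a module-level LOG; the SimpleNamespace below is a hypothetical stand-in for the real glusterfs driver module, which would use logging.getLogger(__name__):

```python
import logging
from types import SimpleNamespace
from unittest import mock

# Hypothetical stand-in for the glusterfs driver module and its LOG.
glusterfs = SimpleNamespace(LOG=logging.getLogger('glusterfs'))

def do_mount(mod):
    # Hypothetical driver code path that logs a warning.
    mod.LOG.warning("mount failed, retrying")

# Patch the module-level LOG for the duration of the with block.
with mock.patch.object(glusterfs, 'LOG') as mock_logger:
    do_mount(glusterfs)

assert mock_logger.warning.called      # the warning was logged
assert not mock_logger.debug.called    # debug was never called
```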
Mitsuhiro,
A few questions that come to my mind based on your proposal:
1) There is a lot of manual work needed here.. like every time a new host is
added, the admin needs to do FC zoning to ensure that the LU is visible to the
host. Also the method you mentioned for refreshing (echo '---' > ...)
doesn't w
I am looking for reviews of my patch so that I can close this out soon:
https://review.openstack.org/#/c/86888/
I appreciate your time.
thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman
On Mon, Apr 28, 2014 at 11:38 PM, Jay Pipes wrote:
> On 04/28/2014 02:00 PM, Deepak Shetty wrote:
>
>> I was writing this in test_glusterfs.py
>>
>> def test_ensure_shares_unmounted_1share(self):
>> with contextlib.nested(
>>
I was writing this in test_glusterfs.py
    def test_ensure_shares_unmounted_1share(self):
        with contextlib.nested(
            mock.patch.object(self._driver, '_load_shares_config'),
            mock.patch.object(self._driver, '_ensure_share_unmounted')
        ) as (self._fake_load_share
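A self-contained version of that test idea; FakeGlusterfsDriver and _unmount_shares are illustrative stand-ins, and note that contextlib.nested was Python 2 only -- on Python 3 one with statement takes several context managers:

```python
from unittest import mock

# Hypothetical driver sketch with the two methods the test patches.
class FakeGlusterfsDriver:
    def _load_shares_config(self):
        pass

    def _ensure_share_unmounted(self, share):
        pass

    def _unmount_shares(self, shares):
        self._load_shares_config()
        for share in shares:
            self._ensure_share_unmounted(share)

driver = FakeGlusterfsDriver()
# Python 3 replacement for contextlib.nested: one with statement
# (contextlib.ExitStack also works for a variable number of patches).
with mock.patch.object(driver, '_load_shares_config') as fake_load, \
     mock.patch.object(driver, '_ensure_share_unmounted') as fake_unmount:
    driver._unmount_shares(['host:/vol1'])

assert fake_load.called
fake_unmount.assert_called_once_with('host:/vol1')
```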
search tox-epep8
Warning: No matches found for: tox-epep8
No matches found
[stack@devstack-vm cinder]$
On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague wrote:
> On 04/28/2014 06:08 AM, Deepak Shetty wrote:
> > Hi,
> >
> > H703 Multiple positional placeholders
> >
>
i proceed ?
On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague wrote:
> On 04/28/2014 06:08 AM, Deepak Shetty wrote:
> > Hi,
> >
> > H703 Multiple positional placeholders
> >
> > I got this for one of my patches and googling I could find that the fix is
> > to use
>
Hi,
H703 Multiple positional placeholders
I got this for one of my patches, and from googling I could find that the fix is to
use
a dict instead of direct substitutes.. which I did.. but it still gives me the
error :(
Also, just running pep8 locally on my glusterfs.py file doesn't show any
issue,
but gerrit
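For reference, a minimal sketch of what H703 objects to; in the real code the format string would be wrapped in the `_()` translation marker, and the values below are illustrative:

```python
# H703 flags a translatable string with more than one positional
# placeholder, since translators may need to reorder the arguments.

# Flagged: two positional placeholders in one format string.
msg_bad = "Share %s mounted at %s" % ('host:/vol', '/mnt/gluster')

# Accepted: named placeholders fed from a dict, safe to reorder.
msg_good = ("Share %(share)s mounted at %(mount)s"
            % {'share': 'host:/vol', 'mount': '/mnt/gluster'})

assert msg_bad == msg_good == "Share host:/vol mounted at /mnt/gluster"
```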
Hi,
Can someone help me understand why the Jenkins build shows failures
for some of the tests for my patch at
https://review.openstack.org/#/c/86888/
I really couldn't understand them even after clicking those links.
TIA
thanx,
deepak
On Fri, Apr 11, 2014 at 7:29 PM, Duncan Thomas wrote:
> On 11 April 2014 14:21, Deepak Shetty wrote:
> > My argument was mostly from the perspective that unmanage should do its
> best
> > to revert the volume back to its original state (mainly the name).
> >
> > Li
On Tue, Apr 15, 2014 at 4:14 PM, Duncan Thomas wrote:
> On 11 April 2014 16:24, Eric Harney wrote:
>
>
> > I suppose I should also note that if the plans in this blueprint are
> > implemented the way I've had in mind, the main issue here about only
> > loading shares at startup time would be in p
On Thu, Apr 17, 2014 at 10:00 PM, Deepak Shetty wrote:
>
>
>
> On Fri, Apr 11, 2014 at 8:25 PM, Eric Harney wrote:
>
>> On 04/11/2014 07:54 AM, Deepak Shetty wrote:
>> > Hi,
>> >I am using the nfs and glusterfs driver as reference here.
>>
On Fri, Apr 11, 2014 at 8:25 PM, Eric Harney wrote:
> On 04/11/2014 07:54 AM, Deepak Shetty wrote:
> > Hi,
> >I am using the nfs and glusterfs driver as reference here.
> >
> > I see that load_shares_config is called every time via
> > _ensure_shares_mounted
> Cloud Solutions Group
> NetApp
>
>
> From: Deepak Shetty
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
> Date: Friday, April 11, 2014 at 7:54 AM
> To: "OpenStack Development Mailing List (not for usage
?
On Thu, Apr 10, 2014 at 10:50 PM, Duncan Thomas wrote:
> On 10 April 2014 09:02, Deepak Shetty wrote:
>
> > Ok, agreed. But then when the admin unmanages it, we should rename it back to
> the
> > name
> > that it originally had before it was managed by cinder. At least that
Hi,
I am using the nfs and glusterfs driver as reference here.
I see that load_shares_config is called every time via
_ensure_shares_mounted, which I feel is incorrect, mainly because
_ensure_shares_mounted reloads the config file without the service restarting.
I think that the shares config file
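The concern above can be sketched with an illustrative driver (not the actual Cinder code); the class and file path here are assumptions for the example:

```python
class FakeShareDriver:
    """Illustrative only, not the actual Cinder driver."""

    def __init__(self, config_file):
        self.config_file = config_file
        self.shares = []

    def _load_shares_config(self):
        # Pretend to parse the shares config file.
        self.shares = ['host:/vol1']

    def do_setup(self):
        # Read the shares config once, at service startup ...
        self._load_shares_config()

    def _ensure_shares_mounted(self):
        # ... instead of re-reading it here, where every periodic call
        # would pick up config edits without a service restart.
        return list(self.shares)

driver = FakeShareDriver('/etc/cinder/glusterfs_shares')
driver.do_setup()
assert driver._ensure_shares_mounted() == ['host:/vol1']
```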
On Wed, Apr 9, 2014 at 9:39 PM, Duncan Thomas wrote:
> On 9 April 2014 08:35, Deepak Shetty wrote:
>
> > Alternatively, does this mean we need to make name_id a generic field
> (not an
> > ID) and then use something like uuidutils.is_uuid_like() to determine if
> its
>
On Tue, Apr 8, 2014 at 6:24 PM, Avishay Traeger wrote:
> On Tue, Apr 8, 2014 at 9:17 AM, Deepak Shetty wrote:
>
>> Hi List,
>> I had few Qs on the implementation of manage_existing and unmanage
>> API extns
>>
>> 1) For LVM case, it renames the lv..
Hi List,
I had a few questions on the implementation of the manage_existing and unmanage API
extensions.
1) For the LVM case, it renames the LV.. isn't it better to use name_id (the one
used during cinder migrate to keep the id the same for a different backend name/id) to
map the Cinder name/id to the backend name/id and thus avoid renaming?
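The name_id idea can be sketched as follows; this roughly mirrors how Cinder derives the backend volume name, but the template and values here are illustrative, not the real implementation:

```python
# Illustrative: name_id decouples Cinder's volume id from the backend name.
def backend_name(volume, template='volume-%s'):
    # Roughly what Cinder's volume name property does: prefer name_id.
    return template % (volume['name_id'] or volume['id'])

volume = {'id': 'cinder-uuid', 'name_id': None}
assert backend_name(volume) == 'volume-cinder-uuid'

# manage_existing could record the original backend id in name_id
# instead of renaming the LV; unmanage then has nothing to rename back.
volume['name_id'] = 'original-lv-uuid'
assert backend_name(volume) == 'volume-original-lv-uuid'
```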
Cinder provides a backup/restore API for cinder volumes. Will that be used as
part of the higher-level VM import/export orchestration, or are the two
totally different? If different, how?
On Mon, Apr 7, 2014 at 12:54 PM, Jesse Pretorius
wrote:
> On 7 April 2014 09:06, Saju M wrote:
>
>> Amazon pro
similar is missing on Cinder side and __del__ way of cleanup isn't working
as I posted above.
On Mon, Apr 7, 2014 at 10:24 AM, Deepak Shetty wrote:
> Duncan,
> Thanks for your response. Though I agree with what you said.. I am still
> trying to understand why I see what I see.
down cleanly (kill -9, SEGFAULT, etc), or
> something might have gone wrong during clean shutdown. The driver
> coming up should therefore not make any assumptions it doesn't
> absolutely have to, but rather should check and attempt cleanup
> itself, on startup.
>
> O
Shiva,
Can you tell what exactly you are trying to change in /opt/stack/ ?
My guess is that you might be running into stack.sh re-pulling the sources and
hence overriding your changes. Try with OFFLINE=True in localrc (create a
localrc file in /opt/stack/ and put OFFLINE=True) and redo stack.sh.
On Thu, Apr
Resending it with the correct cinder prefix in the subject.
thanx,
deepak
On Thu, Apr 3, 2014 at 7:44 PM, Deepak Shetty wrote:
>
> Hi,
> I am looking to unmount the glusterfs shares that are mounted as part
> of the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
>
Hi,
I am looking to unmount the glusterfs shares that are mounted as part of the
gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in a devstack
env) or when the c-vol service is being shut down.
I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it didn't
work:
def __del__(se
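Since __del__ is not guaranteed to run at interpreter shutdown (which is likely why it didn't work here), one alternative is atexit plus signal handlers; this is a sketch under that assumption, with mounted_shares and unmount_all as illustrative stand-ins for the driver's state and unmount helper:

```python
import atexit
import signal
import sys

mounted_shares = ['host:/vol1', 'host:/vol2']  # illustrative state

def unmount_all():
    # Stand-in for calling the driver's unmount helper per share.
    while mounted_shares:
        mounted_shares.pop()

# atexit runs on normal interpreter exit, unlike __del__, which may
# never be invoked during shutdown.
atexit.register(unmount_all)

def _on_signal(signum, frame):
    # SIGTERM (service stop) and SIGINT (Ctrl-C) exit cleanly, which
    # triggers the atexit handler. kill -9 cannot be caught, so startup
    # should still check for and clean up stale mounts.
    sys.exit(0)

signal.signal(signal.SIGTERM, _on_signal)
signal.signal(signal.SIGINT, _on_signal)
```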
might have to change for the workload to run
> at the secondary site. Applying this metadata to VMs on the secondary site
> (what needs to change in the personality), when they boot, is probably
> something Heat can do.
>
>
>
>
>
> -bruce
>
>
>
> *Fro
Hi List,
I was looking at the etherpad and March 19 notes and have few Qs
1) How is the "DR middleware" (depicted in Ron's youtube video) different
than the "replication agent" (noted in the March 19 etherpad notes). Are
they same, if not, how/why are they different ?
2) Maybe a dumb Q.. but
My apologies! I mistakenly thought that SELinux was permissive; it wasn't!
After making it permissive, rabbitmq-server gets started every time without any
issues now.
thanx,
deepak
> To: openstack-dev@lists.openstack.org
>
> Hi List,
> It's been a few hours and I tried everything from en