Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-02 Thread Woods, Ken A (DNR)
  1.  Purge the OSD from the cluster. This removes it from the CRUSH map,
removes its authentication key, and removes it from the OSD map (a sketch of
the full command sequence follows these steps):

ceph osd purge {id} --yes-i-really-mean-it


  2.  Navigate to the host where you keep the master copy of the cluster’s 
ceph.conf file.

ssh {admin-host}
cd /etc/ceph
vim ceph.conf


  3.  Remove the OSD entry from your ceph.conf file (if it exists).

[osd.1]
host = {hostname}
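
For reference, a minimal sketch of the full manual-removal sequence from the
Ceph page linked earlier in this thread, assuming the OSD in question is
osd.1 (substitute your own id). The OSD is normally marked out and its daemon
stopped on its host before the purge in step 1:

ceph osd out 1
systemctl stop ceph-osd@1
ceph osd purge 1 --yes-i-really-mean-it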



From: Woods, Ken A (DNR)
Sent: Monday, July 2, 2018 4:48:30 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd


You're thinking "Proxmox". Try thinking "Ceph" instead. Sure, Ceph runs with
Proxmox, but what you're really doing is using a pretty GUI that sits on top
of Debian, running Ceph and KVM.


Anyway, perhaps the GUI does all the steps needed?  Perhaps not.


If it were me, I'd NOT reinstall, as that's likely not going to fix the issue.


Follow the directions on the page I linked and see if that helps.


From: pve-user  on behalf of Mark Adams 

Sent: Monday, July 2, 2018 4:41:39 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd

Hi, thanks for your response!

No, I didn't do any of that on the CLI - I just did stop in the web GUI,
then out, then destroy.

Note that there were no VMs or data at all on this test Ceph cluster - I
had deleted it all before doing this. I was basically just removing it all
so the OSD numbers looked "nicer" for the final setup.

It's not a huge deal, I can just reinstall Proxmox. But it concerns me that
it seems so fragile using the web GUI to do this. I want to know where I
went wrong. Is there somewhere that a signature is being stored, so that
when you try to add that same drive again (even though I ticked "remove
partitions") it doesn't get added back into the Ceph cluster with the next
sequential number after the last "live" or "valid" drive?

Is it just a rule that you never actually remove drives, and just set them
stopped/out?

Regards,
Mark



On 3 July 2018 at 01:34, Woods, Ken A (DNR)  wrote:

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> 
> From: pve-user  on behalf of Mark Adams
> 
> Sent: Monday, July 2, 2018 4:05:51 PM
> To: pve-user@pve.proxmox.com
> Subject: [PVE-User] pveceph createosd after destroyed osd
>
> Currently running the newest 5.2-1 version, I had a test cluster which was
> working fine. I have since added more disks, first stopping, then setting
> out, then destroying each OSD so I could recreate it all from scratch.
>
> However, when adding a new OSD (either via the GUI or the pveceph CLI), it
> seems to show a successful create, but it does not show up in the GUI as an
> OSD under the host.
>
> It's like the OSD information is being stored by Proxmox/Ceph somewhere
> else and not being correctly removed and recreated?
>
> I can see that the newly created disk (after being destroyed) is
> down/out.
>
> Is this by design? Is there a way to force the disk back? Shouldn't it show
> in the GUI once you create it again?
>
> Thanks!
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-02 Thread Woods, Ken A (DNR)
You're thinking "Proxmox". Try thinking "Ceph" instead. Sure, Ceph runs with
Proxmox, but what you're really doing is using a pretty GUI that sits on top
of Debian, running Ceph and KVM.


Anyway, perhaps the GUI does all the steps needed?  Perhaps not.


If it were me, I'd NOT reinstall, as that's likely not going to fix the issue.


Follow the directions on the page I linked and see if that helps.


From: pve-user  on behalf of Mark Adams 

Sent: Monday, July 2, 2018 4:41:39 PM
To: PVE User List
Subject: Re: [PVE-User] pveceph createosd after destroyed osd

Hi, thanks for your response!

No, I didn't do any of that on the CLI - I just did stop in the web GUI,
then out, then destroy.

Note that there were no VMs or data at all on this test Ceph cluster - I
had deleted it all before doing this. I was basically just removing it all
so the OSD numbers looked "nicer" for the final setup.

It's not a huge deal, I can just reinstall Proxmox. But it concerns me that
it seems so fragile using the web GUI to do this. I want to know where I
went wrong. Is there somewhere that a signature is being stored, so that
when you try to add that same drive again (even though I ticked "remove
partitions") it doesn't get added back into the Ceph cluster with the next
sequential number after the last "live" or "valid" drive?

Is it just a rule that you never actually remove drives, and just set them
stopped/out?

Regards,
Mark



On 3 July 2018 at 01:34, Woods, Ken A (DNR)  wrote:

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> 
> From: pve-user  on behalf of Mark Adams
> 
> Sent: Monday, July 2, 2018 4:05:51 PM
> To: pve-user@pve.proxmox.com
> Subject: [PVE-User] pveceph createosd after destroyed osd
>
> Currently running the newest 5.2-1 version, I had a test cluster which was
> working fine. I have since added more disks, first stopping, then setting
> out, then destroying each OSD so I could recreate it all from scratch.
>
> However, when adding a new OSD (either via the GUI or the pveceph CLI), it
> seems to show a successful create, but it does not show up in the GUI as an
> OSD under the host.
>
> It's like the OSD information is being stored by Proxmox/Ceph somewhere
> else and not being correctly removed and recreated?
>
> I can see that the newly created disk (after being destroyed) is
> down/out.
>
> Is this by design? Is there a way to force the disk back? Shouldn't it show
> in the GUI once you create it again?
>
> Thanks!
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-02 Thread Mark Adams
Hi, thanks for your response!

No, I didn't do any of that on the CLI - I just did stop in the web GUI,
then out, then destroy.

Note that there were no VMs or data at all on this test Ceph cluster - I
had deleted it all before doing this. I was basically just removing it all
so the OSD numbers looked "nicer" for the final setup.

It's not a huge deal, I can just reinstall Proxmox. But it concerns me that
it seems so fragile using the web GUI to do this. I want to know where I
went wrong. Is there somewhere that a signature is being stored, so that
when you try to add that same drive again (even though I ticked "remove
partitions") it doesn't get added back into the Ceph cluster with the next
sequential number after the last "live" or "valid" drive?

Is it just a rule that you never actually remove drives, and just set them
stopped/out?

Regards,
Mark



On 3 July 2018 at 01:34, Woods, Ken A (DNR)  wrote:

> http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> 
> From: pve-user  on behalf of Mark Adams
> 
> Sent: Monday, July 2, 2018 4:05:51 PM
> To: pve-user@pve.proxmox.com
> Subject: [PVE-User] pveceph createosd after destroyed osd
>
> Currently running the newest 5.2-1 version, I had a test cluster which was
> working fine. I have since added more disks, first stopping, then setting
> out, then destroying each OSD so I could recreate it all from scratch.
>
> However, when adding a new OSD (either via the GUI or the pveceph CLI), it
> seems to show a successful create, but it does not show up in the GUI as an
> OSD under the host.
>
> It's like the OSD information is being stored by Proxmox/Ceph somewhere
> else and not being correctly removed and recreated?
>
> I can see that the newly created disk (after being destroyed) is
> down/out.
>
> Is this by design? Is there a way to force the disk back? Shouldn't it show
> in the GUI once you create it again?
>
> Thanks!
>
>
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pveceph createosd after destroyed osd

2018-07-02 Thread Woods, Ken A (DNR)
http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual

Are you sure you followed the directions?


From: pve-user  on behalf of Mark Adams 

Sent: Monday, July 2, 2018 4:05:51 PM
To: pve-user@pve.proxmox.com
Subject: [PVE-User] pveceph createosd after destroyed osd

Currently running the newest 5.2-1 version, I had a test cluster which was
working fine. I have since added more disks, first stopping, then setting
out, then destroying each OSD so I could recreate it all from scratch.

However, when adding a new OSD (either via the GUI or the pveceph CLI), it
seems to show a successful create, but it does not show up in the GUI as an
OSD under the host.

It's like the OSD information is being stored by Proxmox/Ceph somewhere
else and not being correctly removed and recreated?

I can see that the newly created disk (after being destroyed) is
down/out.

Is this by design? Is there a way to force the disk back? Shouldn't it show
in the GUI once you create it again?

Thanks!
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pveceph createosd after destroyed osd

2018-07-02 Thread Mark Adams
Currently running the newest 5.2-1 version, I had a test cluster which was
working fine. I have since added more disks, first stopping, then setting
out, then destroying each OSD so I could recreate it all from scratch.

However, when adding a new OSD (either via the GUI or the pveceph CLI), it
seems to show a successful create, but it does not show up in the GUI as an
OSD under the host.

It's like the OSD information is being stored by Proxmox/Ceph somewhere
else and not being correctly removed and recreated?

I can see that the newly created disk (after being destroyed) is
down/out.

Is this by design? Is there a way to force the disk back? Shouldn't it show
in the GUI once you create it again?

Thanks!
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] which service is responsible for mounting the NFS storages during the startup of the Proxmox?

2018-07-02 Thread Thomas Lamprecht
On 7/2/18 4:16 AM, Vinicius Barreto wrote:
> Hello,
> please, could anyone tell me which service is responsible for
> mounting the NFS storages during Proxmox startup?
> Note: they were added via the GUI or pvesm.
> 

We have logic that activates volumes as we need them.
E.g., when a VM is started, we activate its volume, which
in turn mounts the underlying storage if it is not already mounted.

pvestatd checks on all configured, enabled storages and also mounts
them to be able to get their status/usage information.

> I ask because there is another service that I added to the server, and it
> depends on the NFS drives being mounted at startup.
> So I plan to configure my service to start after the service that mounts
> the NFS drives during Proxmox boot.
> 

IMO you should add an After=/Requires= dependency on pve-storage.target and
pvestatd.service - you could then additionally wait heuristically for a bit
to ensure that all storages are mounted.
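
A minimal sketch of what that ordering could look like as a systemd drop-in,
assuming your own service is called my-nfs-consumer.service (a hypothetical
name):

# /etc/systemd/system/my-nfs-consumer.service.d/pve-storage.conf
[Unit]
After=pve-storage.target pvestatd.service
Requires=pve-storage.target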

Or you could use the PVE libraries to activate all storages (i.e., mount them).

It could look like this:


#!/usr/bin/perl

use strict;
use warnings;

use PVE::Storage;
use PVE::Cluster;

# make sure the cached cluster state (pmxcfs) is up to date
PVE::Cluster::cfs_update();

# collect the ids of all storages defined in the PVE storage configuration
my $storage_cfg = PVE::Storage::config();
my $all_sids = [ keys %{$storage_cfg->{ids}} ];

# activate every storage in the list; this mounts network storages if needed
PVE::Storage::activate_storage_list($storage_cfg, $all_sids);


This could be done in a pre-start hook.
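
As a sketch under the same assumptions as above, if the snippet were saved as
/usr/local/bin/activate-pve-storages.pl (a hypothetical path), the hypothetical
my-nfs-consumer.service could run it as a pre-start hook:

[Service]
ExecStartPre=/usr/bin/perl /usr/local/bin/activate-pve-storages.pl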

cheers,
Thomas


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user