Re: [ceph-users] ceph migration

2019-02-27 Thread John Hearns
We did a similar upgrade on a test system yesterday, from mimic to nautilus.
All of the PGs stayed offline until we issued this command:

ceph osd require-osd-release nautilus --yes-i-really-mean-it
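
To confirm the flag took effect, something like this should show the new
release (output quoted from memory, the exact wording may differ per version):

# ceph osd dump | grep require_osd_release
require_osd_release nautilus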

On Wed, 27 Feb 2019 at 12:19, Zhenshi Zhou  wrote:

> Hi,
>
> The servers have moved to the new datacenter and I got it online
> following the instruction.
>
> # ceph -s
>   cluster:
> id: 7712ab7e-3c38-44b3-96d3-4e1de9da0ff6
> health: HEALTH_OK
>
>   services:
> mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3
> mgr: ceph-mon3(active), standbys: ceph-mon1, ceph-mon2
> mds: cephfs-1/1/1 up  {0=ceph-mds=up:active}, 1 up:standby
> osd: 63 osds: 63 up, 63 in
>
>   data:
> pools:   4 pools, 640 pgs
> objects: 108.6 k objects, 379 GiB
> usage:   1.3 TiB used, 228 TiB / 229 TiB avail
> pgs: 640 active+clean
>
> Thanks guys:)
>
> Eugen Block wrote on Wed, 27 Feb 2019 at 02:45:
>
>> Hi,
>>
>> > Well, I've just reacted to all the text at the beginning of
>> >
>> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
>> > including the title "the messy way". If the cluster is clean I see no
>> > reason for doing brain surgery on monmaps
>> > just to "save" a few minutes of redoing correctly from scratch.
>>
>> with that I would agree. Careful planning and an installation
>> following the docs should be first priority. But I would also
>> encourage users to experiment with ceph before going into production.
>> Dealing with failures and outages on a production cluster causes much
>> more headache than on a test cluster. ;-)
>>
>> If the cluster is empty anyway, I would also rather reinstall it, it
>> doesn't take that much time. I just wanted to point out that there is
>> a way that worked for me, although that was only a test cluster.
>>
>> Regards,
>> Eugen
>>
>>
>> Quoting Janne Johansson:
>>
>> > On Mon, 25 Feb 2019 at 13:40, Eugen Block wrote:
>> >> I just moved a (virtual lab) cluster to a different network, it worked
>> >> like a charm.
>> >> In an offline method - you need to:
>> >>
>> >> - set osd noout, ensure there are no OSDs up
>> >> - Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
>> >> ADDRESS", MONs are the only ones really
>> >> sticky with the IP
>> >> - Ensure ceph.conf has the new MON IPs and network IPs
>> >> - Start MONs with new monmap, then start OSDs
>> >>
>> >> > No, certain ips will be visible in the databases, and those will
>> >> not change.
>> >> I'm not sure where old IPs will be still visible, could you clarify
>> >> that, please?
>> >
>> > Well, I've just reacted to all the text at the beginning of
>> >
>> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
>> > including the title "the messy way". If the cluster is clean I see no
>> > reason for doing brain surgery on monmaps
>> > just to "save" a few minutes of redoing correctly from scratch. What
>> > if you miss some part, some command gives you an error
>> > you really aren't comfortable with, something doesn't really feel
>> > right after doing it, then the whole lifetime of that cluster
>> > will be followed by a small nagging feeling that it might have been
>> > that time you followed a guide that tries to talk you out of
>> > doing it that way, for a cluster with no data.
>> >
>> > I think that is the wrong way to learn how to run clusters.
>> >
>> > --
>> > May the most significant bit of your life be positive.
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-27 Thread Zhenshi Zhou
Hi,

The servers have been moved to the new datacenter and I got the cluster back
online following the instructions.

# ceph -s
  cluster:
id: 7712ab7e-3c38-44b3-96d3-4e1de9da0ff6
health: HEALTH_OK

  services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3
mgr: ceph-mon3(active), standbys: ceph-mon1, ceph-mon2
mds: cephfs-1/1/1 up  {0=ceph-mds=up:active}, 1 up:standby
osd: 63 osds: 63 up, 63 in

  data:
pools:   4 pools, 640 pgs
objects: 108.6 k objects, 379 GiB
usage:   1.3 TiB used, 228 TiB / 229 TiB avail
pgs: 640 active+clean

Thanks guys:)

Eugen Block wrote on Wed, 27 Feb 2019 at 02:45:

> Hi,
>
> > Well, I've just reacted to all the text at the beginning of
> >
> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
> > including the title "the messy way". If the cluster is clean I see no
> > reason for doing brain surgery on monmaps
> > just to "save" a few minutes of redoing correctly from scratch.
>
> with that I would agree. Careful planning and an installation
> following the docs should be first priority. But I would also
> encourage users to experiment with ceph before going into production.
> Dealing with failures and outages on a production cluster causes much
> more headache than on a test cluster. ;-)
>
> If the cluster is empty anyway, I would also rather reinstall it, it
> doesn't take that much time. I just wanted to point out that there is
> a way that worked for me, although that was only a test cluster.
>
> Regards,
> Eugen
>
>
> Quoting Janne Johansson:
>
> > On Mon, 25 Feb 2019 at 13:40, Eugen Block wrote:
> >> I just moved a (virtual lab) cluster to a different network, it worked
> >> like a charm.
> >> In an offline method - you need to:
> >>
> >> - set osd noout, ensure there are no OSDs up
> >> - Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
> >> ADDRESS", MONs are the only ones really
> >> sticky with the IP
> >> - Ensure ceph.conf has the new MON IPs and network IPs
> >> - Start MONs with new monmap, then start OSDs
> >>
> >> > No, certain ips will be visible in the databases, and those will
> >> not change.
> >> I'm not sure where old IPs will be still visible, could you clarify
> >> that, please?
> >
> > Well, I've just reacted to all the text at the beginning of
> >
> http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
> > including the title "the messy way". If the cluster is clean I see no
> > reason for doing brain surgery on monmaps
> > just to "save" a few minutes of redoing correctly from scratch. What
> > if you miss some part, some command gives you an error
> > you really aren't comfortable with, something doesn't really feel
> > right after doing it, then the whole lifetime of that cluster
> > will be followed by a small nagging feeling that it might have been
> > that time you followed a guide that tries to talk you out of
> > doing it that way, for a cluster with no data.
> >
> > I think that is the wrong way to learn how to run clusters.
> >
> > --
> > May the most significant bit of your life be positive.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-26 Thread Eugen Block

Hi,


Well, I've just reacted to all the text at the beginning of
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
including the title "the messy way". If the cluster is clean I see no
reason for doing brain surgery on monmaps
just to "save" a few minutes of redoing correctly from scratch.


with that I would agree. Careful planning and an installation  
following the docs should be first priority. But I would also  
encourage users to experiment with ceph before going into production.  
Dealing with failures and outages on a production cluster causes much  
more headache than on a test cluster. ;-)


If the cluster is empty anyway, I would also rather reinstall it, it  
doesn't take that much time. I just wanted to point out that there is  
a way that worked for me, although that was only a test cluster.


Regards,
Eugen


Quoting Janne Johansson:


On Mon, 25 Feb 2019 at 13:40, Eugen Block wrote:

I just moved a (virtual lab) cluster to a different network, it worked
like a charm.
In an offline method - you need to:

- set osd noout, ensure there are no OSDs up
- Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
ADDRESS", MONs are the only ones really
sticky with the IP
- Ensure ceph.conf has the new MON IPs and network IPs
- Start MONs with new monmap, then start OSDs

> No, certain ips will be visible in the databases, and those will  
not change.

I'm not sure where old IPs will be still visible, could you clarify
that, please?


Well, I've just reacted to all the text at the beginning of
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
including the title "the messy way". If the cluster is clean I see no
reason for doing brain surgery on monmaps
just to "save" a few minutes of redoing correctly from scratch. What
if you miss some part, some command gives you an error
you really aren't comfortable with, something doesn't really feel
right after doing it, then the whole lifetime of that cluster
will be followed by a small nagging feeling that it might have been
that time you followed a guide that tries to talk you out of
doing it that way, for a cluster with no data.

I think that is the wrong way to learn how to run clusters.

--
May the most significant bit of your life be positive.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-25 Thread Janne Johansson
On Mon, 25 Feb 2019 at 13:40, Eugen Block wrote:
> I just moved a (virtual lab) cluster to a different network, it worked
> like a charm.
> In an offline method - you need to:
>
> - set osd noout, ensure there are no OSDs up
> - Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
> ADDRESS", MONs are the only ones really
> sticky with the IP
> - Ensure ceph.conf has the new MON IPs and network IPs
> - Start MONs with new monmap, then start OSDs
>
> > No, certain ips will be visible in the databases, and those will not change.
> I'm not sure where old IPs will be still visible, could you clarify
> that, please?

Well, I've just reacted to all the text at the beginning of
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
including the title "the messy way". If the cluster is clean I see no
reason for doing brain surgery on monmaps
just to "save" a few minutes of redoing correctly from scratch. What
if you miss some part, some command gives you an error
you really aren't comfortable with, something doesn't really feel
right after doing it, then the whole lifetime of that cluster
will be followed by a small nagging feeling that it might have been
that time you followed a guide that tries to talk you out of
doing it that way, for a cluster with no data.

I think that is the wrong way to learn how to run clusters.

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-25 Thread Zhenshi Zhou
Hi Eugen,

Thanks for the advice. That helps me a lot :)

Eugen Block wrote on Mon, 25 Feb 2019 at 20:22:

> I just moved a (virtual lab) cluster to a different network, it worked
> like a charm.
>
> In an offline method - you need to:
>
> - set osd noout, ensure there are no OSDs up
> - Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
> ADDRESS", MONs are the only ones really
> sticky with the IP
> - Ensure ceph.conf has the new MON IPs and network IPs
> - Start MONs with new monmap, then start OSDs
>
> > No, certain ips will be visible in the databases, and those will not
> change.
>
> I'm not sure where old IPs will be still visible, could you clarify
> that, please?
>
> Regards,
> Eugen
>
>
> [1] http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/
>
>
> Quoting Janne Johansson:
>
> > On Mon, 25 Feb 2019 at 12:33, Zhenshi Zhou wrote:
> >> I deployed a new cluster(mimic). Now I have to move all servers
> >> in this cluster to another place, with new IP.
> >> I'm not sure if the cluster will run well or not after I modify config
> >> files, include /etc/hosts and /etc/ceph/ceph.conf.
> >
> > No, certain ips will be visible in the databases, and those will not
> change.
> >
> >> Fortunately, the cluster has no data at present. I never encounter
> >> such an issue like this. Is there any suggestion for me?
> >
> > If you recently created the cluster, it should be easy to just
> > recreate it again,
> > using the same scripts so you don't have to repeat yourself as an admin
> since
> > computers are very good at repetitive tasks.
> >
> > --
> > May the most significant bit of your life be positive.
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-25 Thread Eugen Block
I just moved a (virtual lab) cluster to a different network, it worked  
like a charm.


In an offline method - you need to:

- set osd noout, ensure there are no OSDs up
- Change the MONs' IPs, see the bottom of [1] "CHANGING A MONITOR’S IP
ADDRESS"; MONs are the only ones really sticky with the IP
- Ensure ceph.conf has the new MON IPs and network IPs
- Start MONs with new monmap, then start OSDs
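
Roughly, and only as a sketch (the mon ID "mon1" and the new address
192.168.10.11 are made up here; adapt to your own IDs/IPs and repeat the
monmap edit for every mon), the above boils down to:

ceph osd set noout
# stop the mon and osd daemons, then rewrite the monmap on each mon host:
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 192.168.10.11:6789 /tmp/monmap
ceph-mon -i mon1 --inject-monmap /tmp/monmap
# update mon_host / public_network (and cluster_network, if set) in
# /etc/ceph/ceph.conf on every node, start the MONs, then the OSDs, then:
ceph osd unset noout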


No, certain ips will be visible in the databases, and those will not change.


I'm not sure where old IPs will be still visible, could you clarify  
that, please?


Regards,
Eugen


[1] http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/


Quoting Janne Johansson:


On Mon, 25 Feb 2019 at 12:33, Zhenshi Zhou wrote:

I deployed a new cluster(mimic). Now I have to move all servers
in this cluster to another place, with new IP.
I'm not sure if the cluster will run well or not after I modify config
files, include /etc/hosts and /etc/ceph/ceph.conf.


No, certain ips will be visible in the databases, and those will not change.


Fortunately, the cluster has no data at present. I never encounter
such an issue like this. Is there any suggestion for me?


If you recently created the cluster, it should be easy to just
recreate it again,
using the same scripts so you don't have to repeat yourself as an admin since
computers are very good at repetitive tasks.

--
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph migration

2019-02-25 Thread Janne Johansson
On Mon, 25 Feb 2019 at 12:33, Zhenshi Zhou wrote:
> I deployed a new cluster (mimic). Now I have to move all servers
> in this cluster to another place, with new IPs.
> I'm not sure if the cluster will run well or not after I modify config
> files, including /etc/hosts and /etc/ceph/ceph.conf.

No, certain ips will be visible in the databases, and those will not change.
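
(For reference: those addresses live primarily in the monmap. "ceph mon dump"
shows what is recorded there, and "monmaptool --print" gives the same view of
an extracted map. Output sketched from memory, names and IPs made up:)

# ceph mon dump
epoch 1
...
0: 10.0.0.11:6789/0 mon.ceph-mon1
1: 10.0.0.12:6789/0 mon.ceph-mon2
2: 10.0.0.13:6789/0 mon.ceph-mon3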

> Fortunately, the cluster has no data at present. I never encounter
> such an issue like this. Is there any suggestion for me?

If you recently created the cluster, it should be easy to just
recreate it again,
using the same scripts so you don't have to repeat yourself as an admin since
computers are very good at repetitive tasks.

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph migration to AWS

2015-05-06 Thread Saverio Proto
Why don't you just use AWS S3 directly, then?

Saverio

2015-04-24 17:14 GMT+02:00 Mike Travis mike.r.tra...@gmail.com:
 To those interested in a tricky problem,

 We have a Ceph cluster running at one of our data centers. One of our
 client's requirements is to have them hosted at AWS. My question is: How do
 we effectively migrate our data on our internal Ceph cluster to an AWS Ceph
 cluster?

 Ideas currently on the table:

 1. Build OSDs at AWS and add them to our current Ceph cluster. Build quorum
 at AWS then sever the connection between AWS and our data center.

 2. Build a Ceph cluster at AWS and send snapshots from our data center to
 our AWS cluster allowing us to migrate to AWS.

 Is this a good idea? Suggestions? Has anyone done something like this
 before?

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph migration to AWS

2015-05-04 Thread Kyle Bader
 To those interested in a tricky problem,

 We have a Ceph cluster running at one of our data centers. One of our
 client's requirements is to have them hosted at AWS. My question is: How do
 we effectively migrate our data on our internal Ceph cluster to an AWS Ceph
 cluster?

 Ideas currently on the table:

 1. Build OSDs at AWS and add them to our current Ceph cluster. Build quorum
 at AWS then sever the connection between AWS and our data center.

I would highly discourage this.

 2. Build a Ceph cluster at AWS and send snapshots from our data center to
 our AWS cluster allowing us to migrate to AWS.

This sounds far more sensible. I'd look at the I2 (iops) or D2
(density) class instances, depending on use case.
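
A rough sketch of what that could look like with RBD snapshots (the pool/image
name and the "aws-gw" host are placeholders, not a tested recipe):

# one-time full copy, then mark the common baseline on both sides
rbd snap create rbd/vm-disk1@base
rbd export rbd/vm-disk1@base - | ssh aws-gw rbd import - rbd/vm-disk1
ssh aws-gw rbd snap create rbd/vm-disk1@base

# repeated incremental catch-ups until the final cut-over
rbd snap create rbd/vm-disk1@sync1
rbd export-diff --from-snap base rbd/vm-disk1@sync1 - | \
    ssh aws-gw rbd import-diff - rbd/vm-disk1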

-- 

Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph migration to AWS

2015-05-04 Thread Christian Balzer
On Mon, 4 May 2015 11:21:12 -0700 Kyle Bader wrote:

  To those interested in a tricky problem,
 
  We have a Ceph cluster running at one of our data centers. One of our
  client's requirements is to have them hosted at AWS. My question is:
  How do we effectively migrate our data on our internal Ceph cluster to
  an AWS Ceph cluster?
 
  Ideas currently on the table:
 
  1. Build OSDs at AWS and add them to our current Ceph cluster. Build
  quorum at AWS then sever the connection between AWS and our data
  center.
 
 I would highly discourage this.
 
Indeed, lest latency eat your babies and all that.

  2. Build a Ceph cluster at AWS and send snapshots from our data center
  to our AWS cluster allowing us to migrate to AWS.
 
 This sounds far more sensible. I'd look at the I2 (iops) or D2
 (density) class instances, depending on use case.
 
It might still be a fool's errand; take a look at the recent/current thread
here called "long blocking with writes on rbds".

I'd make 200% sure that AWS is a platform that Ceph can be operated on
normally before going ahead with any production level projects.

Christian
-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com