> On 13 June 2016 at 16:07, George Shuklin wrote:
>
>
> Hello.
>
> How are objects handled in rbd? If a user writes 16k into an RBD image
> with a 4MB object size, how much will be written on the OSDs? 16k x
> replication or 4MB x replication (+journals for both
Hello.
How are objects handled in rbd? If a user writes 16k into an RBD image
with a 4MB object size, how much will be written on the OSDs? 16k x
replication or 4MB x replication (+journals for both cases)?
Thanks.
The notice about image format 1 being deprecated was somewhat hidden in the
release notes. Displaying that message when opening an existing format 1
image is overkill and should be removed (at least until we come up with
some sort of online migration tool in a future Ceph release).
One more thing:
I haven't seen anything regarding the following message:
# rbd lock list 25091
2016-04-22 19:39:31.523542 7fd199d57700 -1 librbd::image::OpenRequest: RBD
image format 1 is deprecated. Please copy this image to image format 2.
Is it something that I should worry about?
---
Diego
Yeah, I followed the release notes.
Everything is working; I just hit this issue until I enabled the services
individually.
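For the record, enabling them individually was roughly the following (the unit names are the standard systemd ones; the hostname and daemon ids are placeholders for whatever runs on each node):
systemctl enable ceph-mon@$(hostname -s)
systemctl start ceph-mon@$(hostname -s)
# and likewise ceph-osd@<id> on the OSD nodes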
Tks
---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
2016-04-22 12:24 GMT-03:00 Vasu Kulkarni :
> Hope you followed the release
Hope you followed the release notes and are on 0.94.4 or above
http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer
1) upgrade (ensure you don't have a user 'ceph' beforehand)
2) stop the service
/etc/init.d/ceph stop (since you are on centos/hammer)
3) change ownership
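e.g. for step 3, roughly (these are the default paths; adjust if your layout differs):
chown -R ceph:ceph /var/lib/ceph
chown -R ceph:ceph /var/log/ceph
# then start the daemons again under the new release (systemd units / ceph.target)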
Hello, I've upgraded my hammer cluster with the following steps:
Running CentOS 7.1
upgrade ceph-deploy
ceph-deploy install --release hammer mon{0..2}
After that I couldn't start the mon service:
systemctl start ceph.target (no errors at all, it just doesn't get the daemon
running).
I managed to
On Fri, 25 Mar 2016 14:14:37 -0700 Bob R wrote:
> Mike,
>
> Recovery would be based on placement groups and those degraded groups
> would only exist on the storage pool(s) rather than the cache tier in
> this scenario.
>
Precisely.
They are entirely different entities.
There may be partially
Mike,
Recovery would be based on placement groups and those degraded groups would
only exist on the storage pool(s) rather than the cache tier in this
scenario.
Bob
On Fri, Mar 25, 2016 at 8:30 AM, Mike Miller
wrote:
> Hi,
>
> in case of a failure in the storage tier,
Hi,
in case of a failure in the storage tier, say single OSD disk failure or
complete system failure with several OSD disks, will the remaining cache
tier (on other nodes) be used for rapid backfilling/recovering first
until it is full? Or is backfill/recovery done directly to the storage
om>; ceph-users
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Question: replacing all OSDs of one node in 3node
cluster
As far as I know you can do it in two ways (assuming you have a pool size of 3
on all 3 nodes with min_size 2 to still have access to data):
1. Set noout
Hi Ceph users
This is my first post on this mailing list. Hope it's the correct one. Please
redirect me to the right place in case it is not.
I am running a small (3 nodes with 3 OSD and 1 monitor on each of them) Ceph
cluster.
Guess what, it is used as Cinder/Glance/Nova RBD storage for
Grüezi Daniel,
my first question would be: What's your pool size / min_size?
ceph osd pool get pool-name
It is probably 3 (default size). If you want to have healthy state
again with only 2 nodes (all the OSDs on node 3 are down), you have to
set your pool size to 2:
ceph osd pool set pool-name
Hi Daniel,
oops, wrong copy paste, here are the correct commands:
ceph osd pool get pool-name size
ceph osd pool set pool-name size 2
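And if you also need I/O to keep flowing with only a single remaining copy, min_size may have to come down as well; only do this knowingly ('pool-name' is a placeholder as above):
ceph osd pool get pool-name min_size
ceph osd pool set pool-name min_size 1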
On Wed, Feb 10, 2016 at 6:27 PM, Ivan Grcic wrote:
> Grüezi Daniel,
>
> my first question would be: What's your pool size / min_size?
>
> ceph
As far as I know you can do it in two ways (assuming you have a pool size
of 3 on all 3 nodes with min_size 2 to still have access to data):
1. Set noout so the cluster does not start rebalancing. Reinstall the OS on
the faulty node and redeploy the node with all keys and conf files (either
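The noout part of that is simply (a sketch; set it before taking the node down and clear it once the node's OSDs are back in):
ceph osd set noout
# ... reinstall the OS, restore ceph.conf and the keyrings, bring the OSDs back up ...
ceph osd unset noout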
Hi all,
I'm having some issues while trying to run the osd activate command with
the ceph-deploy tool (1.5.28); the osd prepare command ran fine, but then...
osd: sdf1
journal: /dev/sdc1
$ ceph-deploy osd activate cibn01:sdf1:/dev/sdc1
[ceph_deploy.conf][DEBUG ] found configuration file at:
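For comparison, the sequence I would expect for that layout is roughly this (same host and devices as above; zap wipes the data disk, so only run it on a disk you mean to reinitialise):
ceph-deploy disk zap cibn01:sdf
ceph-deploy osd prepare cibn01:sdf:/dev/sdc1
ceph-deploy osd activate cibn01:sdf1:/dev/sdc1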
Hi, experts and supporters,
I am new to Ceph, so the question may look simple and stupid sometimes.
I want to create a Ceph cluster with the Reed-Solomon RAID 6 algorithm;
Jerasure has the plugin "reed_sol_r6_op",
but it seems I can't bind the pool to the OSDs.
the steps:
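For reference, the way I understand such a pool is normally created is roughly the following (the profile/pool names and k=4 are only an illustration; reed_sol_r6_op requires m=2 as far as I know):
ceph osd erasure-code-profile set rs-r6 plugin=jerasure technique=reed_sol_r6_op k=4 m=2
ceph osd pool create ecpool 128 128 erasure rs-r6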
> Thanks for your reply. Why not rebuild the object map when the object-map feature is
> enabled?
>
> Cheers,
> xinxin
>
My initial motivation was to avoid a potentially lengthy rebuild when enabling
the feature. Perhaps that option could warn you to rebuild the object map
after it's been enabled.
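In that workflow the manual sequence would be roughly (pool/image names are placeholders, and exclusive-lock has to be enabled before object-map):
rbd feature enable rbd/myimage exclusive-lock
rbd feature enable rbd/myimage object-map
rbd object-map rebuild rbd/myimage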
> Hi Jason dillaman
> Recently I worked on the feature http://tracker.ceph.com/issues/13500 , when
> I read the code about librbd, I was confused by RBD_FLAG_OBJECT_MAP_INVALID
> flag.
> When I create an rbd with "--image-features = 13", we enable the object-map
> feature without setting
Thanks for your reply. Why not rebuild the object map when the object-map feature is
enabled?
Cheers,
xinxin
-Original Message-
From: Jason Dillaman [mailto:dilla...@redhat.com]
Sent: Tuesday, October 27, 2015 9:20 PM
To: Shu, Xinxin
Cc: ceph-users
Subject: Re: Question about rbd
Hello,
There are of course a number of threads in the ML archives about things
like this.
On Sat, 24 Oct 2015 17:48:35 +0200 Mike Miller wrote:
> Hi,
>
> as I am planning to set up a ceph cluster with 6 OSD nodes with 10
> harddisks in each node, could you please give me some advice about
>
Hi,
as I am planning to set up a ceph cluster with 6 OSD nodes with 10
harddisks in each node, could you please give me some advice about
hardware selection? CPU? RAM?
I am planning a 10 GBit/s public and a separate 10 GBit/s private network.
For a smaller test cluster with 5 OSD nodes and 4
The move journal, partition resize, grow file system approach would
work nicely if the spare capacity were at the end of the disk.
Unfortunately, the gdisk (0.8.1) end of disk location bug caused the
journal placement to be at the 800GB mark, leaving the largest remaining
partition at the end of
So I just realized I had described the partition error incorrectly in my
initial post. The journal was placed at the 800GB mark leaving the 2TB data
partition at the end of the disk. (See my follow-up to Lionel for details.)
I'm working to correct that so I have a single large partition the
Hello,
On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote:
> The move journal, partition resize, grow file system approach would
> work nicely if the spare capacity were at the end of the disk.
>
That shouldn't matter, you can "safely" lose your journal in controlled
circumstances.
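The controlled way looks roughly like this for a filestore OSD (osd id 12 is just an example; noout keeps the cluster from rebalancing in the meantime):
ceph osd set noout
service ceph stop osd.12
ceph-osd -i 12 --flush-journal
# ... move or repartition the journal device here ...
ceph-osd -i 12 --mkjournal
service ceph start osd.12
ceph osd unset noout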
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
My understanding of growing file systems is the same as yours, it can
only grow at the end not the beginning. In addition to that, having
partition 2 before partition 1 just cries to me to have it fixed, but
that is just aesthetic.
Because the
Christian,
Thanks for the feedback.
I guess I'm wondering about step 4 "clobber partition, leaving data
intact and grow partition and the file system as needed".
My understanding of xfs_growfs is that the free space must be at the end
of the existing file system. In this case the existing
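For the simple case where the free space does sit after the data partition, I gather the sequence would be roughly this (device and partition numbers are only an example, and the recreated partition must start at exactly the same sector):
umount /var/lib/ceph/osd/ceph-12
sgdisk --delete=2 /dev/sdb
sgdisk --new=2:<same start sector>:0 /dev/sdb
partprobe /dev/sdb
mount /dev/sdb2 /var/lib/ceph/osd/ceph-12
xfs_growfs /var/lib/ceph/osd/ceph-12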
On 16/09/2015 01:21, John-Paul Robinson wrote:
> Hi,
>
> I'm working to correct a partitioning error from when our cluster was
> first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
> partitions for our OSDs, instead of the 2.8TB actually available on
> disk, a 29% space hit.
Hi,
I'm working to correct a partitioning error from when our cluster was
first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
partitions for our OSDs, instead of the 2.8TB actually available on
disk, a 29% space hit. (The error was due to a gdisk bug that
mis-computed the end of
-
From: "Goncalo Borges" <gonc...@physics.usyd.edu.au>
To: "Shinobu Kinjo" <ski...@redhat.com>, "John Spray" <jsp...@redhat.com>
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 15, 2015 12:39:57 PM
Subject: Re: [ceph-users] Question on c
Hello John...
Thank you for the replies. I do have some comments inline.
Bear with me while I give you a bit of context. Questions will appear
at the end.
1) I am currently running ceph 9.0.3 and I have installed it to test the
cephfs recovery tools.
2) I've created a situation where
> In your procedure, the umount problems have nothing to do with
> corruption. It's (sometimes) hanging because the MDS is offline. If
How did you notice that the MDS was offline?
Is it just because the ceph client could not unmount the filesystem, or something else?
I would like to see the logs from the MDS and OSDs.
On Wed, Sep 9, 2015 at 2:31 AM, Goncalo Borges
wrote:
> Dear Ceph / CephFS gurus...
>
> Bear with me while I give you a bit of context. Questions will appear
> at the end.
>
> 1) I am currently running ceph 9.0.3 and I have installed it to test the
> cephfs
>> Finally the questions:
>>
>> a./ In a situation like the one described above, how can we safely terminate
>> cephfs in the clients? I have had situations where umount simply hangs and
>> there is no real way to unblock the situation unless I reboot the client. If
>> we have hundreds of clients,
On Thu, Sep 10, 2015 at 7:44 AM, Shinobu Kinjo wrote:
>>> Finally the questions:
>>>
>>> a./ In a situation like the one described above, how can we safely terminate
>>> cephfs in the clients? I have had situations where umount simply hangs and
>>> there is no real way to
To: "Goncalo Borges" <gonc...@physics.usyd.edu.au>
Cc: ceph-users@lists.ceph.com
Sent: Thursday, September 10, 2015 8:49:46 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
On Wed, Sep 9, 2015 at 2:31 AM, Goncalo Borges
<gonc...@physics.usyd.edu.au> wrote:
Did you unmount the filesystem using:
umount -l
Shinobu
On Wed, Sep 9, 2015 at 4:31 PM, Goncalo Borges
wrote:
> Dear Ceph / CephFS gurus...
>
> Bear with me while I give you a bit of context. Questions will
> appear at the end.
>
> 1) I am currently running
@lists.ceph.com>
Sent: Wednesday, September 9, 2015 5:28:38 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
Did you try to identify what kind of processes were accessing the filesystem using
fuser or lsof, and then kill them?
If not, you should do that first.
Shinobu
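Something along these lines (the mountpoint and pid are placeholders):
fuser -vm /mnt/cephfs
lsof /mnt/cephfs
kill <pid>
umount /mnt/cephfs
# and only as a last resort: umount -l /mnt/cephfs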
- Original Message -
Sent: Wednesday, September 9, 2015 5:04:23 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
Hi Shinobu
> Did you unmount filesystem using?
>
> umount -l
Yes!
Goncalo
>
> Shinobu
>
> On Wed, Sep 9, 2015 at 4:31 PM, Goncalo Borges
> <gonc...@physics.usyd.edu.au
Dear Ceph / CephFS gurus...
Bear with me while I give you a bit of context. Questions will
appear at the end.
1) I am currently running ceph 9.0.3 and I have installed it to test the
cephfs recovery tools.
2) I've created a situation where I've deliberately lost
some
Sent: Wednesday, September 9, 2015 5:28:38 PM
Subject: Re: [ceph-users] Question on cephfs recovery tools
Did you try to identify what kind of processes were accessing the
filesystem using fuser or lsof, and then kill them?
If not, you should do that first.
Shinobu
- Original Message -
From:
>> # cephfs-data-scan scan_extents cephfs_dt
>> # cephfs-data-scan scan_inodes cephfs_dt
>>
>> # cephfs-data-scan scan_extents --force-pool cephfs_mt
>> (doesn't seem to work)
>>
>> e./ After running the cephfs tools, everything seemed exactly in
>
just a way to prepare myself for DC cases,
which I am certain will exist at some point.
Cheers
Goncalo
Shinobu
- Original Message -
From: gonc...@physics.usyd.edu.au
To: "Shinobu Kinjo" <ski...@redhat.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
S
Maybe it's just a precision problem?
I calculate the durability from PL(*) columns with the formula:
1-PL(site)-PL(copy)-PL(NRE).
Result:
2-cp is 0.99896562
3-cp is 0.99900049
Both of them approximate to 99.9%.
Actually the model result is 99.900%. Maybe the author wants us to ignore
the
I haven't looked at the internals of the model, but the PL(site)
you've pointed out is definitely the crux of the issue here. In the
first grouping, it's just looking at the probability of data loss due
to failing disks, and as the copies increase that goes down. In the
second grouping, it's
Hi friends:
I put a file (ceph_0.94.2-1.tar.gz, size 23812K) into ceph:
ceph@node110:~$ s3cmd ls
2015-08-27 06:13 s3://bkt-key
ceph@node110:~$ s3cmd put ceph_0.94.2-1.tar.gz s3://bkt-key
WARNING: Module python-magic is not available. Guessing MIME types based on
file extensions.
On Thu, Aug 27, 2015 at 2:54 AM, Goncalo Borges
gonc...@physics.usyd.edu.au wrote:
Hey guys...
1./ I have a simple question regarding the appearance of degraded PGs.
First, for reference:
a. I am working with 0.94.2
b. I have 32 OSDs distributed in 4 servers, meaning that I have 8 OSDs per
Hey Greg...
Thanks for the reply.
At this point the cluster recovered, so I am no longer in that
situation. I'll try to go back, reproduce and post the pg query for one
of those degraded PGs later on.
Cheers
Goncalo
On 08/27/2015 10:02 PM, Gregory Farnum wrote:
On Thu, Aug 27, 2015 at
Hey guys...
1./ I have a simple question regarding the appearance of degraded PGs.
First, for reference:
a. I am working with 0.94.2
b. I have 32 OSDs distributed in 4 servers, meaning that I have 8
OSDs per server.
c. Our cluster is set with 'osd pool default size = 3' and 'osd
Hi,
I have cross-posted this issue here and on GitHub,
but no response yet.
Any advice?
On Mon, Aug 10, 2015 at 10:21 AM, dahan dahan...@gmail.com wrote:
Hi all, I have tried the reliability model:
https://github.com/ceph/ceph-tools/tree/master/models/reliability
I run the tool with default
Hi,
Maybe this seems like a strange question, but I could not find this info in the
docs, so here is my question:
for a ceph cluster you need OSD daemons and monitor daemons,
and on a host you can run several OSD daemons (best one per drive, as I read in the
docs).
But now my
yes. The issue is resource sharing as usual: the MONs will use disk I/O,
memory and CPU. If the cluster is small (test?) then there's no problem in
using the same disks. If the cluster starts to get bigger you may want to
dedicate resources (e.g. the disk for the MONs isn't used by an OSD). If
the
Hi Mika,
Feature request created:
https://bugzilla.redhat.com/show_bug.cgi?id=1240888
On Mon, Jul 6, 2015 at 4:21 PM, Vickie ch mika.leaf...@gmail.com wrote:
Dear Cephers,
When a bucket is created, the default quota setting is unlimited. Is
there any setting that can change this? That way the admin
Dear Cephers,
When a bucket is created, the default quota setting is unlimited. Is
there any setting that can change this? That way the admin would not need to change bucket
quotas one by one.
Best wishes,
Mika
Hello,
When storing large, multipart objects in the Ceph Object Store (~100 GB and
more), we have noticed that HEAD calls against the rados gateway for these
objects are excessively slow - in fact, they are about the same as doing a GET
on the object. Looking at the logs while this is
Sent: Monday, April 13, 2015 12:08 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] question about OSD failure detection
Hi, all,
I am new to Ceph and trying to understand how it works and is designed.
One basic question for me is how Ceph OSDs perform failure detection. I
did some searching but cannot get a satisfying
.
Xiaoxi
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Liu,
Ming (HPIT-GADSC)
Sent: Monday, April 13, 2015 12:08 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] question about OSD failure detection
Hi, all,
I am
Hi, all,
I am new to Ceph and trying to understand how it works and is designed.
One basic question for me is how Ceph OSDs perform failure detection. I
did some searching but cannot get a satisfying answer, so I am asking here and hope
someone can kindly help me.
The documentation says the OSD will
Thank you Craig, that's the kind of answer I was expecting :) Actually we
have had an outage and 1 server never came up, but that was because of human error,
and 1 OSD from another node was down, so we couldn't even access the filesystem,
but in the end I saw that 90% of the nodes didn't crash or
Hi!
We have experienced several blackouts on our small ceph cluster.
The most annoying problem is time desync just after a blackout: the mons do not
start working until time is synced, and after resync and a manual restart of the monitors
some PGs can be stuck in inactive or peering state for a significant
I'm not a CephFS user, but I have had a few cluster outages.
Each OSD has a journal, and Ceph ensures that a write is in all of the
journals (primary and replicas) before it acknowledges the write. If an
OSD process crashes, it replays the journal on startup, and recovers the
write.
I've lost
Hi everyone, I am ready to launch ceph in production but there is one thing
that keeps on my mind... If there was a blackout where all the ceph nodes went
off, what would really happen to the filesystem? Would it get corrupted? Or
does ceph have any kind of mechanism to survive something like
Hello,
Is there some way to make the client (via the RADOS API or something like
that) get a notification of an event (for example, an OSD going down)
that happened in the cluster?
--
Den
25, 2015 7:12:01 PM
Subject: [ceph-users] Question regarding rbd cache
Hi folks,
I am curious about how RBD cache works, whether it caches and writes back
entire objects. For example, if my VM images are stored with order 23 (8MB
blocks), would a 64MB rbd cache only be able to cache 8 objects
Hi all,
In my reading on the net about various implementations of Ceph, I came
across this website blog page (really doesn't give a lot of good
information but caused me to wonder):
http://avengermojo.blogspot.com/2014/12/cubieboard-cluster-ceph-test.html
near the bottom, the person did a rados
?Development
Subject: [ceph-users] question about rgw create bucket
When I create a bucket, why does rgw create 2 objects in the domain root pool?
One object stores struct RGWBucketInfo and the other stores struct
RGWBucketEntryPoint.
And when I delete the bucket, why does rgw only delete one object?
When I create a bucket, why does rgw create 2 objects in the domain root pool?
One object stores struct RGWBucketInfo and the other stores struct
RGWBucketEntryPoint.
And when I delete the bucket, why does rgw only delete one object?
Hi folks,
I am curious about how RBD cache works, whether it caches and writes back
entire objects. For example, if my VM images are stored with order 23 (8MB
blocks), would a 64MB rbd cache only be able to cache 8 objects at a time?
Or does it work in a more granular fashion? Also, when a
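For context, these are the client-side cache knobs I am looking at (a ceph.conf sketch; the 64 MB size matches my example above and the dirty thresholds are only illustrative, not defaults):
[client]
rbd cache = true
rbd cache size = 67108864
rbd cache max dirty = 50331648
rbd cache target dirty = 33554432
rbd cache writethrough until flush = true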
Hello,
What exactly does the parameter 'bool exclusive' mean in the int
librados::IoCtxImpl::create(const object_t &oid, bool exclusive)?
I can't find any doc to describe this :-(
--
Den
On 2015-02-13 17:54, Dennis Chen wrote:
Hello,
What exactly does the parameter 'bool exclusive' mean in the int
librados::IoCtxImpl::create(const object_t &oid, bool exclusive)?
I can't find any doc to describe this :-(
From the documentation in librados.h: it should be set to either
Hello,
I wrote a ceph client using the rados lib to execute a function upon the object.
CLIENT SIDE CODE
===
int main()
{
...
    strcpy(in, "from client");
    /* call method "devctl_op" of the object class "devctl" on the object */
    err = rados_exec(io, objname, "devctl", "devctl_op", in,
                     strlen(in), out, 128);
    if (err < 0) {
        fprintf(stderr,
I take back the question, because I just found that for a successful
write operation in the class, there is *no* data in the out buffer...
On Wed, Feb 4, 2015 at 5:44 PM, Dennis Chen kernel.org@gmail.com wrote:
Hello,
I wrote a ceph client using the rados lib to execute a function upon the object.
CLIENT
Hello Sudarshan,
Thanks, it should be useful when I want to designate a specific OSD as
primary ;-)
On Mon, Feb 2, 2015 at 3:50 PM, Sudarshan Pathak sushan@gmail.com wrote:
Hello Dennis,
You can create a CRUSH rule to select one of the OSDs as primary, like:
rule ssd-primary {
03, 2015 11:30 AM
To: ceph-users
Subject: [ceph-users] Question about CRUSH rule set parameter min_size
max_size
Hi,
The CRUSH map has two parameters, min_size and max_size.
The explanation for min_size is: If a pool makes fewer replicas than this number,
CRUSH will NOT select this rule
Hi,
The CRUSH map has two parameters, min_size and max_size.
The explanation for min_size is *If a pool makes fewer replicas than this
number, CRUSH will NOT select this rule*.
For max_size it is *If a pool makes more replicas than this number, CRUSH
will NOT select this rule*.
The default setting of
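So, as I read it, a pool whose replica count falls inside [min_size, max_size] may use the rule; for instance with a stock-looking rule like the one below (just an illustration) a pool of size 3 qualifies, while a pool of size 11 would not:
rule replicated_example {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}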
Hello,
If I write 2 different objects, e.g. john and paul, respectively, to
the same pool, like testpool, in the cluster, is the primary OSD
calculated by CRUSH for the 2 objects the same?
--
Den
Hi
You can verify the exact mapping using the following command: ceph osd map
{poolname} {objectname}
Check page http://docs.ceph.com/docs/master/man/8/ceph for the ceph command.
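With your two example objects that would be (the output shows the PG id and the up/acting OSD sets, so you can compare the primaries directly):
ceph osd map testpool john
ceph osd map testpool paul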
Cheers
JC
While moving. Excuse unintended typos.
On Feb 1, 2015, at 08:04, Loic Dachary l...@dachary.org wrote:
On 01/02/2015 14:47, Dennis Chen wrote:
Hello,
If I write 2 different objects, eg, john and paul respectively to
a same pool like testpool in the cluster, is the primary OSD
calculated by CRUSH for the 2 objects the same?
Hi,
CRUSH is likely to place john on an OSD and paul on another
On Mon, Feb 2, 2015 at 12:04 AM, Loic Dachary l...@dachary.org wrote:
On 01/02/2015 14:47, Dennis Chen wrote:
Hello,
If I write 2 different objects, eg, john and paul respectively to
a same pool like testpool in the cluster, is the primary OSD
calculated by CRUSH for the 2 objects the
Thanks, I've got the answer with the 'ceph osd map ...' command
On Mon, Feb 2, 2015 at 12:50 AM, Jean-Charles Lopez jelo...@redhat.com wrote:
Hi
You can verify the exact mapping using the following command: ceph osd map
{poolname} {objectname}
Check page
Hello Dennis,
You can create a CRUSH rule to select one of the OSDs as primary, like:
rule ssd-primary {
        ruleset 5
        type replicated
        min_size 5
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        step take platter
        step chooseleaf firstn -1 type host
        step emit
}
BTW, you can make CRUSH always choose the same OSD as primary.
Regards,
Sudarshan
On Mon, Feb 2, 2015 at 9:26 AM, Dennis Chen kernel.org@gmail.com
wrote:
Thanks, I've got the answer with the 'ceph osd map ...' command
On Mon, Feb 2, 2015 at 12:50 AM, Jean-Charles Lopez
Hi Sudarshan,
Any hints on how to do that?
On Mon, Feb 2, 2015 at 1:03 PM, Sudarshan Pathak sushan@gmail.com wrote:
BTW, you can make crush to always choose the same OSD as primary.
Regards,
Sudarshan
On Mon, Feb 2, 2015 at 9:26 AM, Dennis Chen kernel.org@gmail.com
wrote:
Thanks,
Hello,
I found that the documentation on ceph class usage is very sparse; below is the
only one which almost addresses my needs--
http://ceph.com/rados/dynamic-object-interfaces-with-lua/
But some questions still confuse me:
1. How do I make the OSD load the class lib? Or what's the process
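From what I can gather so far, the OSD looks for class plugins in its class directory and loads them on first use, so something like the following should do it (the path is just the common default, and can be overridden with 'osd class dir' in ceph.conf):
cp libcls_devctl.so /usr/lib64/rados-classes/
# then restart the OSD daemons so the class can be loaded on first use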
Sorry for the late response, been backed up with other issues. It
certainly looks like a promising lead, I'll take a closer look at it.
Thanks!
Yehuda
On Fri, Jan 9, 2015 at 1:05 AM, baijia...@126.com baijia...@126.com wrote:
I patched http://tracker.ceph.com/issues/8452
and ran the s3 test suite
I patched http://tracker.ceph.com/issues/8452
and ran the s3 test suite, and there is still an error;
err log: ERROR: failed to get obj attrs,
obj=test-client.0-31zepqoawd8dxfa-212:_multipart_mymultipart.2/0IQGoJ7hG8ZtTyfAnglChBO79HUsjeC.meta
ret=-2
I found code that may have a problem:
when the function exec
hi, all
I have installed the calamari server, calamari client and diamond on a CentOS server.
Then I ran the following command:
{code}
[root@centos65 content]# sudo calamari-ctl initialize
[INFO] Loading configuration..
[INFO] Starting/enabling salt...
[INFO] Starting/enabling postgres...
[INFO]
Can anybody help?
On Dec 1, 2014, at 11:37, mail list louis.hust...@gmail.com wrote:
hi, all
I have installed the calamari server, calamari client and diamond on a CentOS
server.
Then I ran the following command:
{code}
[root@centos65 content]# sudo calamari-ctl initialize
[INFO] Loading
Hi Louis,
the page you mentioned originally is intended as a quick starter guide for
deploying the latest Ceph LTS release and that’s its sole purpose. For specific
and advanced ceph-deploy features and usage, there is a dedicated ceph-deploy
site right here: http://ceph.com/ceph-deploy/docs
hi all,
I want to install ceph using ceph-deploy following
http://docs.ceph.com/docs/master/start/quick-start-preflight/
And I want to use the latest version, giant, so I execute the following
commands:
{code}
louis@louis-Latitude-E5440:~/ceph/my-cluster$ wget -q -O-
Hi Louis,
ceph-deploy install --release=giant admin-node
Cheers
JC
On Nov 26, 2014, at 20:38, mail list louis.hust...@gmail.com wrote:
ceph-deploy install admin-node
Thanks JC, it works, and I think ceph should update the manual.
On Nov 27, 2014, at 13:59, Jean-Charles LOPEZ jc.lo...@inktank.com wrote:
Hi Louis,
ceph-deploy install --release=giant admin-node
Cheers
JC
On Nov 26, 2014, at 20:38, mail list louis.hust...@gmail.com wrote:
Hi, all
I created an rbd named foo, then mapped it and mounted it on two different machines,
and when I touch a file on machine A, machine B cannot see the new file,
and machine B can also touch a file with the same name!
I want to know: is the rbd the same on machines A and B, or are they actually two
rbds?
Any
: [ceph-users] Question about mount the same rbd in different machine
Hi, all
I created an rbd named foo, then mapped it and mounted it on two different machines,
and when I touch a file on machine A, machine B cannot see the new file,
and machine B can also touch a file with the same name!
I want to know
without
a cluster aware file system.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of mail
list
Sent: Tuesday, November 25, 2014 7:30 PM
To: ceph-us...@ceph.com
Subject: [ceph-users] Question about mount the same rbd in different machine
Sent: Tuesday, November 25, 2014 8:11 PM
To: Michael Kuriger
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Question about mount the same rbd in different machine
hi,
But I have touched the same file on the two machines under the same rbd with no
error.
Will it cause some problem, or is it just
: [ceph-users] Question about mount the same rbd in different
machine
hi,
But I have touched the same file on the two machines under the same rbd with
no error.
Will it cause some problem, or is it just not suggested but doable?
On Nov 26, 2014, at 12:08, Michael Kuriger mk7...@yp.com wrote
[mailto:louis.hust...@gmail.com]
Sent: Tuesday, November 25, 2014 8:27 PM
To: Michael Kuriger
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Question about mount the same rbd in different machine
Hi Michael,
I wrote the same file with different content, and there was no hint of an
overwrite, so when
file system.
Hope that helps!
-Mike
-Original Message-
From: mail list [mailto:louis.hust...@gmail.com]
Sent: Tuesday, November 25, 2014 8:27 PM
To: Michael Kuriger
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Question about mount the same rbd in different
machine