Re: [ceph-users] pgs not deep-scrubbed for 86400

2017-07-19 Thread Gencer W. Genç
I have exactly this issue (or something like it) at the moment. Mine says "906 pgs not
scrubbed for 86400", and the number is decreasing, but only very slowly.

 

I cannot find any documentation for the exact phrase "pgs not scrubbed for" on
the web other than this thread.

 

The log looks like this:

 

2017-07-19 15:05:10.125041 [INF]  3.5e scrub ok 

2017-07-19 15:05:10.123522 [INF]  3.5e scrub starts 

2017-07-19 15:05:14.613124 [WRN]  Health check update: 914 pgs not scrubbed
for 86400 (PG_NOT_SCRUBBED) 

2017-07-19 15:05:07.433748 [INF]  1.c4 scrub ok

...

 

Should this worry us?
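A few commands that might help to see which PGs are behind and to trigger a scrub by hand (the PG id 3.5e is taken from the log above; exact output and field names can vary a little between releases):

$ ceph health detail | grep 'not scrubbed'
$ ceph pg 3.5e query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'
$ ceph pg scrub 3.5e

If the timestamps are simply older than the scrub interval, the warning should clear on its own as the scheduled scrubs catch up.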

 

Gencer.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
>> Not for 10GbE, but for public vs cluster network, for example:

Applied. Thanks!

>> Then I'm not sure what to expect... probably poor performance with sync 
>> writes on filestore, and not sure what would happen with
>> bluestore...
>> probably much better than filestore though if you use a large block size.

At the moment it looks good, but can you explain a bit more about block size?
(Or a reference page would also work.)
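As a rough illustration of what a larger block size changes, the same amount of data can be written as many small I/Os or a few large ones (the CephFS mount path below is only a placeholder):

$ dd if=/dev/zero of=/mnt/cephfs/testfile bs=4k count=25600 oflag=direct
$ dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=25 oflag=direct

Both runs write 100MB, but the second issues far fewer, larger writes, which is the case where BlueStore is expected to do noticeably better than FileStore.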

Gencer.

-Original Message-
From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de] 
Sent: Tuesday, July 18, 2017 5:59 PM
To: Gencer W. Genç <gen...@gencgiyen.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On 07/18/17 14:10, Gencer W. Genç wrote:
>>> Are you sure? Your config didn't show this.
> Yes. I have dedicated 10GbE network between ceph nodes. Each ceph node has 
> seperate network that have 10GbE network card and speed. Do I have to set 
> anything in the config for 10GbE?
Not for 10GbE, but for public vs cluster network, for example:

> public network = 10.10.10.0/24
> cluster network = 10.10.11.0/24

Mainly this is for replication performance.

And using jumbo frames (high MTU, like 9000, on hosts and higher on
switches) also increases performance a bit (especially on slow CPUs in theory). 
That's also not in the ceph.conf.
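A minimal sketch of the jumbo-frame side, assuming eth1 is the cluster-facing interface and 10.10.11.2 is another node on the cluster network (both names are placeholders):

$ ip link set dev eth1 mtu 9000
$ ping -M do -s 8972 10.10.11.2   # 8972 = 9000 minus 28 bytes of IP/ICMP headers

The ping must succeed without fragmentation, and the switch ports need an equal or higher MTU, otherwise large frames will typically just be dropped.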

>>> What kind of devices are they? did you do the journal test?
> They are not connected via NVMe neither SSD's. Each node has 10x3TB SATA Hard 
> Disk Drives (HDD).
Then I'm not sure what to expect... probably poor performance with sync writes 
on filestore, and not sure what would happen with bluestore...
probably much better than filestore though if you use a large block size.
>
>
> -Gencer.
>
>
> -Original Message-
> From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de]
> Sent: Tuesday, July 18, 2017 2:47 PM
> To: gen...@gencgiyen.com
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Yet another performance tuning for CephFS
>
> On 07/17/17 22:49, gen...@gencgiyen.com wrote:
>> I have a seperate 10GbE network for ceph and another for public.
>>
> Are you sure? Your config didn't show this.
>
>> No they are not NVMe, unfortunately.
>>
> What kind of devices are they? did you do the journal test?
> http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-s
> sd-is-suitable-as-a-journal-device/
>
> Unlike most tests, with ceph journals, you can't look at the load on the 
> device and decide it's not the bottleneck; you have to test it another way. I 
> had some micron SSDs I tested which performed poorly, and that test showed 
> them performing poorly too. But from other benchmarks, and disk load during 
> journal tests, they looked ok, which was misleading.
>> Do you know any test command that i can try to see if this is the max.
>> Read speed from rsync?
> I don't know how you can improve your rsync test.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
>> Are you sure? Your config didn't show this.

Yes. I have a dedicated 10GbE network between the Ceph nodes. Each node has a
separate network with its own 10GbE network card. Do I have to set anything in
the config for 10GbE?

>> What kind of devices are they? did you do the journal test?
They are neither NVMe nor SSDs. Each node has 10x 3TB SATA hard disk drives
(HDDs).


-Gencer.


-Original Message-
From: Peter Maloney [mailto:peter.malo...@brockmann-consult.de] 
Sent: Tuesday, July 18, 2017 2:47 PM
To: gen...@gencgiyen.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On 07/17/17 22:49, gen...@gencgiyen.com wrote:
> I have a seperate 10GbE network for ceph and another for public.
>
Are you sure? Your config didn't show this.

> No they are not NVMe, unfortunately.
>
What kind of devices are they? did you do the journal test?
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

Unlike most tests, with ceph journals, you can't look at the load on the device 
and decide it's not the bottleneck; you have to test it another way. I had some 
micron SSDs I tested which performed poorly, and that test showed them 
performing poorly too. But from other benchmarks, and disk load during journal 
tests, they looked ok, which was misleading.
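The test from that post boils down to a single-threaded O_DSYNC 4k write, roughly like the following, where /dev/sdX is the journal device; it overwrites the device, so only run it on a disk with nothing you care about:

$ fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

Good journal SSDs sustain thousands of IOPS in this test, while many consumer SSDs drop to a few hundred.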
> Do you know any test command that i can try to see if this is the max.
> Read speed from rsync?
I don't know how you can improve your rsync test.
>
> Because I tried one thing a few minutes ago. I opened 4 ssh channel 
> and run rsync command and copy bigfile to different targets in cephfs 
> at the same time. Then i looked into network graphs and i see numbers 
> up to 1.09 gb/s. But why single copy/rsync cannot exceed 200mb/s? What 
> prevents it im really wonder this.
>
> Gencer.
>
> On 2017-07-17 23:24, Peter Maloney wrote:
>> You should have a separate public and cluster network. And journal or 
>> wal/db performance is important... are the devices fast NVMe?
>>
>> On 07/17/17 21:31, gen...@gencgiyen.com wrote:
>>
>>> Hi,
>>>
>>> I located and applied almost every different tuning setting/config 
>>> over the internet. I couldn’t manage to speed up my speed one byte 
>>> further. It is always same speed whatever I do.
>>>
>>> I was on jewel, now I tried BlueStore on Luminous. Still exact same 
>>> speed I gain from cephfs.
>>>
>>> It doesn’t matter if I disable debug log, or remove [osd] section as 
>>> below and re-add as below (see .conf). Results are exactly the same. 
>>> Not a single byte is gained from those tunings. I also did tuning 
>>> for kernel (sysctl.conf).
>>>
>>> Basics:
>>>
>>> I have 2 nodes with 10 OSD each and each OSD is 3TB SATA drive. Each 
>>> node has 24 cores and 64GB of RAM. Ceph nodes are connected via 
>>> 10GbE NIC. No FUSE used. But tried that too. Same results.
>>>
>>> $ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct
>>>
>>> 10+0 records in
>>>
>>> 10+0 records out
>>>
>>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.77219 s, 182 MB/s
>>>
>>> 182MB/s. This is the best speed i get so far. Usually 170~MB/s. Hm..
>>> I get much much much higher speeds on different filesystems. Even 
>>> with glusterfs. Is there anything I can do or try?
>>>
>>> Read speed is also around 180-220MB/s but not higher.
>>>
>>> This is What I am using on ceph.conf:
>>>
>>> [global]
>>>
>>> fsid = d7163667-f8c5-466b-88df-8747b26c91df
>>>
>>> mon_initial_members = server1
>>>
>>> mon_host = 192.168.0.1
>>>
>>> auth_cluster_required = cephx
>>>
>>> auth_service_required = cephx
>>>
>>> auth_client_required = cephx
>>>
>>> osd mount options = rw,noexec,nodev,noatime,nodiratime,nobarrier
>>>
>>> osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier
>>>
>>>
>>> osd_mkfs_type = xfs
>>>
>>> osd pool default size = 2
>>>
>>> enable experimental unrecoverable data corrupting features = 
>>> bluestore rocksdb
>>>
>>> bluestore fsck on mount = true
>>>
>>> rbd readahead disable after bytes = 0
>>>
>>> rbd readahead max bytes = 4194304
>>>
>>> log to syslog = false
>>>
>>> debug_lockdep = 0/0
>>>
>>> debug_context = 0/0
>>>
>>> debug_crush = 0/0
>>

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
Patrick,

I did timing tests. Rsync is not a tool I should trust for speed tests. I
simply did "cp" and extra write tests against the Ceph cluster, and it is very
fast indeed. Rsync copies a 1GB file slowly and takes 5-7 seconds to complete,
while cp does it in 0.901s (not even one second).

So this is a false alarm; Ceph is fast enough. I also ran stress tests (such as
multiple background writes at the same time) and they are very stable too.

Thanks for the heads up to you and all others.

Gencer.

-Original Message-
From: Patrick Donnelly [mailto:pdonn...@redhat.com] 
Sent: Monday, July 17, 2017 11:21 PM
To: gen...@gencgiyen.com
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On Mon, Jul 17, 2017 at 1:08 PM,  <gen...@gencgiyen.com> wrote:
> But lets try another. Lets say i have a file in my server which is 
> 5GB. If i do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> Then i see max. 200 mb/s. I think it is still slow :/ Is this an expected?

Perhaps that is the bandwidth limit of your local device rsync is reading from?
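One way to check that, assuming ./bigfile is the same source file, is to read it locally with the page cache dropped first:

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=./bigfile of=/dev/null bs=4M

If that local read also tops out around 200 MB/s, rsync is being limited by the source disk rather than by CephFS.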

--
Patrick Donnelly

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Gencer W. Genç
I have 3 pools.

 

0 rbd, 1 cephfs_data, 2 cephfs_metadata

 

cephfs_data has a pg_num of 1024; the total PG count is 2113.

 

POOL_NAME       USED   OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD     WR_OPS WR
cephfs_data     4000M  1000    0      2000   0                  0       0        2      0      27443  44472M
cephfs_metadata 11505k 24      0      48     0                  0       0        38     8456k  7384   14719k
rbd             0      0       0      0      0                  0       0        0      0      0      0

 

total_objects1024

total_used   30575M

total_avail  55857G

total_space  55887G

 

 

 

 

 

From: David Turner [mailto:drakonst...@gmail.com] 
Sent: Tuesday, July 18, 2017 2:31 AM
To: Gencer Genç <gen...@gencgiyen.com>; Patrick Donnelly <pdonn...@redhat.com>
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

 

What are your pool settings? That can affect your read/write speeds as much as 
anything in the ceph.conf file.
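Something along these lines dumps them (the pool name comes from the listing above):

$ ceph osd pool ls detail
$ ceph osd pool get cephfs_data all

That shows size, min_size, pg_num/pgp_num, the crush rule and so on, which together with the OSD count largely determine how many disks each write actually touches.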

 

On Mon, Jul 17, 2017, 4:55 PM Gencer Genç <gen...@gencgiyen.com 
<mailto:gen...@gencgiyen.com> > wrote:

I don't think so.

Because I tried one thing a few minutes ago: I opened 4 SSH sessions and ran
rsync to copy the big file to different targets in CephFS at the same time.
Then I looked at the network graphs and saw numbers up to 1.09 GB/s. So why
can't a single copy/rsync exceed 200 MB/s? I really wonder what prevents it.

Gencer.


-Original Message-
From: Patrick Donnelly [mailto:pdonn...@redhat.com <mailto:pdonn...@redhat.com> 
]
Sent: 17 Temmuz 2017 Pazartesi 23:21
To: gen...@gencgiyen.com <mailto:gen...@gencgiyen.com> 
Cc: Ceph Users <ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> >
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On Mon, Jul 17, 2017 at 1:08 PM,  <gen...@gencgiyen.com 
<mailto:gen...@gencgiyen.com> > wrote:
> But lets try another. Lets say i have a file in my server which is 5GB. If i
> do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> Then i see max. 200 mb/s. I think it is still slow :/ Is this an expected?

Perhaps that is the bandwidth limit of your local device rsync is reading from?

--
Patrick Donnelly

___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com> 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread Gencer Genç
I don't think so.

Because I tried one thing a few minutes ago: I opened 4 SSH sessions and ran
rsync to copy the big file to different targets in CephFS at the same time.
Then I looked at the network graphs and saw numbers up to 1.09 GB/s. So why
can't a single copy/rsync exceed 200 MB/s? I really wonder what prevents it.

Gencer.


-Original Message-
From: Patrick Donnelly [mailto:pdonn...@redhat.com] 
Sent: 17 Temmuz 2017 Pazartesi 23:21
To: gen...@gencgiyen.com
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Yet another performance tuning for CephFS

On Mon, Jul 17, 2017 at 1:08 PM,  <gen...@gencgiyen.com> wrote:
> But lets try another. Lets say i have a file in my server which is 5GB. If i
> do this:
>
> $ rsync ./bigfile /mnt/cephfs/targetfile --progress
>
> Then i see max. 200 mb/s. I think it is still slow :/ Is this an expected?

Perhaps that is the bandwidth limit of your local device rsync is reading from?

-- 
Patrick Donnelly

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer

I have a separate 10GbE network for Ceph and another for public traffic.

No they are not NVMe, unfortunately.

Do you know any test command that I can try to see whether this is the maximum
read speed rsync can get?


Because I tried one thing a few minutes ago: I opened 4 SSH sessions and ran
rsync to copy the big file to different targets in CephFS at the same time.
Then I looked at the network graphs and saw numbers up to 1.09 GB/s. So why
can't a single copy/rsync exceed 200 MB/s? I really wonder what prevents it.
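To separate single-stream client limits from what the cluster itself can do, rados bench can drive many parallel writes straight against a pool; a sketch using the (empty) rbd pool, where any scratch pool would do and the benchmark objects are removed at the end:

$ rados bench -p rbd 30 write -t 16 -b 4M --no-cleanup
$ rados bench -p rbd 30 seq -t 16
$ rados -p rbd cleanup

If the parallel numbers get close to the 1 GB/s seen here while a single rsync stays near 200 MB/s, the limit is per-stream latency (sync writes plus replication round trips) rather than raw bandwidth.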


Gencer.

On 2017-07-17 23:24, Peter Maloney wrote:

You should have a separate public and cluster network. And journal or
wal/db performance is important... are the devices fast NVMe?

On 07/17/17 21:31, gen...@gencgiyen.com wrote:


Hi,

I located and applied almost every different tuning setting/config
over the internet. I couldn't manage to improve the speed by even one byte.
It is always the same speed whatever I do.

I was on Jewel; now I tried BlueStore on Luminous. I still get the exact
same speed from CephFS.

It doesn't matter whether I disable the debug logs, or remove the [osd]
section below and re-add it (see .conf). The results are exactly the
same; not a single byte is gained from those tunings. I also did
kernel tuning (sysctl.conf).

Basics:

I have 2 nodes with 10 OSDs each, and each OSD is a 3TB SATA drive. Each
node has 24 cores and 64GB of RAM. Ceph nodes are connected via a
10GbE NIC. No FUSE used, but I tried that too. Same results.

$ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct

10+0 records in

10+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.77219 s, 182 MB/s

182 MB/s is the best speed I get so far; usually it is around 170 MB/s.
I get much higher speeds on different filesystems, even
with GlusterFS. Is there anything I can do or try?

Read speed is also around 180-220MB/s but not higher.

This is what I am using in ceph.conf:

[global]

fsid = d7163667-f8c5-466b-88df-8747b26c91df

mon_initial_members = server1

mon_host = 192.168.0.1

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd mount options = rw,noexec,nodev,noatime,nodiratime,nobarrier

osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier


osd_mkfs_type = xfs

osd pool default size = 2

enable experimental unrecoverable data corrupting features =
bluestore rocksdb

bluestore fsck on mount = true

rbd readahead disable after bytes = 0

rbd readahead max bytes = 4194304

log to syslog = false

debug_lockdep = 0/0

debug_context = 0/0

debug_crush = 0/0

debug_buffer = 0/0

debug_timer = 0/0

debug_filer = 0/0

debug_objecter = 0/0

debug_rados = 0/0

debug_rbd = 0/0

debug_journaler = 0/0

debug_objectcatcher = 0/0

debug_client = 0/0

debug_osd = 0/0

debug_optracker = 0/0

debug_objclass = 0/0

debug_filestore = 0/0

debug_journal = 0/0

debug_ms = 0/0

debug_monc = 0/0

debug_tp = 0/0

debug_auth = 0/0

debug_finisher = 0/0

debug_heartbeatmap = 0/0

debug_perfcounter = 0/0

debug_asok = 0/0

debug_throttle = 0/0

debug_mon = 0/0

debug_paxos = 0/0

debug_rgw = 0/0

[osd]

osd max write size = 512

osd client message size cap = 2147483648

osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier


filestore xattr use omap = true

osd_op_threads = 8

osd disk threads = 4

osd map cache size = 1024

filestore_queue_max_ops = 25000

filestore_queue_max_bytes = 10485760

filestore_queue_committing_max_ops = 5000

filestore_queue_committing_max_bytes = 1048576

journal_max_write_entries = 1000

journal_queue_max_ops = 3000

journal_max_write_bytes = 1048576000

journal_queue_max_bytes = 1048576000

filestore_max_sync_interval = 15

filestore_merge_threshold = 20

filestore_split_multiple = 2

osd_enable_op_tracker = false

filestore_wbthrottle_enable = false

osd_client_message_size_cap = 0

osd_client_message_cap = 0

filestore_fd_cache_size = 64

filestore_fd_cache_shards = 32

filestore_op_threads = 12

As I stated above, it doesn't matter whether I have this [osd] section
or not. The results are the same.

I am open to all suggestions.

Thanks,

Gencer.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer

Hi Patrick.

Thank you for the prompt response.

I included the ceph.conf file in my mail, but I think you missed it.

These are the configs I tuned (I also disabled the debug logs in the global
section). Correct me if I have understood you wrongly on this.


By the way, before I give you the config I want to answer on the sync I/O
point: yes, if I remove oflag then it goes up to 1.1 GB/s. Very fast indeed.


But let's try another test. Let's say I have a file on my server which is 5GB.
If I do this:


$ rsync ./bigfile /mnt/cephfs/targetfile --progress

then I see max. 200 MB/s. I think it is still slow :/ Is this expected?


Am I doing something wrong here?

Anyway, here are the configs for the OSDs that I tried to tune.

[osd]

osd max write size = 512

osd client message size cap = 2147483648

osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier

filestore xattr use omap = true

osd_op_threads = 8

osd disk threads = 4

osd map cache size = 1024

filestore_queue_max_ops = 25000

filestore_queue_max_bytes = 10485760

filestore_queue_committing_max_ops = 5000

filestore_queue_committing_max_bytes = 1048576

journal_max_write_entries = 1000

journal_queue_max_ops = 3000

journal_max_write_bytes = 1048576000

journal_queue_max_bytes = 1048576000

filestore_max_sync_interval = 15

filestore_merge_threshold = 20

filestore_split_multiple = 2

osd_enable_op_tracker = false

filestore_wbthrottle_enable = false

osd_client_message_size_cap = 0

osd_client_message_cap = 0

filestore_fd_cache_size = 64

filestore_fd_cache_shards = 32

filestore_op_threads = 12






On 2017-07-17 22:41, Patrick Donnelly wrote:

Hi Gencer,

On Mon, Jul 17, 2017 at 12:31 PM,  <gen...@gencgiyen.com> wrote:
> I located and applied almost every different tuning setting/config over the
> internet. I couldn't manage to improve the speed by even one byte. It is
> always the same speed whatever I do.


I believe you're frustrated but this type of information isn't really
helpful. Instead tell us which config settings you've tried tuning.

> I have 2 nodes with 10 OSDs each, and each OSD is a 3TB SATA drive. Each node
> has 24 cores and 64GB of RAM. Ceph nodes are connected via a 10GbE NIC. No
> FUSE used, but I tried that too. Same results.
>
> $ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct


This looks like your problem: don't use oflag=direct. That will cause
CephFS to do synchronous I/O at great cost to performance in order to
avoid buffering by the client.
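If the goal is still a number that includes flushing to the cluster rather than just filling the client cache, a middle ground is to let dd buffer normally but sync once at the end, for example:

$ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 conv=fdatasync

This keeps the client-side buffering described above while still timing until the data has actually been persisted.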

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Yet another performance tuning for CephFS

2017-07-17 Thread gencer
Hi,

 

I located and applied almost every different tuning setting/config over the
internet. I couldn't manage to improve the speed by even one byte. It is
always the same speed whatever I do.

 

I was on Jewel; now I tried BlueStore on Luminous. I still get the exact same
speed from CephFS.

 

It doesn't matter whether I disable the debug logs, or remove the [osd] section
as below and re-add it (see .conf). The results are exactly the same; not a
single byte is gained from those tunings. I also did kernel tuning
(sysctl.conf).

 

Basics:

 

I have 2 nodes with 10 OSDs each, and each OSD is a 3TB SATA drive. Each node
has 24 cores and 64GB of RAM. Ceph nodes are connected via a 10GbE NIC. No
FUSE used, but I tried that too. Same results.

 

$ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct

10+0 records in

10+0 records out

1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.77219 s, 182 MB/s

 

182 MB/s is the best speed I get so far; usually it is around 170 MB/s. I get
much higher speeds on different filesystems, even with GlusterFS.
Is there anything I can do or try?

 

Read speed is also around 180-220MB/s but not higher.

 

This is what I am using in ceph.conf:

 

[global]

fsid = d7163667-f8c5-466b-88df-8747b26c91df

mon_initial_members = server1

mon_host = 192.168.0.1

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

 

osd mount options = rw,noexec,nodev,noatime,nodiratime,nobarrier

osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier

osd_mkfs_type = xfs

 

osd pool default size = 2

enable experimental unrecoverable data corrupting features = bluestore
rocksdb

bluestore fsck on mount = true

rbd readahead disable after bytes = 0

rbd readahead max bytes = 4194304

 

log to syslog = false

debug_lockdep = 0/0

debug_context = 0/0

debug_crush = 0/0

debug_buffer = 0/0

debug_timer = 0/0

debug_filer = 0/0

debug_objecter = 0/0

debug_rados = 0/0

debug_rbd = 0/0

debug_journaler = 0/0

debug_objectcatcher = 0/0

debug_client = 0/0

debug_osd = 0/0

debug_optracker = 0/0

debug_objclass = 0/0

debug_filestore = 0/0

debug_journal = 0/0

debug_ms = 0/0

debug_monc = 0/0

debug_tp = 0/0

debug_auth = 0/0

debug_finisher = 0/0

debug_heartbeatmap = 0/0

debug_perfcounter = 0/0

debug_asok = 0/0

debug_throttle = 0/0

debug_mon = 0/0

debug_paxos = 0/0

debug_rgw = 0/0

 

 

[osd]

osd max write size = 512

osd client message size cap = 2147483648

osd mount options xfs = rw,noexec,nodev,noatime,nodiratime,nobarrier

filestore xattr use omap = true

osd_op_threads = 8

osd disk threads = 4

osd map cache size = 1024

filestore_queue_max_ops = 25000

filestore_queue_max_bytes = 10485760

filestore_queue_committing_max_ops = 5000

filestore_queue_committing_max_bytes = 1048576

journal_max_write_entries = 1000

journal_queue_max_ops = 3000

journal_max_write_bytes = 1048576000

journal_queue_max_bytes = 1048576000

filestore_max_sync_interval = 15

filestore_merge_threshold = 20

filestore_split_multiple = 2

osd_enable_op_tracker = false

filestore_wbthrottle_enable = false

osd_client_message_size_cap = 0

osd_client_message_cap = 0

filestore_fd_cache_size = 64

filestore_fd_cache_shards = 32

filestore_op_threads = 12

 

 

As I stated above, it doesn't matter whether I have this [osd] section or not.
The results are the same.

 

I am open to all suggestions.

 

Thanks,

Gencer.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Update!

Yes, that was the problem. I zapped (purged) the disks and re-created them
according to the official documentation. Now everything is OK.
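For reference, the whole-disk procedure comes down to roughly this with the ceph-deploy releases of that time (host and device names as in the earlier mails; zap destroys everything on the disk):

$ ceph-deploy disk zap sr-09-01-18:/dev/sdb
$ ceph-deploy osd create sr-09-01-18:/dev/sdb

Note that the device is given without a partition number, so ceph-deploy partitions it itself.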

I can see all disk and total sizes properly.

Let's see if this brings any performance improvements compared to the previous
standard schema (using Jewel).

Thanks!,
Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> Op 17 juli 2017 om 17:03 schreef gen...@gencgiyen.com:
> 
> 
> I used this methods:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 
>  (one from 09th server one from 10th server..)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1'  at 
the end.

> This is my second creation for ceph cluster. At first I used bluestore. This 
> time i did not use bluestore (also removed from conf file). Still seen as 
> 200GB.
> 
> How can I make sure BlueStore is disabled (even if i not put any command).
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case 
you invoked the command with the wrong parameters.
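To check which backend an OSD actually ended up with, its metadata can be queried, for example for osd.0 (the field name below is what recent releases report):

$ ceph osd metadata 0 | grep osd_objectstore

It should say either "filestore" or "bluestore" for that OSD.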

Wido

> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > Op 17 juli 2017 om 16:41 schreef gen...@gencgiyen.com:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First let me gave you df -h:
> > 
> > /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here is my results from ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> > SIZE AVAIL RAW USED %RAW USED
> > 200G  179G   21381M 10.44
> > POOLS:
> > NAMEID USED %USED MAX AVAIL OBJECTS
> > rbd 0 0 086579M   0
> > cephfs_data 1 0 086579M   0
> > cephfs_metadata 2  2488 086579M  21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what 
> Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check on how you created the 
> OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USEAVAIL %USE  VAR  PGS
> >  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> > 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> > 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> > 19 0.00980  1.0 10240M  106

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
When I use /dev/sdb or /dev/sdc (the whole disk) I get errors like this:

ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device 
/dev/sdb: Line is truncated:
  RuntimeError: command returned non-zero exit status: 1
  RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate 
--mark-init systemd --mount /dev/sdb

Are you sure that we need to remove the "1" at the end?

Can you point me to any documentation for this? Ceph's own documentation also
shows sdb1, sdc1, and so on.

If you have any sample, I will be very happy :)

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 6:17 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> Op 17 juli 2017 om 17:03 schreef gen...@gencgiyen.com:
> 
> 
> I used this methods:
> 
> $ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 
>  (one from 09th server one from 10th server..)
> 
> and then;
> 
> $ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...
> 

You should use a whole disk, not a partition. So /dev/sdb without the '1'  at 
the end.

> This is my second creation for ceph cluster. At first I used bluestore. This 
> time i did not use bluestore (also removed from conf file). Still seen as 
> 200GB.
> 
> How can I make sure BlueStore is disabled (even if i not put any command).
> 

Just use BlueStore with Luminous as all testing is welcome! But in this case 
you invoked the command with the wrong parameters.

Wido

> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 5:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > Op 17 juli 2017 om 16:41 schreef gen...@gencgiyen.com:
> > 
> > 
> > Hi Wido,
> > 
> > Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> > 
> > First let me gave you df -h:
> > 
> > /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> > /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> > /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> > /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> > /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> > /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> > /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> > /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> > /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> > /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> > 
> > 
> > Then here is my results from ceph df commands:
> > 
> > ceph df
> > 
> > GLOBAL:
> > SIZE AVAIL RAW USED %RAW USED
> > 200G  179G   21381M 10.44
> > POOLS:
> > NAMEID USED %USED MAX AVAIL OBJECTS
> > rbd 0 0 086579M   0
> > cephfs_data 1 0 086579M   0
> > cephfs_metadata 2  2488 086579M  21
> > 
> 
> Ok, that's odd. But I think these disks are using BlueStore since that's what 
> Luminous defaults to.
> 
> The partitions seem to be mixed up, so can you check on how you created the 
> OSDs? Was that with ceph-disk? If so, what additional arguments did you use?
> 
> Wido
> 
> > ceph osd df
> > ID WEIGHT  REWEIGHT SIZE   USEAVAIL %USE  VAR  PGS
> >  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
> >  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
> >  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
> >  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
> >  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> > 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> > 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> > 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> > 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> > 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
> >  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
> >  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
> >  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
> >  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
> >  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> > 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.0

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Also, one more thing: if I want to use BlueStore, how do I let it know that I
have more space? Do I need to specify a size at any point?

-Gencer.

-Original Message-
From: gen...@gencgiyen.com [mailto:gen...@gencgiyen.com] 
Sent: Monday, July 17, 2017 6:04 PM
To: 'Wido den Hollander' <w...@42on.com>; 'ceph-users@lists.ceph.com' 
<ceph-users@lists.ceph.com>
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong

I used these methods:

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1  (one 
from 09th server one from 10th server..)

and then;

$ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...

This is my second creation of the Ceph cluster. At first I used BlueStore; this
time I did not use BlueStore (I also removed it from the conf file). It is still
seen as 200GB.

How can I make sure BlueStore is disabled (even though I did not put any such
option in the config)?

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 5:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> Op 17 juli 2017 om 16:41 schreef gen...@gencgiyen.com:
> 
> 
> Hi Wido,
> 
> Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> 
> First let me gave you df -h:
> 
> /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> 
> 
> Then here is my results from ceph df commands:
> 
> ceph df
> 
> GLOBAL:
> SIZE AVAIL RAW USED %RAW USED
> 200G  179G   21381M 10.44
> POOLS:
> NAMEID USED %USED MAX AVAIL OBJECTS
> rbd 0 0 086579M   0
> cephfs_data 1 0 086579M   0
> cephfs_metadata 2  2488 086579M  21
> 

Ok, that's odd. But I think these disks are using BlueStore since that's what 
Luminous defaults to.

The partitions seem to be mixed up, so can you check on how you created the 
OSDs? Was that with ceph-disk? If so, what additional arguments did you use?

Wido

> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USEAVAIL %USE  VAR  PGS
>  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
>  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
>  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
>  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
>  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
>  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
>  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
>  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
>  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
>  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
>   TOTAL   200G 21381M  179G 10.44
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> 
> 
> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 4:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > Op 17 juli 2017 om 15:49 schreef gen...@gencgiyen.com:
> > 
> > 
> > Hi,
> > 
> >  
> > 
> > I successfully managed to work with ceph jewel. Want to try luminous.
> > 
> >  
> > 
> > I also set experimental bluestore while creating osds. Problem is, I 
> > have 20x3TB hdd in two nodes and i would expect 55TB usable (as on
> > jewel) on luminous but i see 20

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
I used these methods:

$ ceph-deploy osd prepare sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1  (one 
from 09th server one from 10th server..)

and then;

$ ceph-deploy osd activate sr-09-01-18:/dev/sdb1 sr-10-01-18:/dev/sdb1 ...

This is my second creation of the Ceph cluster. At first I used BlueStore; this
time I did not use BlueStore (I also removed it from the conf file). It is still
seen as 200GB.

How can I make sure BlueStore is disabled (even though I did not put any such
option in the config)?

-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 5:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: RE: [ceph-users] Ceph (Luminous) shows total_space wrong


> Op 17 juli 2017 om 16:41 schreef gen...@gencgiyen.com:
> 
> 
> Hi Wido,
> 
> Each disk is 3TB SATA (2.8TB seen) but what I got is this:
> 
> First let me gave you df -h:
> 
> /dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
> /dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
> /dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
> /dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
> /dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
> /dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
> /dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
> /dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
> /dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
> /dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18
> 
> 
> Then here is my results from ceph df commands:
> 
> ceph df
> 
> GLOBAL:
> SIZE AVAIL RAW USED %RAW USED
> 200G  179G   21381M 10.44
> POOLS:
> NAMEID USED %USED MAX AVAIL OBJECTS
> rbd 0 0 086579M   0
> cephfs_data 1 0 086579M   0
> cephfs_metadata 2  2488 086579M  21
> 

Ok, that's odd. But I think these disks are using BlueStore since that's what 
Luminous defaults to.

The partitions seem to be mixed up, so can you check on how you created the 
OSDs? Was that with ceph-disk? If so, what additional arguments did you use?

Wido

> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USEAVAIL %USE  VAR  PGS
>  0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
>  2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
>  4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
>  6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
>  8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
> 10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
> 12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
> 14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
> 18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
>  1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
>  3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
>  5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
>  7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
>  9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
> 11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
> 13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
> 15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
> 17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
> 19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
>   TOTAL   200G 21381M  179G 10.44
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00
> 
> 
> -Gencer.
> 
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: Monday, July 17, 2017 4:57 PM
> To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
> Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong
> 
> 
> > Op 17 juli 2017 om 15:49 schreef gen...@gencgiyen.com:
> > 
> > 
> > Hi,
> > 
> >  
> > 
> > I successfully managed to work with ceph jewel. Want to try luminous.
> > 
> >  
> > 
> > I also set experimental bluestore while creating osds. Problem is, I 
> > have 20x3TB hdd in two nodes and i would expect 55TB usable (as on
> > jewel) on luminous but i see 200GB. Ceph thinks I have only 200GB 
> > space available in total. I see all osds are up and in.
> > 
> >  
> > 
> > 20 osd up; 20 osd in. 0 down.
> > 
> >  
> > 
> > Ceph -s shows HEALTH_OK. I have only one monitor and one mds. 
> > (1/1/1) and it is up:active.
> > 
> >  
> > 
> > ceph osd tree gave me all OSDs in nodes are up and results are 
> > 1.

Re: [ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi Wido,

Each disk is 3TB SATA (2.8TB as seen by the OS), but here is what I got.

First, let me give you df -h:

/dev/sdb1   2.8T  754M  2.8T   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-2
/dev/sdd1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-4
/dev/sde1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-6
/dev/sdf1   2.8T  753M  2.8T   1% /var/lib/ceph/osd/ceph-8
/dev/sdg1   2.8T  752M  2.8T   1% /var/lib/ceph/osd/ceph-10
/dev/sdh1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-12
/dev/sdi1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-14
/dev/sdj1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-16
/dev/sdk1   2.8T  751M  2.8T   1% /var/lib/ceph/osd/ceph-18


Then here is my results from ceph df commands:

ceph df

GLOBAL:
SIZE AVAIL RAW USED %RAW USED
200G  179G   21381M 10.44
POOLS:
NAMEID USED %USED MAX AVAIL OBJECTS
rbd 0 0 086579M   0
cephfs_data 1 0 086579M   0
cephfs_metadata 2  2488 086579M  21

ceph osd df
ID WEIGHT  REWEIGHT SIZE   USEAVAIL %USE  VAR  PGS
 0 0.00980  1.0 10240M  1070M 9170M 10.45 1.00 173
 2 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 150
 4 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 148
 6 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 167
 8 0.00980  1.0 10240M  1069M 9171M 10.44 1.00 166
10 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 171
12 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 160
14 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
16 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 182
18 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 168
 1 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 167
 3 0.00980  1.0 10240M  1069M 9170M 10.45 1.00 156
 5 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 152
 7 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 158
 9 0.00980  1.0 10240M  1069M 9170M 10.44 1.00 174
11 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 153
13 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 179
15 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 186
17 0.00980  1.0 10240M  1068M 9171M 10.44 1.00 185
19 0.00980  1.0 10240M  1067M 9172M 10.43 1.00 154
  TOTAL   200G 21381M  179G 10.44
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.00


-Gencer.

-Original Message-
From: Wido den Hollander [mailto:w...@42on.com] 
Sent: Monday, July 17, 2017 4:57 PM
To: ceph-users@lists.ceph.com; gen...@gencgiyen.com
Subject: Re: [ceph-users] Ceph (Luminous) shows total_space wrong


> Op 17 juli 2017 om 15:49 schreef gen...@gencgiyen.com:
> 
> 
> Hi,
> 
>  
> 
> I successfully managed to work with ceph jewel. Want to try luminous.
> 
>  
> 
> I also set experimental bluestore while creating osds. Problem is, I 
> have 20x3TB hdd in two nodes and i would expect 55TB usable (as on 
> jewel) on luminous but i see 200GB. Ceph thinks I have only 200GB 
> space available in total. I see all osds are up and in.
> 
>  
> 
> 20 osd up; 20 osd in. 0 down.
> 
>  
> 
> Ceph -s shows HEALTH_OK. I have only one monitor and one mds. (1/1/1) 
> and it is up:active.
> 
>  
> 
> ceph osd tree gave me all OSDs in nodes are up and results are 
> 1.... I checked via df -h but all disks ahows 2.7TB. Basically something 
> is wrong.
> Same settings and followed schema on jewel is successful except luminous.
> 

What do these commands show:

- ceph df
- ceph osd df

Might be that you are looking at the wrong numbers.

Wido

>  
> 
> What might it be?
> 
>  
> 
> What do you need to know to solve this problem? Why ceph thinks I have 
> 200GB space only?
> 
>  
> 
> Thanks,
> 
> Gencer.
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph (Luminous) shows total_space wrong

2017-07-17 Thread gencer
Hi,

 

I successfully managed to work with ceph jewel. Want to try luminous.

 

I also set experimental BlueStore while creating the OSDs. The problem is, I
have 20x 3TB HDDs across two nodes and I would expect ~55TB usable (as on
Jewel) on Luminous, but I see 200GB. Ceph thinks I have only 200GB of space
available in total. I see all OSDs are up and in.

 

20 osd up; 20 osd in. 0 down.

 

ceph -s shows HEALTH_OK. I have only one monitor and one MDS (1/1/1), and it
is up:active.

 

ceph osd tree shows that all OSDs in both nodes are up and the values are all 1... I
checked via df -h, but all disks show 2.7TB, so basically something is wrong.
The same settings and the schema I followed were successful on Jewel; only Luminous shows this problem.

 

What might it be?

 

What do you need to know to help solve this problem? Why does Ceph think I have
only 200GB of space?

 

Thanks,

Gencer.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com