You need to resize the filesystem within the RBD block device.
On Wed, Jan 3, 2018 at 7:37 AM, 13605702...@163.com <13605702...@163.com> wrote:
> hi
>
> an RBD image is out of space (old size is 1GB), so I resized it to 10GB
>
> # rbd info rbd/test
> rbd image 'test':
> size 10240 MB in 2560 objects
On 03. jan. 2018 14:51, James Poole wrote:
Hi all,
Whilst on a training course recently I was told that 'min_size' had an
effect on client write performance, in that it's the required number of
copies before ceph reports back to the client that an object has been
written; therefore setting a
Hello,
I am trying to set up a Ceph cluster on Ubuntu 16.04. I’ve set up 1 monitor/osd host (hostname
mon01) and 2 osd hosts (osd01 and osd02). At one stage, I issued
ceph-deploy osd create mon01:sdb1 osd01:sdb1 osd02:sdb1
and it ran successfully. But when I issued the command below from the admin host:
ssh
hi Jason
so the data won't be lost if I resize the filesystem in the image?
thanks
13605702...@163.com
From: Jason Dillaman
Date: 2018-01-03 20:57
To: 13605702...@163.com
CC: ceph-users
Subject: Re: [ceph-users] question on rbd resize
You need to resize the filesystem within the RBD block
No, most filesystems can be expanded pretty trivially (shrinking is a more
complex operation but usually also doable). Assuming the likely case of an
ext2/3/4 filesystem, the command "resize2fs /dev/rbd0" should resize the FS to
cover the available space in the block device.
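For reference, a minimal sketch of the whole sequence, assuming the image from this thread is mapped as /dev/rbd0 and carries an ext4 filesystem:

# grow the image (size is in MB here), then grow the filesystem to match
rbd resize rbd/test --size 10240
rbd showmapped              # confirm the image really is /dev/rbd0
resize2fs /dev/rbd0         # ext2/3/4; for XFS use xfs_growfs on the mountpoint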
Rich
On 03/01/18
Can you provide more detail regarding the infrastructure backing this
environment? What hard drive, ssd, and processor are you using? Also, what
is providing networking?
I'm seeing 4k blocksize tests here. Latency is going to destroy you.
On Jan 3, 2018 8:11 AM, "Steven Vacaroaia"
Latest version. Oh yes, I omitted the manager daemon setup. Let me check..
Thank you..
From: Sergey Malinin [mailto:h...@newmail.com]
Sent: Wednesday, January 3, 2018 5:56 PM
To: Hüseyin Atatür YILDIRIM ;
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] "ceph -s"
On 3-1-2018 00:44, Dan Mick wrote:
> On 01/02/2018 08:54 AM, John Spray wrote:
>> On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski wrote:
>>> Hi lists,
>>> Currently the ceph status output formats all numbers with binary unit
>>> prefixes, i.e. 1MB equals 1048576 bytes and an
What version are you using? Luminous needs mgr daemons running.
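If the installed ceph-deploy has no working mgr subcommand, a mgr daemon can also be brought up by hand. A rough sketch, reusing the hostname mon01 from this thread and assuming the default 'ceph' cluster name:

mkdir -p /var/lib/ceph/mgr/ceph-mon01
ceph auth get-or-create mgr.mon01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-mon01/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/ceph-mon01
systemctl enable --now ceph-mgr@mon01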
From: ceph-users on behalf of Hüseyin
Atatür YILDIRIM
Sent: Wednesday, January 3, 2018 5:15:30 PM
To: ceph-users@lists.ceph.com
Quoting Sage Weil (s...@newdream.net):
> Hi Stefan, Mehmet,
>
> Are these clusters that were upgraded from prior versions, or fresh
> luminous installs?
Fresh luminous install... The cluster was installed with
12.2.0, and later upgraded to 12.2.1 and 12.2.2.
> This message indicates that there
Is there a disadvantage to just always starting pg_num and pgp_num with
something low like 8, and then increasing them later when necessary?
The question is then how to identify when that is necessary.
-----Original Message-----
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: dinsdag 2
In some common cases (when you have lots of objects per pg) ceph will warn
about it.
2018-01-03 11:10 GMT+01:00 Marc Roos :
>
>
> Is there a disadvantage to just always starting pg_num and pgp_num with
> something low like 8, and then increasing them later when necessary?
>
>
hi
I'm using s3cmd (version 1.6.1) to put objects to a ceph cluster (jewel
10.2.10),
and when I put a file with the same name, the older one is overwritten.
I know that rgw supports bucket object versioning. But how can I enable it
when using s3cmd?
thanks
13605702...@163.com
Nobody explains why, so I will tell you from direct experience: the cache tier
has a block size of several megabytes. So if you ask for one byte that is
not in the cache, some megabytes are read from disk and, if the cache is full,
some other megabytes are written from cache to the EC pool.
Il giorno gio 28
Last summer we increased an EC 8+3 pool from 1024 to 2048 PGs on our ~1500 OSD
(Kraken) cluster. This pool contained ~2 petabytes of data at the time.
We did a fair amount of testing on a throwaway pool on the same cluster
beforehand, starting with small increases (16/32/64).
The main
Hi!
I use the aws CLI tool, like this:
aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-versioning
--bucket yourbucket --versioning-configuration Status=Enabled
I also set a lifecycle configuration to expire older versions, e.g.:
aws --endpoint-url=http://your-rgw:7480 s3api
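A sketch of such a lifecycle rule; the bucket name, empty prefix, and 30-day window are only placeholders:

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Prefix": "",
      "Status": "Enabled",
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
EOF
aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-lifecycle-configuration \
    --bucket yourbucket --lifecycle-configuration file://lifecycle.json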
To make device ownership persist over reboots, you can set up udev rules.
The article you referenced seems to have nothing to do with bluestore. When you
zapped /dev/sda, you zapped the bluestore metadata stored on the db partition, so
the newly created partitions, if they were created apart from
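As an illustration (the rule file name and the partition names are assumptions; substitute the actual DB/WAL partitions), a udev rule like this hands the partitions to the ceph user on every boot:

# /etc/udev/rules.d/99-ceph-perms.rules
KERNEL=="sda1", OWNER="ceph", GROUP="ceph", MODE="0660"
KERNEL=="sda2", OWNER="ceph", GROUP="ceph", MODE="0660"

followed by udevadm control --reload-rules && udevadm trigger (or a reboot) to apply it.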
Seen this issue when I first created our Luminous cluster. I use a custom
systemd service to chown the DB and WAL partitions before ceph osd services get
started. The script in /usr/local/sbin just does the chowning.
ceph-nvme.service:
# This is a workaround to chown the rocksdb and wal
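A minimal sketch of that kind of workaround; the unit name, script path, and device names here are assumptions, not the poster's actual files:

# /etc/systemd/system/ceph-nvme.service
[Unit]
Description=chown rocksdb and WAL partitions before the OSDs start
Before=ceph-osd.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/chown-ceph-db-wal.sh

[Install]
WantedBy=multi-user.target

# /usr/local/sbin/chown-ceph-db-wal.sh
#!/bin/sh
chown ceph:ceph /dev/sda1 /dev/sda2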
A while back there was a thread on the ML where someone posted a bash
script to slowly increase the number of PGs in steps of 256 AFAIR; the
script would monitor the cluster activity and once all data shuffling
had finished it would do another round until the target is hit.
That was on filestore
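A crude sketch of that kind of stepped increase (pool name, target pg_num, and step size are placeholders, not the original script):

pool=mypool
target=4096          # desired final pg_num
step=256
cur=$(ceph osd pool get $pool pg_num -f json | jq .pg_num)
while [ "$cur" -lt "$target" ]; do
    next=$((cur + step)); [ "$next" -gt "$target" ] && next=$target
    ceph osd pool set $pool pg_num $next
    ceph osd pool set $pool pgp_num $next
    # wait until no PGs are peering/backfilling/recovering before the next step
    while ceph pg stat | grep -qE 'peering|backfill|recover'; do sleep 60; done
    cur=$next
done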
They were not.
After I changed it manually I was still unable to start the service.
Furthermore, a reboot screwed up the permissions again:
ls -al /dev/sda*
brw-rw---- 1 root disk 8, 0 Jan 3 11:10 /dev/sda
brw-rw---- 1 root disk 8, 1 Jan 3 11:10 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jan 3 11:10 /dev/sda2
I would like to track down what objects are affected by an incomplete pg
and in the case of cephfs map those objects to file paths.
At the moment, the best I've come up with for mapping objects to a pg is
very very slow:
pool="pool"
incomplete="1.cb7"
for object in `rados -p ${pool} ls`; do
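Presumably the loop checks each object's placement with 'ceph osd map'; a sketch of that idea (not the poster's exact script):

pool="pool"
incomplete="1.cb7"
for object in $(rados -p ${pool} ls); do
    # ceph osd map prints the PG an object maps to, e.g. "... -> pg 1.xxxxxxxx (1.cb7) -> ..."
    if ceph osd map "${pool}" "${object}" | grep -Fq "(${incomplete})"; then
        echo "${object}"
    fi
done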
min_size will also block reads. Just to add a +1 to what has been said, a
write operation will always wait to ack until all osds for a PG have acked
the write. min_size has absolutely no effect on this. min_size is
calculated BEFORE the write or read is handled by any osds. If there is
not the
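For completeness, size and min_size are per-pool settings and can be inspected or changed like this (pool name is a placeholder):

ceph osd pool get mypool size
ceph osd pool get mypool min_size
ceph osd pool set mypool min_size 2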
Hi,
After a reboot, all the partitions created on the SSD drive disappeared.
They were used by bluestore DB and WAL, so the OSDs are down.
The following error message are in /var/log/messages
Jan 3 09:54:12 osd01 ceph-osd: 2018-01-03 09:54:12.992218 7f4b52b9ed00 -1
Hello all,
I have a ceph Luminous setup with filestore and bluestore OSDs. This cluster
was deployed initially as Hammer, then I upgraded it to Jewel and
eventually to Luminous. It’s heterogeneous; we have SSDs, SAS 15K and 7.2K
HDDs in it (see crush map attached). Earlier I converted 7.2K HDDs from
It happens randomly.
Karun Josy
On Wed, Jan 3, 2018 at 7:07 AM, Jason Dillaman wrote:
> I tried to reproduce this for over an hour today using the specified
> versions w/o any success. Is this something that you can repeat
> on-demand or was this a one-time occurrence?
>
>
In filestore (XFS), you'd find files representing objects using traditional
bash commands like find. What tools do I have at my disposal for recovering
data in bluestore?
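To illustrate the filestore case mentioned above: there, objects are plain files under each PG's directory, so something like this works (OSD id and PG are examples):

# list the object files of PG 1.cb7 on a filestore OSD
find /var/lib/ceph/osd/ceph-0/current/1.cb7_head -type f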
Thanks for your willingness to help
DELL R620, 1 CPU, 8 cores, 64 GB RAM
cluster network is using 2 bonded 10 GbE NICs (mode=4), MTU=9000
SSD drives are Enterprise grade - 400 GB SSD Toshiba PX04SHB040
HDD drives are - 10k RPM, 600 GB Toshiba AL13SEB600
Steven
On 3 January 2018 at 09:41,
Are actual devices (not only udev links) owned by user “ceph”?
From: ceph-users on behalf of Steven
Vacaroaia
Sent: Wednesday, January 3, 2018 6:19:45 PM
To: ceph-users
Subject: [ceph-users] ceph luminous -
Well, there is a setting for the minimum number of pgs per OSD (mon pg
warn min per osd, see
http://docs.ceph.com/docs/master/rados/configuration/pool-pg-config-ref/)
and there will be a HEALTH_WARN state if you have too few. As far as I
know having not enough PGs can cause trouble for CRUSH
I had the same problem before (mine is CentOS), and when I created
/iscsi/create iqn_bla-bla
it said
Local LIO instance already has LIO configured with a target - unable to
continue
Then finally the solution turned out to be to turn off the target service:
systemctl stop target
systemctl disable target
That script was mine and we were creating the PGs in chunks of 256 at a
time with nobackfill and norecover set until we added 4k PGs. We used the
script because the amount of peering caused by adding thousands of PGs
at a time was causing problems for client io. We did that 4 times
(backfilling
Just want to point out as well that the first thing I did when noticing this
bug was to add the `ceph` user to the group `disk` thus giving it write
permission to the devices. However this did not actually work (haven't checked
in 12.2.2 yet), and I suspect that something in the ceph code was
Hello Sergey,
I issued the mgr create command and it failed with:
ceph-deploy mgr create mon01
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
[--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
COMMAND ...
ceph-deploy: error:
hi
an RBD image is out of space (old size is 1GB), so I resized it to 10GB
# rbd info rbd/test
rbd image 'test':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1169238e1f29
format: 2
features: layering
flags:
and then I remap and remount the image on the
Hi all,
Whilst on a training course recently I was told that 'min_size' had an
effect on client write performance, in that it's the required number of
copies before ceph reports back to the client that an object has been
written; therefore setting a 'min_size' of 0 would only require a write
to be
Hi,
I am doing a PoC with 3 DELL R620s and 12 OSDs, 3 SSD drives (one on each
server), bluestore.
I configured the OSDs using the following (/dev/sda is my SSD drive):
ceph-disk prepare --zap-disk --cluster ceph --bluestore /dev/sde
--block.wal /dev/sda --block.db /dev/sda
Unfortunately both fio
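For context, a 4k random-write fio run against an RBD image through fio's rbd engine looks roughly like this (pool/image names and runtime are placeholders):

fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=test --rw=randwrite --bs=4k --iodepth=32 \
    --runtime=60 --time_based --numjobs=1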
On 12/25/2017 03:13 PM, Joshua Chen wrote:
> Hello folks,
> I am trying to share my ceph rbd images through the iscsi protocol.
>
> I am trying iscsi-gateway
> http://docs.ceph.com/docs/master/rbd/iscsi-overview/
>
>
> now
>
> systemctl start rbd-target-api
> is working and I could run gwcli
>
Hi Steven.
Interesting... I'm quite curious after your post now.
I've migrated our prod. CEPH cluster to 12.2.2 and Bluestore just today and
haven't heard back anything "bad" from the applications/users so far.
Performance tests on our test cluster were good before, but we use S3/RGW only