I have created a swift user, and can mount the object store with
cloudfuse, and can create files in the default pool .rgw.root.
How can I have my test user go to a different pool and not use the
default .rgw.root?
Thanks,
Marc
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
I am looking a bit at ceph on a single node. Does anyone have experience
with cloudfuse?
Do I need to use the rados-gw? Does it even work with ceph?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -.
F1 Outsourcing Development Sp. z o.o.
Poland
t: +48 (0)124466845
f:
If I run a test on a 3-node cluster (1 osd per node, 2x GbE bonded in
mode 4) against a pool with size 1, I see that the first node is sending
streams to the 2nd and 3rd node using only one of the bonded adapters.
This is typical for a 'single line of communication' using lacp. Afaik
the streams to the
I have a 3 node test cluster with one osd per node, and I write a file
to a pool with size 1. Why doesn’t ceph just use the full 110MB/s of
the network (as with the default rados bench test)? Does ceph 'reserve'
bandwidth for other concurrent connections? Can this be tuned?
Putting from ram
I guess it is correct to assume that if you have 11 osd's you have
around 11x11=121 established connections in your netstat -tanp?
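A back-of-the-envelope sketch (assuming a full mesh where each osd keeps a connection to every other osd; heartbeat and client sockets come on top of this, so netstat will show more):

```shell
# directed osd-to-osd links in a full mesh of n daemons
n=11
echo $((n * (n - 1)))   # 110: each osd peers with the other 10
```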
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Is there a doc that describes all the parameters that are published by
collectd-ceph?
Is there maybe a default grafana dashboard for influxdb? I found
something for graphite, and am modifying those.
-Original Message-
From: Patrick McGarry [mailto:pmcga...@redhat.com]
Sent:
For a test cluster we like to use some 5400rpm and 7200rpm drives. Is
it advisable to customize the configuration as described on this page?
Or is the speed difference so small that this should only be done when
adding ssd's to the same osd node?
We are going to set up a test cluster with kraken using CentOS7, and
obviously would like to stay as close as possible to using their repositories.
If we need to install the 4.1.4 kernel or later, is there a ceph
recommended repository to choose? Like for instance use the elrepo
4.9ml/4.4lt?
I would start with CentOS7, because if you get into problems you can
always buy a redhat license and get support.
> On Thu, Dec 29, 2016 at 6:20 AM, Andre Forigato
>
> wrote:
>>
>> Hello,
>>
>> I'm starting to study Ceph for implementation in our company.
>>
>>
Is it possible to rsync to the ceph object store with something like
this tool of amazon?
https://aws.amazon.com/customerapps/1771
Hi Blair,
We are also thinking of using ceph for 'backup'. At the moment we are
using rsync and hardlinks on a drbd setup. But I think when using cephfs
things could speed up, because file information comes from the mds
daemon, so this should save on one rsync file lookup, and we expect
I have an error with a placement group, and seem to only find these
solutions based on a filesystem osd:
http://ceph.com/geen-categorie/ceph-manually-repair-object/
Does anybody have a link to how I can do this with a bluestore osd?
/var/log/ceph/ceph-osd.9.log:48:2017-07-31 14:21:33.929855
FYI, when creating these rgw pools, not all automatically get their
application enabled.
I created these
ceph osd pool create default.rgw
ceph osd pool create default.rgw.meta
ceph osd pool create default.rgw.control
ceph osd pool create default.rgw.log
ceph osd pool create .rgw.root
ceph osd
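A sketch of tagging the pools manually afterwards (pool names as created above; the `rgw` tag is what radosgw expects on luminous):

```shell
# enable the 'rgw' application on each radosgw pool (luminous and later)
for p in default.rgw default.rgw.meta default.rgw.control \
         default.rgw.log .rgw.root; do
    ceph osd pool application enable "$p" rgw
done
```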
I am not sure if I am the only one having this. But there is an issue
with the collectd plugin and the luminous release. I think I didn’t
have this in Kraken, looks like something changed in the JSON? I also
reported it here https://github.com/collectd/collectd/issues/2343, I
have no idea who
build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.1/rpm/el7/BUILD/ceph-12.1.1/src/rocksdb/db/db_impl.cc:343] Shutdown complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-09 11:41:25.705389 7f26db
No, but we are using Perl ;)
-Original Message-
From: Daniel Davidson [mailto:dani...@igb.illinois.edu]
Sent: Thursday 13 July 2017 16:44
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Crashes Compiling Ruby
We have a weird issue. Whenever compiling Ruby, and only Ruby, on a
Does anyone have an idea why I am seeing osd_bytes=0?
ceph daemon mon.c perf dump cluster
{
    "cluster": {
        "num_mon": 3,
        "num_mon_quorum": 3,
        "num_osd": 6,
        "num_osd_up": 6,
        "num_osd_in": 6,
        "osd_epoch": 3593,
        "osd_bytes": 0,
Is it possible to change the cephfs metadata pool? I would like to
lower the number of pg's, and thought about just making a new pool,
copying the old pool into it and then renaming them. But I guess cephfs
works with the pool id, not the name? How can this best be done?
Thanks
When are fixes for bugs like these http://tracker.ceph.com/issues/20563
available in the rpm repository
(https://download.ceph.com/rpm-luminous/el7/x86_64/)?
I sort of don’t get it from this page
http://docs.ceph.com/docs/master/releases/. Maybe something could
specifically be mentioned here about the
I just updated packages on one CentOS7 node and getting these errors:
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510
We are running on
Linux c01 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)
And we didn’t have any issues installing/upgrading, but we are not using
ceph-deploy. In fact I am surprised at how easy it is to install.
With ceph auth I have set permissions like below. I can add and delete
objects in the test pool, but cannot set the size of the test pool. What
permission do I need to add for this user to be able to modify the size
of this test pool?
mon 'allow r' mds 'allow r' osd 'allow rwx pool=test'
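A hedged sketch of widening the caps (assuming the user is client.test; setting pool size is a monitor-side command, so the read-only mon cap is the likely blocker, and mon 'allow rw' is broader than strictly necessary):

```shell
# give the user write access on the monitors, keeping the osd caps as before
ceph auth caps client.test mon 'allow rw' mds 'allow r' osd 'allow rwx pool=test'
# then this should be accepted
ceph osd pool set test size 2
```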
I need a little help with fixing some errors I am having.
After upgrading from Kraken I'm getting incorrect values reported on
placement groups etc. At first I thought it was because I was changing
the public cluster ip address range and modifying the monmap directly.
But after deleting and
I just updated packages on one CentOS7 node and am getting these errors.
Does anybody have an idea how to resolve this?
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon:
I am running 12.1.1, and updated to it on the 18th. So I guess this is
either something else or it was not in the rpms.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Friday 21 July 2017 20:21
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Ceph
Should we report these?
[840094.519612] ceph[12010]: segfault at 8 ip 7f194fc8b4c3 sp
7f19491b6030 error 4 in libceph-common.so.0[7f194f9fb000+7e9000]
CentOS Linux release 7.3.1611 (Core)
Linux 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64
I would like to work on some grafana dashboards, but since the upgrade
to the luminous rc something seems to have changed in the JSON, and (a
lot of) metrics are not stored in influxdb.
Does anyone have an idea when collectd-ceph in the epel repo will be
updated? Or is there some
Thanks! Updating all indeed resolved this.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Tuesday 18 July 2017 23:01
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Updating 12.1.0 -> 12.1.1
Yeah, some of the message formats changed (incompati
I would recommend logging into the host and running your commands from a
screen session, so they keep running.
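A minimal sketch of that workflow (the session name and the restore command are made up for illustration):

```shell
# start a detached, named screen session on the host and run the long job in it
screen -dmS restore bash -c 'rbd import backup.img rbd/restored'
# reattach later to check progress; detach again with Ctrl-a d
screen -r restore
```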
-Original Message-
From: Martin Wittwer [mailto:martin.witt...@datonus.ch]
Sent: Sunday 23 July 2017 15:20
To: ceph-us...@ceph.com
Subject: [ceph-users] Restore RBD image
I have updated a test cluster by just updating the rpms and issuing a
ceph osd require-osd-release, because it was mentioned in the status. Is
there more you need to do?
- update on all nodes the packages
sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
yum update
- then on each node
On a test cluster with 994GB used, via collectd I get an incorrect
9.3362651136e+10 (93GB) reported in influxdb, when this should be 933GB
(or actually 994GB). Cluster.osdBytes is reported correctly:
3.3005833027584e+13 (30TB)
cluster:
health: HEALTH_OK
services:
mon: 3 daemons,
:31.339235 madvise(0x7f4a02102000, 32768, MADV_DONTNEED) = 0
<0.000014>
23552 16:26:31.339331 madvise(0x7f4a01df8000, 16384, MADV_DONTNEED) = 0
<0.19>
23552 16:26:31.339372 madvise(0x7f4a01df8000, 32768, MADV_DONTNEED) = 0
<0.13>
-Original Message-
From: Brad Hubbard
_impl.cc:343] Shutdown
complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-09 11:41:25.705389 7f26db8ae100 1 bdev(0x7f26de472e00
/var/lib/ceph/osd/ceph-0/block) close
2017-08-09 11:41:25.944548 7f26db8ae100 1 bdev(0x7f26de2b3a00
/var/lib/ceph/osd/ceph-0/block) close
I have got a placement group inconsistency, and saw some manual where
you can export and import this on another osd. But I am getting an
export error on every osd.
What does this export_files error -5 actually mean? I thought 3 copies
should be enough to secure your data.
> PG_DAMAGED
:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Pg inconsistent / export_files error -5
It _should_ be enough. What happened in your cluster recently? Power
Outage, OSD failures, upgrade, added new hardware, any changes at all.
What is your Ceph version?
On Fri, Aug 4, 2017 at 11:22 AM
I tried to fix a '1 pg inconsistent' by taking osd 12 out, hoping for
the data to be copied to a different osd, and for that copy to be used
as the 'active' one.
- Would deleting the whole image in the rbd pool solve this? (or would
it fail because of this status)
- Should I have done this rather
Where can you get the nfs-ganesha-ceph rpm? Is there a repository that
has these?
FYI, 5 or even more years ago I was trying zabbix, and I noticed that
as the number of monitored hosts increased, the load on the mysql server
increased. Without being able to recall exactly what was wrong (I
think every sample they did was one insert statement), I do remember
that I got
No experience with it. But why not use linux for it? Maybe this solution
on every RGW is sufficient; I cannot imagine you need a 3rd party tool
for this.
https://unix.stackexchange.com/questions/28198/how-to-limit-network-bandwidth
https://wiki.archlinux.org/index.php/Advanced_traffic_control
Just a thought, what about marking connections with iptables and using
that mark with tc?
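A sketch of that idea (interface, port, and rate are assumptions; 7480 is the default civetweb port for radosgw on luminous):

```shell
# mark outgoing radosgw traffic in the mangle table
iptables -t mangle -A OUTPUT -p tcp --sport 7480 -j MARK --set-mark 10
# htb root qdisc; unmarked traffic falls through to the default class
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit ceil 100mbit
# classify packets carrying fwmark 10 into the shaped class
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
```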
-Original Message-
From: hrchu [mailto:petertc@gmail.com]
Sent: Thursday 4 May 2017 10:35
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Limit bandwidth on RadosGW?
Thanks
/21/refresh
(I am trying to increase the size online via kvm, virtio disk in win
2016)
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Monday 18 September 2017 22:42
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Rbd resize, refresh rescan
I've never nee
Is there something like this for scsi, to rescan the size of the rbd
device and make it available? (while it is being used)
echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
We use these :
NVDATA Product ID : SAS9207-8i
Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308
PCI-Express Fusion-MPT SAS-2 (rev 05)
Does someone by any chance know how to turn on the drive identification
lights?
-Original Message-
From: Jake Young
In my case it was syncing, and it was syncing slowly (an hour or so?).
You should see this in the log file. I wanted to report this because my
store.db is only 200MB, and I guess you want your monitors up and
running quickly.
I also noticed that when the 3rd monitor left the quorum, ceph -s
Rbd resize is detected automatically on the mapped host.
However for the changes to appear in libvirt/qemu, I have to:
virsh qemu-monitor-command vps-test2 --hmp "info block"
virsh qemu-monitor-command vps-test2 --hmp "block_resize
drive-scsi0-0-0-0 12G"
-Original Message-
Afaik ceph is not supporting/working with bonding.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
(thread: Maybe some tuning for bonded network adapters)
-Original Message-
From: Andreas Herrmann [mailto:andr...@mx20.org]
Sent: Friday 8 September 2017
Sorry to cut in your thread.
> Have you disabled the FLUSH command for the Samsung ones?
We have a test cluster currently only with spinners pool, but we have
SM863 available to create the ssd pool. Is there something specific that
needs to be done for the SM863?
-Original
Now that 12.2.0 is released, how and whom should we approach about
applying patches for collectd?
Aug 30 10:40:42 c01 collectd: ceph plugin: JSON handler failed with
status -1.
Aug 30 10:40:42 c01 collectd: ceph plugin:
cconn_handle_event(name=osd.8,i=4,st=4): error 1
Aug 30 10:40:42 c01
I have some osds with these permissions, and some without mgr caps. What
are the correct ones to have for luminous?
osd.0
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.14
caps: [mon] allow profile osd
caps: [osd] allow *
, allow rw path=/nfs
caps: [mon] allow r
caps: [osd] allow rwx pool=fs_meta,allow rwx pool=fs_data
-Original Message-
From: Marc Roos
Sent: Tuesday 29 August 2017 23:48
To: ceph-users
Subject: [ceph-users] Centos7, luminous, cephfs, .snaps
Where can I find some examples on creating
What would be the best way to get an overview of all client connections?
Something similar to the output of rbd lock list.
cluster:
1 clients failing to respond to capability release
1 MDSs report slow requests
ceph daemon mds.a dump_ops_in_flight
{
"ops": [
al Message-
From: Jean-Charles Lopez [mailto:jelo...@redhat.com]
Sent: Wednesday 13 September 2017 1:06
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rgw install manual install luminous
Hi,
see comment in line
Regards
JC
> On Sep 12, 2017, at 13:31, Marc Roos <m.r...@f
Am I the only one having these JSON issues with collectd, or did I do
something wrong in the configuration/upgrade?
Sep 13 15:44:15 c01 collectd: ceph plugin: ds
Bluestore.kvFlushLat.avgtime was not properly initialized.
Sep 13 15:44:15 c01 collectd: ceph plugin: JSON handler failed with
status -1.
I have been trying to set up the rados gateway (without ceph-deploy),
but I am missing some commands to enable the service, I guess? How do I
populate /var/lib/ceph/radosgw/ceph-gw1? I didn’t see any command for
this like there is for ceph-mon.
service ceph-radosgw@gw1 start
Gives:
2017-09-12 22:26:06.390523
files at the end.
Ps. Is there some index of these slides? I constantly have problems
browsing back to a specific one.
-Original Message-
From: Danny Al-Gaaf [mailto:danny.al-g...@bisect.de]
Sent: Monday 25 September 2017 9:37
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] librmb
Maybe this will get you started with the permissions for only this fs
path /smb
sudo ceph auth get-or-create client.cephfs.smb mon 'allow r' mds 'allow
r, allow rw path=/smb' osd 'allow rwx pool=fs_meta,allow rwx
pool=fs_data'
-Original Message-
From: Yoann Moulin
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep
Is this useful for someone?
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017]
I have nfs-ganesha 2.5.2 (from the ceph download site) running on a
luminous 12.2.1 osd node. And when I rsync on a vm that has the nfs
mounted, I get stalls.
I thought it was related to the number of files when rsyncing the
centos7 distro. But when I tried to rsync just one file, it also stalled.
From the looks of it, too bad the efforts could not be
combined/coordinated; that seems to be an issue with many open source
initiatives.
-Original Message-
From: mj [mailto:li...@merit.unu.edu]
Sent: Sunday 24 September 2017 16:37
To: ceph-users@lists.ceph.com
Subject: Re:
ceph fs authorize cephfs client.bla /bla rw
Will generate a user with these permissions
[client.bla]
caps mds = "allow rw path=/bla"
caps mon = "allow r"
caps osd = "allow rw pool=fs_data"
With those permissions I cannot mount, I get a permission denied, until
I
I had some issues with the iscsi software starting too early; maybe this
can give you some ideas.
systemctl show target.service -p After
mkdir /etc/systemd/system/target.service.d
cat << 'EOF' > /etc/systemd/system/target.service.d/10-waitforrbd.conf
[Unit]
After=systemd-journald.socket
nfs-ganesha-2.5.2-.el7.x86_64.rpm
^
Is this correct?
-Original Message-
From: Marc Roos
Sent: Tuesday 29 August 2017 11:40
To: amaredia; wooertim
Cc: ceph-users
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Ali, Very very nice! I was creating
:29
To: TYLin
Cc: Marc Roos; ceph-us...@ceph.com
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Marc,
These rpms (and debs) are built with the latest ganesha 2.5 stable
release and the latest luminous release on download.ceph.com:
http://download.ceph.com/nfs-ganesha/
I just
I had this also once. If you update all nodes and then systemctl restart
'ceph-osd@*' on all nodes, you should be fine. But first the monitors,
of course.
-Original Message-
From: Thomas Gebhardt [mailto:gebha...@hrz.uni-marburg.de]
Sent: Wednesday 30 August 2017 14:10
To:
Should these messages not be gone in 12.2.0?
2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
Where can I find some examples on creating a snapshot on a directory?
Can I just do mkdir .snaps? I tried with the stock kernel and a 4.12.9-1
http://docs.ceph.com/docs/luminous/dev/cephfs-snapshots/
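For what it's worth, the snapshot directory is `.snap` (without the s); a sketch, assuming the fs is mounted at /mnt/cephfs and snapshots still need to be enabled on this luminous-era cluster:

```shell
# snapshots were an opt-in feature at the time and may need enabling first
ceph mds set allow_new_snaps true --yes-i-really-mean-it
# create and remove a snapshot of a directory with plain mkdir/rmdir
mkdir /mnt/cephfs/mydir/.snap/before-cleanup
rmdir /mnt/cephfs/mydir/.snap/before-cleanup
```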
Did you check this?
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39886.html
-Original Message-
From: Daniel Carrasco [mailto:d.carra...@i2tic.com]
Sent: Tuesday 17 October 2017 17:49
To: ceph-us...@ceph.com
Subject: [ceph-users] OSD are marked as down after jewel ->
What about not using ceph-deploy?
-Original Message-
From: Sean Sullivan [mailto:lookcr...@gmail.com]
Sent: Thursday 19 October 2017 2:28
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Luminous can't seem to provision more than 32 OSDs
per server
I am trying to install Ceph
1. I don’t think an osd should 'crash' in such a situation.
2. How else should I 'rados put' an 8GB file?
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: Monday 13 November 2017 0:12
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users
:
2017-11-10 20:39:31.296101 7f840ad45e40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
Or is that a leftover warning message from an old client?
Kind regards,
Caspar
2017-11-10 21:27 GMT+01:00 Marc Roos <m.r...@f1-outsourcing.eu>:
rom your
ceph.conf and see if that solves it.
Caspar
2017-11-12 15:56 GMT+01:00 Marc Roos <m.r...@f1-outsourcing.eu>:
[@c03 ~]# ceph osd status
2017-11-12 15:54:13.164823 7f478a6ad700 -1 WARNING: the following
dangerous and experimental features are enabled: bl
I was wondering if there are any statistics available that show the
performance increase of doing such things?
-Original Message-
From: German Anders [mailto:gand...@despegar.com]
Sent: Tuesday 28 November 2017 19:34
To: Luis Periquito
Cc: ceph-users
Subject: Re: [ceph-users]
If I am not mistaken, the whole idea with the 3 replicas is that you
have enough copies to recover from a failed osd. In my tests this seems
to go fine automatically. Are you doing something that is not advised?
-Original Message-
From: Gonzalo Aguilar Delgado
osd's are crashing when putting an (8GB) file in an erasure coded pool,
just before finishing. The same osd's are used for replicated pools
rbd/cephfs, and seem to do fine. Did I make some error, or is this a
bug? Looks similar to
Looks similar to
https://www.spinics.net/lists/ceph-devel/msg38685.html
niversity
P / SMS / WA : 081 322 070719
E : iswaradr...@gmail.com / iswaradr...@live.com
On Sat, Nov 4, 2017 at 6:11 PM, Marc Roos <m.r...@f1-outsourcing.eu>
wrote:
What is the new syntax for "ceph osd status" for luminous?
-Original Messag
Keep in mind also if you want to have failover in the future. We were
running a 2nd server and were replicating the raid arrays via DRBD.
Expanding this storage is quite a hassle, compared to just adding a few
osd's.
-Original Message-
From: Oscar Segarra
Very very nice, Thanks! Is there a heavy penalty to pay for enabling
this?
-Original Message-
From: John Spray [mailto:jsp...@redhat.com]
Sent: Monday 13 November 2017 11:48
To: Marc Roos
Cc: iswaradrmwn; ceph-users
Subject: Re: [ceph-users] No ops on some OSD
On Sun, Nov 12
What is the new syntax for "ceph osd status" for luminous?
-Original Message-
From: I Gede Iswara Darmawan [mailto:iswaradr...@gmail.com]
Sent: Thursday 2 November 2017 6:19
To: ceph-users@lists.ceph.com
Subject: [ceph-users] No ops on some OSD
Hello,
I want to ask about my
How/where can I see how e.g. 'profile rbd' is defined?
As in
[client.rbd.client1]
key = xxx==
caps mon = "profile rbd"
caps osd = "profile rbd pool=rbd"
What would be the correct way to convert the xml of rbd-mapped images
to librbd?
I had this:
And for librbd this:
But this will give me a
I would like to store objects with
rados -p ec32 put test2G.img test2G.img
error putting ec32/test2G.img: (27) File too large
Changing the pool application from custom to rgw did not help
I added an erasure k=3,m=2 coded pool on a 3 node test cluster and am
getting these errors.
pg 48.0 is stuck undersized for 23867.00, current state
active+undersized+degraded, last acting [9,13,2147483647,7,2147483647]
pg 48.1 is stuck undersized for 27479.944212, current state
Message-
From: Kevin Hrpcek [mailto:kevin.hrp...@ssec.wisc.edu]
Sent: Thursday 9 November 2017 21:09
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Pool shard/stripe settings for file too large
files?
Marc,
If you're running luminous you may need to increase osd_max_object_size
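Following that hint, a sketch of checking and raising the limit (osd_max_object_size defaults to 128MB in luminous; the 10G value here is just an assumption to fit an 8GB object):

```shell
# show the current per-object size limit on one osd
ceph daemon osd.0 config get osd_max_object_size
# raise it at runtime on all osds; persist it in ceph.conf as well
ceph tell osd.* injectargs '--osd_max_object_size=10737418240'
```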
Do you know of a rados client that uses this? Maybe a simple 'mount' so
I can cp the files onto it?
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: Thursday 9 November 2017 22:01
To: Kevin Hrpcek
Cc: Marc Roos; ceph-users
Subject: Re: [ceph
In a test environment (centos7), on a luminous osd node, with binaries
from
download.ceph.com::ceph/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
I am having these:
Nov 6 17:41:34 c01 kernel: ganesha.nfsd[31113]: segfault at 0 ip
7fa80a151a43 sp 7fa755ffa2f0 error 4 in
Can anyone advise on an erasure pool config to store:
- files between 500MB and 8GB, total 8TB
- just for archiving, not much reading (few files a week)
- hdd pool
- now 3 node cluster (4th coming)
- would like to save on storage space
I was thinking of a profile with jerasure k=3 m=2, but
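A sketch of such a pool (names are made up; note that on a 3-node cluster a k=3,m=2 profile needs crush-failure-domain=osd, since 5 shards cannot be spread over 3 hosts):

```shell
# jerasure k=3,m=2: 5 shards, failure domain osd because there are only 3 hosts
ceph osd erasure-code-profile set archive32 k=3 m=2 crush-failure-domain=osd
ceph osd pool create ec-archive 64 64 erasure archive32
```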
Total size: 51 M
Is this ok [y/d/N]: y
Downloading packages:
Package ceph-common-12.2.2-0.el7.x86_64.rpm is not signed
-Original Message-
From: Rafał Wądołowski [mailto:rwadolow...@cloudferro.com]
Sent: Monday 4 December 2017 14:18
To: ceph-users@lists.ceph.com
Subject:
Hi Giang,
Can I ask if you used the elrepo kernels? Because I tried these, but
they are not booting, because of (I think) the mpt2sas/mpt3sas drivers.
Regards,
Marc
-Original Message-
From: GiangCoi Mr [mailto:ltrgian...@gmail.com]
Sent: Wednesday 25 October 2017 16:11
To:
rom: GiangCoi Mr [mailto:ltrgian...@gmail.com]
Sent: Wednesday 25 October 2017 17:08
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] iSCSI gateway for ceph
Yes, I used elrepo to upgrade the kernel; I can boot and show it, kernel
4.x. What is the problem?
Sent from my iPhone
> On Oct 25, 2017, at
Is it possible to add a longer description to the created snapshot
(other than using the name)?
mds: cephfs-1/1/1 up {0=a=up:rejoin}, 1 up:standby
2018-05-07 11:37:29.006507 7ff32bc69700 1 heartbeat_map is_healthy
'MDSRank' had timed out after 15
2018-05-07 11:37:29.006515 7ff32bc69700 1 mds.beacon.a _send skipping
beacon, heartbeat map not healthy
2018-05-07 11:37:32.943408
And the logs are flooded with messages such as:
May 7 10:47:48 c01 ceph-osd: 2018-05-07 10:47:48.201963 7f7d94afc700 -1
osd.7 19394 heartbeat_check: no reply from 192.168.10.112:6804 osd.10
ever on either front or back, first ping sent 2018
-05-07 10:47:20.970982 (cutoff 2018-05-07 10:47:28.201961)
May
I have a mds.a and a mds.c; if I stop mds.a, it looks like the osds are
going down again. If I keep the mds in rejoin, the osd's stay up.
-Original Message-
From: Marc Roos
Sent: Monday 7 May 2018 6:51
To: ceph-users
Subject: [ceph-users] Luminous update 12.2.4 -> 12.2.5 mds 'stuck' in
rejoin
This 'juggle keys' is a bit cryptic to me. If I create a subuser it
becomes a swift user, not? So how can that have access to s3 or be used
in an s3 client? In the client I have to put the access and secret key,
but for the subuser I only have a secret key.
Is this multi tenant basically only
Should I then start increasing the mds_cache_memory_limit?
PID=3909094 - Swap used: 8292 - (ceph-mgr )
PID=3899780 - Swap used: 13948 - (ceph-osd )
PID=3899840 - Swap used: 15468 - (ceph-osd )
PID=3899843 - Swap used: 19396 - (ceph-osd )
PID=3899316 - Swap used: 22452 - (ceph-mon )
PID=1159 -
What would be the best way to implement a situation where I would like
to archive some files in, let's say, an archive bucket, using a
read/write account for putting the files, and then give other users only
read access to this bucket so they can download something if necessary?
All using some
Why do I sometimes have md5 hashes with a postfix of "-xxx", like:
2018-05-05 13:29 12430875 d3ccb6a9d2d3bc85dbe9de519c2be8e1-791
s3://test2/test.img
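That suffix pattern matches S3 multipart uploads: the ETag is then the md5 of the concatenated binary md5 digests of the individual parts, with "-<part count>" appended, so "-791" indicates 791 uploaded parts rather than a corrupt hash. A sketch of the computation (a hypothetical two-part object; requires xxd):

```shell
# two made-up parts of a multipart upload
printf 'part-one' > p1
printf 'part-two' > p2
# md5 each part, convert the hex digests back to binary, md5 the concatenation,
# then append "-<number of parts>"
etag=$( { md5sum p1 | cut -d' ' -f1; md5sum p2 | cut -d' ' -f1; } \
        | xxd -r -p | md5sum | cut -d' ' -f1 )
echo "${etag}-2"
```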
Looks nice.
- I would rather have some dashboards with collectd/influxdb.
- Take into account bigger tv/screens, e.g. 65" uhd. I am putting more
stats on them than when viewing them locally in a web browser.
- What is considered most important to have on your ceph dashboard? As
a newbie I find it
Thanks Paul for the explanation, sounds very logical now.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Wednesday 25 April 2018 20:28
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] ceph osd reweight (doing -1 or actually
-0.0001)
Hi