Hi,
CentOS 7 QEMU out of the box does not support RBD.
I had to build the package with RBD support manually, with %define rhev 1
in the qemu-kvm spec file. I also had to salvage some files from the src.rpm,
as they were missing from the CentOS git.
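For reference, the rebuild itself is a one-liner once you have the sources (the src.rpm name here is just an example):

# rebuild the source RPM with the RHEV conditional enabled
rpmbuild --rebuild --define 'rhev 1' qemu-kvm-*.el7.src.rpm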
On 2014.10.04 11:31, Ignazio Cassano wrote:
Hi all,
I'd
hi,
Ubuntu 14.04 currently ships Ceph 0.79. After the Firefly release, the Ubuntu
maintainer will update the Ceph version in Ubuntu's repos.
On 2014.04.30 07:08, Kenneth wrote:
The latest Ceph release is Firefly v0.80, right? Or is it still in beta?
And Ubuntu is on 14.04.
Will I be able to install ceph 0.80
On 2014.05.07 20:28, *sm1Ly wrote:
I managed to deploy my cluster with these commands.
mkdir clustername
cd clustername
ceph-deploy install mon1 mon2 mon3 mds1 mds2 mds3 osd200
ceph-deploy new mon1 mon2 mon3
ceph-deploy mon create mon1 mon2 mon3
ceph-deploy gatherkeys mon1 mon2
hi,
I am not sure about your link, but I use: http://ceph.com/rpm-firefly/
reference: http://ceph.com/docs/master/install/get-packages/
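For a yum-based distro this boils down to a repo file along these lines (paths per the get-packages doc; adjust el6/el7 for your release):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF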
On 2014.05.08 19:32, Shawn Edwards wrote:
The links on the download page for 0.80 still show the 0.72 bins. Did
the 0.80 binaries get deployed yet?
I'm
hi,
Trusty will include Ceph in the usual repos. I am tracking
http://packages.ubuntu.com/trusty/ceph and
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1278466 for the release
On 2014.05.08 23:45, Michael wrote:
Hi,
Have these been missed or have they been held back for a specific reason?
On 2014.05.22 19:55, Gregory Farnum wrote:
On Thu, May 22, 2014 at 4:09 AM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
- Message from Gregory Farnum g...@inktank.com -
Date: Wed, 21 May 2014 15:46:17 -0700
From: Gregory Farnum g...@inktank.com
Subject: Re:
On 2014.06.23 10:01, Udo Lembke wrote:
Hi,
AFAIK "ceph osd down osd.29" should mark osd.29 as down.
But what should I do if this doesn't happen?
I got following:
root@ceph-02:~# ceph osd down osd.29
marked down osd.29.
root@ceph-02:~# ceph osd tree
2014-06-23 08:51:00.588042 7f15747f5700
Well, at least for me it is live-updateable (0.80.1). It may be that
during recovery the OSDs are currently backfilling other PGs, so stats are
not updated (because no PG attempted a backfill after the setting change).
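For reference, this is how such a setting can be checked and changed at runtime (osd_max_backfills is only an example option):

# read the current value from a local daemon
ceph daemon osd.29 config get osd_max_backfills
# push a new value to all OSDs without restarting them
ceph tell osd.* injectargs '--osd-max-backfills 2'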
On 2014.06.30 18:31, Gregory Farnum wrote:
It looks like that value isn't
On 2014.03.05 13:23, Georgios Dimitrakakis wrote:
Actually there are two monitors (my bad in the previous e-mail).
One on the MASTER and one on the CLIENT.
The monitor on the CLIENT is failing with the following:
2014-03-05 13:08:38.821135 7f76ba82b700 1
mon.client1@0(leader).paxos(paxos active
On 12/23/14 12:57, René Gallati wrote:
Hello,
so I upgraded my cluster from 89 to 90 and now I get:
~# ceph health
HEALTH_WARN too many PGs per OSD (864 > max 300)
That is a new one. I had too few but never too many. Is this a problem
that needs attention, or is it ignorable? Or is there even a
I think there will be no big scrub, as there are limits on the maximum
number of scrubs at a time.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
If we take "osd max scrubs", which is 1 by default, then you will not get
more than 1 scrub per OSD at a time.
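For example, to verify or set it at runtime:

# current value on one OSD (run on the node hosting osd.0)
ceph daemon osd.0 config get osd_max_scrubs
# keep the default of one concurrent scrub per OSD
ceph tell osd.* injectargs '--osd-max-scrubs 1'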
I couldn't quickly find if
Henrik Korkuc <li...@kirneh.eu> wrote:
I think there will be no big scrub, as there are limits on the
maximum number of scrubs at a time.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
If we take "osd max scrubs", which is 1 by default
On 3/10/15 11:06, Mateusz Skała wrote:
Hi,
Something is wrong with the free space in my cluster. In a cluster with
10 OSDs (5*1TB + 5*2TB), 'ceph -s' shows:
11425 GB used, 2485 GB / 13910 GB avail
But I have only 2 RBD disks in one pool ('rbd'):
rados df
pool name category KB objects
Hello,
can anyone recommend a script/program to periodically synchronize RGW
buckets with Amazon's S3?
--
Sincerely
Henrik
On 3/31/15 11:27, Kai KH Huang wrote:
1) But Ceph says "...You can run a cluster with 1 monitor."
(http://ceph.com/docs/master/rados/operations/add-or-rm-mons/), so I assume it should work.
And split-brain is not my current concern
The point is that you must have a majority of monitors up: with 3 monitors
you can lose one, but with 2 monitors losing either one breaks quorum.
* In one
check firewall rules, network connectivity.
Can all nodes and clients reach each other? Can you telnet to the OSD ports
(note that multiple OSDs may listen on different ports)?
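For example (hostnames are hypothetical; OSDs bind to ports in the 6800-7300 range by default):

# monitor port
nc -zv mon1 6789
# one of the OSD ports
nc -zv osdnode1 6800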
On 3/31/15 8:44, Tyler Bishop wrote:
I have this ceph node that will correctly recover into my ceph pool
and
I didn't have a need for this kind of setup, but as you already need an
HTTP server (Apache, nginx, etc.) to proxy requests to RGW, you could
set up all the domains on it and use only one when forwarding.
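A minimal sketch with nginx (domain names and the RGW port are assumptions):

cat > /etc/nginx/conf.d/rgw.conf <<'EOF'
server {
    listen 80;
    # all public domains terminate here
    server_name s3.example.com files.example.com;
    location / {
        # forward everything to a single local RGW instance
        proxy_pass http://127.0.0.1:7480;
        proxy_set_header Host $host;
    }
}
EOF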
On 2/21/15 1:58, Shinji Nakamoto wrote:
We have multiple interfaces on our Rados gateway node,
Hey,
as Debian Jessie has been released for some time now, I'd like to ask: are
there any plans to build newer Ceph packages for it?
Hey,
I am having problems too - non-ceph dependencies cannot be satisfied
(newer package versions are required than exist in the distro):
# aptitude install ceph
The following NEW packages will be installed:
libboost-program-options1.55.0{a} libboost-system1.55.0{a}
libboost-thread1.55.0{a}
On 6/11/15 12:21, Jan Schermer wrote:
Hi,
hoping someone can point me in the right direction.
Some of my OSDs have a larger CPU usage (and ops latencies) than others. If I
restart the OSD everything runs nicely for some time, then it creeps up.
1) most of my OSDs have ~40% CPU (core) usage
Can you paste dmesg and system logs? I am using a 3-node OCFS2 setup with
RBD and had no problems.
On 15-10-23 08:40, gjprabu wrote:
Hi Frederic,
Can you give me a solution? We are spending a lot of time trying to
solve this issue.
Regards
Prabu
On Thu, 15 Oct 2015 17:14:13 +0530
On 15-09-17 18:59, wikison wrote:
Is there any detailed manual deployment document? I downloaded the
source and built Ceph, then installed Ceph on 7 computers. I used
three as monitors and four as OSDs. I followed the official document on
ceph.com. But it didn't work and it seemed to be
On 15-11-20 17:14, Kenneth Waegeman wrote:
<...>
* systemctl start ceph.target does not start my OSDs... I have to
start them all with systemctl start ceph-osd@...
* systemctl restart ceph.target restarts the running OSDs, but not the
OSDs that are not yet running.
* systemctl stop ceph.target
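For what it's worth, individual OSDs can be managed and enabled via their instance units, so that ceph.target picks them up (the OSD id is hypothetical):

systemctl enable ceph-osd@3
systemctl start ceph-osd@3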
On 16-01-11 04:10, Rafael Lopez wrote:
Thanks for the replies guys.
@Steve, even when you remove due to failing, have you noticed that the
cluster rebalances twice using the documented steps? You may not if
you don't wait for the initial recovery after 'ceph osd out'. If you
do 'ceph osd
On 16-05-18 14:23, Sage Weil wrote:
Currently, after an OSD has been down for 5 minutes, we mark the OSD
"out", which redistributes the data to other OSDs in the cluster. If the
OSD comes back up, it marks the OSD back in (with the same reweight value,
usually 1.0).
The good thing about marking
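As a side note, the automatic out-marking can be suppressed while you know an OSD will return:

# prevent down OSDs from being marked out during maintenance
ceph osd set noout
# ...and re-enable the normal behaviour afterwards
ceph osd unset noout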
Mons generate these bootstrap keys. You can find them in
/var/lib/ceph/bootstrap-*/ceph.keyring
On pre-Infernalis they were created automagically (I guess by init).
Infernalis and Jewel have the ceph-create-keys@.service systemd job for that.
Just place that dir with the file in the same location on
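For example, to seed a new host with the OSD bootstrap key (hostname is hypothetical):

scp -r /var/lib/ceph/bootstrap-osd newhost:/var/lib/ceph/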
On 16-05-02 02:14, Stuart Longland wrote:
On 02/05/16 00:32, Henrik Korkuc wrote:
Mons generate these bootstrap keys. You can find them in
/var/lib/ceph/bootstrap-*/ceph.keyring
On pre-Infernalis they were created automagically (I guess by init).
Infernalis and Jewel have ceph-create-keys
Hey,
I am wondering how people are monitoring/graphing slow requests ("oldest
blocked for > xxx secs") on their clusters? I didn't find related
counters to graph, so it looks like the mon logs have to be parsed for that
info? Maybe someone has other ideas?
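Roughly what I have now, for the record (the log path may differ per setup):

# currently blocked/slow requests
ceph health detail | grep -i 'blocked\|slow'
# historical occurrences in the cluster log
grep 'slow request' /var/log/ceph/ceph.log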
On 16-07-25 10:55, 朱 彤 wrote:
Hi all,
I'm looking for a method to move a Ceph cluster.
Now the cluster is located in network1, which has hosts A, B, C...
And the target is to move it to network2, which has hosts a, b, c...
What I can think of is adding hosts a, b, c into the current
You can do it with ceph-disk prepare --bluestore /dev/sdX
Just keep in mind that it is very unstable and will result in corruption
or other issues.
On 16-07-29 04:36, m13913886...@yahoo.com wrote:
Hello cephers, I deployed a cluster from the ceph-10.2.2 source.
Since I am doing a source deployment
Hey,
I noticed that the RGW lifecycle feature got back into master almost a month
ago. Is there any chance that it will be backported to Jewel? If not,
are you aware of any incompatibilities with the Jewel code that would
prevent/complicate a custom build with that code?
On 16-07-19 11:44, M Ranga Swami Reddy wrote:
Hi,
Using a Ceph cluster with 100+ OSDs, the cluster is filled to 60% with data.
One of the OSDs is 95% full.
If an OSD is 95% full, does it impact any storage operation? Does this
impact VMs/instances?
Yes, one full OSD will impact the whole cluster. It will
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition size: 976754385 sectors (3.6 TiB)
On 16-07-18 10:53, Henrik Korkuc wrote:
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition
On 16-07-18 11:11, Henrik Korkuc wrote:
On 16-07-18 10:53, Henrik Korkuc wrote:
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last
On 16-07-18 13:37, Eduard Ahmatgareev wrote:
Hi guys.
Could you help me with a small problem?
We have a new installation of Ceph version 10.2.2, and we have an
interesting problem with auto-mounting OSDs after rebooting a storage
node. We are forced to mount the OSDs manually after reboot, and then they work fine.
On 16-07-22 13:33, Andrei Mikhailovsky wrote:
Hello
We are planning to make changes to our IT infrastructure, and as a
result the FQDNs and IPs of the Ceph cluster will change. Could someone
suggest the best way of dealing with this, to make sure we have
minimal Ceph downtime?
Can old and
I am not sure about the "incomplete" part off the top of my head, but you can
try setting min_size to 1 for pools to reactivate some PGs, if they are
down/inactive due to missing replicas.
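For example (the pool name is a placeholder; revert once recovery finishes):

ceph osd pool set rbd min_size 1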
On 17-01-31 10:24, José M. Martín wrote:
# ceph -s
cluster 29a91870-2ed2-40dc-969e-07b22f37928b
health
On 17-02-01 10:55, Michael Hartz wrote:
I am running Ceph as part of a Proxmox virtualization cluster, which is doing
great.
However, for monitoring purposes I would like to periodically check with 'ceph
health' as a non-root user.
This fails with the following message:
su -c 'ceph health' -s
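One way around this is a dedicated read-only client key; "client.health" below is just an example name:

ceph auth get-or-create client.health mon 'allow r' -o /etc/ceph/ceph.client.health.keyring
chmod 644 /etc/ceph/ceph.client.health.keyring
# then, as the unprivileged user:
ceph --id health health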
just to add to what Pawel said: /etc/logrotate.d/ceph.logrotate
On 17-01-26 09:21, Torsten Casselt wrote:
Hi,
that makes sense. Thanks for the fast answer!
On 26.01.2017 08:04, Paweł Sadowski wrote:
Hi,
6:25 points to the daily cron job; it's probably logrotate trying to force
Ceph to reopen
On 17-02-09 05:09, Sage Weil wrote:
Hello, ceph operators...
Several times in the past we've had to do some on-disk format conversion
during upgrade, which meant that the first time the ceph-osd daemon started
after upgrade it had to spend a few minutes fixing up its on-disk files.
We haven't had
Hey,
I stumbled on a problem where an RGW upload results in a
SignatureDoesNotMatch error when I try uploading a file with '@' or some
other special characters in the name.
Can someone confirm the same issue? I didn't manage to find bug reports about it
Hey,
It is normal for the reweight value to be 1. You can decrease it (with "ceph
osd reweight OSDNUM newweight") or with "ceph osd reweight-by-utilization"
to move some PGs off that OSD.
The thing that usually differs and depends on disk size is the "weight"
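For example (the OSD id is hypothetical):

# temporary override, in the 0..1 range
ceph osd reweight 12 0.95
# the CRUSH weight, normally sized to the disk capacity in TiB
ceph osd crush reweight osd.12 1.81940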
On 16-08-31 22:06, 한승진 wrote:
Hi Cephers!
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
Any idea what is going on here? I get these intermittently, especially with
very large files.
The client is doing RANGE requests on this >51 GB file, incrementally
fetching
On 16-09-05 14:36, Henrik Korkuc wrote:
On 16-02-27 06:09, Yehuda Sadeh-Weinraub wrote:
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines <bhi...@gmail.com> wrote:
Any idea what is going on here? I get these intermittently,
especially with
very large files.
The client is doing RANGE re
On 16-09-14 18:21, Andreas Gerstmayr wrote:
Hello,
I'm currently performing some benchmark tests with our Ceph storage
cluster and trying to find the bottleneck in our system.
I'm writing a random 30GB file with the following command:
$ time fio --name=job1 --rw=write --blocksize=1MB
On 16-09-13 11:13, Ronny Aasen wrote:
I suspect this must be a difficult question, since there have been no
replies on IRC or the mailing list.
Assuming it's impossible to get these OSDs running again:
is there a way to recover objects from the disks? They are mounted
and the data is readable. I
As far as I noticed, after doing zone/region changes you need to run
"radosgw-admin period update --commit" for them to take effect
On 16-09-14 11:22, Ansgar Jazdzewski wrote:
Hi,
I am currently setting up my new test cluster (Jewel) and found out that the
index sharding configuration had changed?
I did so
Looks like my problem is a little different. I am not using v4 signatures, and
object names which fail for you work for me
On 16-08-25 11:52, jan hugo prins wrote:
Could this have something to do with: http://tracker.ceph.com/issues/17076
Jan Hugo Prins
On 08/25/2016 10:34 AM, Henrik Korkuc wrote
Hey,
10.2.3 has been tagged in the jewel branch for more than 5 days already, but
there was no announcement for it yet. Is there any reason for that?
Packages seem to be present too
Hey,
Trying to activate forward mode for a cache pool results in "Error EPERM:
'forward' is not a well-supported cache mode and may corrupt your data.
pass --yes-i-really-mean-it to force."
The change introducing this message was made a few months ago, and I didn't
manage to find the reason for it.
I filed http://tracker.ceph.com/issues/17858 recently; I am seeing this
problem on 10.2.3 ceph-fuse, but maybe the kernel client is affected too.
It is easy to replicate, just do a deep "mkdir -p", e.g. "mkdir -p
1/2/3/4/5/6/7/8/9/0/1/2/3/4/5/6/7/8/9"
On 16-11-11 10:46, Dan van der Ster wrote:
also upgraded the cluster to 10.2.3 from
10.2.2.
Let's hope I only hit a bug and that this bug is now fixed; on the other
hand, I think I also saw the issue on a 10.2.3 node, but I'm not sure.
Jan Hugo
On 10/31/2016 11:41 PM, Henrik Korkuc wrote:
this is normal. You should expect that your
this is normal. You should expect that your disks may get reordered
after reboot. I am not sure about your setup details, but in 10.2.3 udev
should be able to activate your OSDs no matter the naming (there were
some bugs in previous 10.2.x releases)
On 16-10-31 18:32, jan hugo prins wrote:
On 16-10-11 14:30, John Spray wrote:
On Tue, Oct 11, 2016 at 12:00 PM, Henrik Korkuc <li...@kirneh.eu> wrote:
Hey,
After a bright idea to pause a 10.2.2 Ceph cluster for a minute to see if it
would speed up backfill, I managed to corrupt my MDS journal (should that
happen after a cluster pause/u
Hey,
After a bright idea to pause a 10.2.2 Ceph cluster for a minute to see if
it would speed up backfill, I managed to corrupt my MDS journal (should that
happen after a cluster pause/unpause, or is it some sort of bug?). I had
"Overall journal integrity: DAMAGED", etc.
I was following
On 16-10-13 22:46, Chris Murray wrote:
On 13/10/2016 11:49, Henrik Korkuc wrote:
Is apt/dpkg doing something right now? Is the problem repeatable, e.g. by
killing the upgrade and starting it again? Are there any stuck systemctl
processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
On 16-10-13 13
From the status page it seems that Ceph didn't like the networking problems.
May we find out some details about what happened? Underprovisioned servers (RAM
upgrades were in there too)? Too much load on the disks? Something else?
This situation may not be pleasant, but I feel that others can learn from
it to
Is apt/dpkg doing something right now? Is the problem repeatable, e.g. by
killing the upgrade and starting it again? Are there any stuck systemctl
processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
On 16-10-13 13:41, Chris Murray wrote:
On 22/09/2016 15:29, Chris Murray wrote:
Hi all,
Might
On 17-01-02 06:24, Lindsay Mathieson wrote:
Hi all, familiar with ceph but out of touch on cephfs specifics, so
some quick questions:
- cephfs requires an MDS for its metadata (file/dir structures,
attributes, etc.)?
yes
- It's Active/Passive, i.e. only one MDS can be active at a time, with a
On 17-01-04 03:16, Gregory Farnum wrote:
On Fri, Dec 23, 2016 at 12:04 AM, Henrik Korkuc <li...@kirneh.eu> wrote:
Hello,
I wondered if Ceph can emit per-user IO and bandwidth stats (via perf
counters, statsd or in some other way)? I was unable to find such
stats. I know that
Hello,
I wondered if Ceph can emit per-user IO and bandwidth stats (via perf
counters, statsd or in some other way)? I was unable to find
such stats. I know that we can get at least some of these stats from
RGW, but I'd like to have something like that for RBD and CephFS.
Example
On 16-12-23 12:43, Stéphane Klein wrote:
2016-12-23 11:35 GMT+01:00 Wido den Hollander:
> On 23 December 2016 at 10:31, Stéphane Klein wrote:
>
> 2016-12-22 18:09
On 16-12-23 22:14, Kent Borg wrote:
Hello, a newbie here!
Doing some playing with Python and librados, and it is mostly easy to
use, but I am confused about atomic operations. The documentation
isn't clear to me, and Google isn't giving me obvious answers either...
I would like to do some
On 16-12-22 13:20, Stéphane Klein wrote:
2016-12-22 12:18 GMT+01:00 Henrik Korkuc <li...@kirneh.eu>:
On 16-12-22 13:12, Stéphane Klein wrote:
HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs
undersized; recovery 24/70 objects degr
On 16-12-22 13:26, Stéphane Klein wrote:
Hi,
I have:
* 3 mon
* 3 osd
When I shut down one OSD, it works great:
cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
health HEALTH_WARN
43 pgs degraded
43 pgs stuck unclean
43 pgs undersized
recovery
On 16-12-22 13:12, Stéphane Klein wrote:
HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized;
recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 <
min 30); 1/3 in osds are down;
It says 1/3 OSDs are down. By default Ceph pools are set up with size 3.
If your
mon.* and osd.* sections are not mandatory in the config. So unless you want
to set something per daemon, you can skip them completely.
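For reference, a minimal ceph.conf can be as small as this (fsid and addresses are placeholders):

[global]
fsid = 00000000-0000-0000-0000-000000000000
mon initial members = mon1, mon2, mon3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3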
On 17-04-21 19:07, Fabian wrote:
Hi Everyone,
I am playing around a bit with Ceph on a test cluster with 3 servers (each a
MON and an OSD at the same time).
I use some self
On 17-03-14 00:08, John Spray wrote:
On Mon, Mar 13, 2017 at 8:15 PM, Andras Pataki
wrote:
Dear Cephers,
We're using the ceph file system with the fuse client, and lately some of
our processes are getting stuck seemingly waiting for fuse operations. At
the same
On 17-03-08 15:39, Kevin Olbrich wrote:
Hi!
Currently I have a cluster with 6 OSDs (5 hosts, 7TB RAID6 each).
We want to shut down the cluster but it holds some semi-productive VMs
we might or might not need in the future.
To keep them, we would like to shrink our cluster from 6 to 2 OSDs (we
On 17-03-03 12:30, Matteo Dacrema wrote:
Hi All,
I have a production cluster made of 8 nodes, 166 OSDs, and 4 journal SSDs
per node (one SSD for every 5 OSDs), with replica 2, for a total RAW space of 150 TB.
I have a few questions about it:
* Is it critical to have replica 2? Why?
Replica size 3 is highly recommended. I
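For example, on an existing pool (the pool name is a placeholder):

ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2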
Hello,
I have a use case for billions of small files (~1KB) on CephFS, and since in
my experience having billions of objects in a pool is not a very good idea
(ops slow down, large memory usage, etc.), I decided to test CephFS
inline_data. After activating this feature and starting the copy process I
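For context, I enabled the feature with something like the following; the exact syntax differs between releases, so treat it as a sketch:

ceph fs set cephfs inline_data true --yes-i-really-mean-it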
On 17-07-10 08:29, Christian Balzer wrote:
Hello,
so this morning I was greeted with the availability of 10.2.8 for both
Jessie and Stretch (much appreciated), but w/o any announcement here or
updated release notes on the website, etc.
Any reason other than "Friday" (US time) for this?
Christian
On 17-06-23 17:13, Abhishek L wrote:
* *CRUSH weights* can now be optimized to
maintain a *near-perfect distribution of data* across OSDs.
It would be great to get some information on how to use this feature.
Hello,
I have an RGW multisite setup on Jewel, and I would like to turn off data
replication there so that only metadata (users, created buckets, etc.)
is synced, but not the data.
Is it possible to make such a setup?
On 17-05-15 14:49, John Spray wrote:
On Mon, May 15, 2017 at 1:36 PM, Henrik Korkuc <li...@kirneh.eu> wrote:
On 17-05-15 13:40, John Spray wrote:
On Mon, May 15, 2017 at 10:40 AM, Ranjan Ghosh <gh...@pw6.de> wrote:
Hi all,
When I run "ceph daemon mds. session ls" I alw
On Tue, 25 Apr 2017 16:31:44 +0530, Henrik Korkuc
<li...@kirneh.eu> wrote:
On 17-04-25 13:43, gjprabu wrote:
Hi Team,
I am running a CephFS setup with a single MDS. Suppose
in a single-MDS setup the MDS goes down; what will happen to
On 17-04-24 19:38, Ashley Merrick wrote:
Hey,
Quick question; I have tried a few Google searches but found nothing concrete.
I am running KVM VMs using KRBD. If I add and remove Ceph mons, are the
running VMs updated with this information, or do I need to reboot the VMs for
them to be
On 17-04-25 13:43, gjprabu wrote:
Hi Team,
I am running a CephFS setup with a single MDS. Suppose in
a single-MDS setup the MDS goes down; what will happen to the data? Is it
advisable to run multiple MDSes?
MDS data is stored in the Ceph cluster itself. After an MDS failure you can start
another MDS
On 17-08-16 19:40, John Spray wrote:
On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc <li...@kirneh.eu> wrote:
Hello,
I have use case for billions of small files (~1KB) on CephFS and as to my
experience having billions of objects in a pool is not very good idea (ops
slow down, large memory
On 17-05-15 13:40, John Spray wrote:
On Mon, May 15, 2017 at 10:40 AM, Ranjan Ghosh wrote:
Hi all,
When I run "ceph daemon mds. session ls" I always get a fairly large
number for num_caps (200.000). Is this normal? I thought caps are something
like open/locked files, meaning a client
On 17-09-20 08:06, nokia ceph wrote:
Hello,
Env: RHEL 7.2, 3.10.0-327.el7.x86_64, EC 4+1, bluestore
We are writing to Ceph via the librados C API. Testing with rados shows no
issues.
We tested the same with Jewel/Kraken without any issue. Need your view on
how to debug this issue?
maybe similar
On 17-10-06 11:25, ulem...@polarzone.de wrote:
Hi,
again an update is available without release notes...
http://ceph.com/releases/v10-2-10-jewel-released/ isn't found.
No announcement on the mailing list (perhaps I missed something).
While I do not see the v10.2.10 tag in the repo yet, it looks like
On 17-09-06 07:33, Ashley Merrick wrote:
Hello,
I have recently upgraded a cluster to Luminous (running Proxmox); at the
same time I have upgraded the compute cluster to 5.x, meaning we now
run the latest kernel version (Linux 4.10.15-1). Looking to do the
following:
ceph osd
is new enough.
Well, it looks like the docs may need to be revisited, as I was unable to use
kcephfs on 4.9 with Luminous before downgrading tunables; not sure about
4.10.
,Ashley
From: Henrik Korkuc <li...@kirneh.eu>
On 17-09-06 16:24, Jean-Francois Nadeau wrote:
Hi,
On a 4-node / 48-OSD Luminous cluster I'm giving RBD on EC
pools + Bluestore a try.
Setup went fine, but after a few bench runs several OSDs are failing and
many won't even restart.
ceph osd erasure-code-profile set myprofile \
k=2\
On 17-09-06 18:23, Sage Weil wrote:
Hi everyone,
Traditionally, we have done a major named "stable" release twice a year,
and every other such release has been an "LTS" release, with fixes
backported for 1-2 years.
With kraken and luminous we missed our schedule by a lot: instead of
releasing
On 17-09-07 02:42, Deepak Naidu wrote:
Hope collective feedback helps. So here's one.
- Not a lot of people seem to run the "odd" releases (e.g., infernalis, kraken).
I think the more obvious reason is that companies/users wanting to use Ceph
will stick with LTS versions, as it models the 3yr
On 17-09-27 14:57, Josef Zelenka wrote:
Hi,
we are currently working on a Ceph solution for one of our customers.
They run a file hosting service and need to store approximately 100
million pictures (thumbnails). Their current code works with FTP,
which they use as storage. We thought that
Hello,
I tried creating tiering with EC pools (an EC pool as a cache for another
EC pool) and ended up with "Error ENOTSUP: tier pool 'ecpool' is an ec
pool, which cannot be a tier". With overwrite support on EC pools, and
direct support for it by RBD and CephFS, it may be worth having tiering using EC
What is the output of "netstat -anp | grep 7000"?
On 17-09-05 14:19, 许雪寒 wrote:
Sorry for the mis-formatting; here is the right one:
Sep 5 19:01:56 rg1-ceph7 ceph-mgr: File
"/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in
_start_http_thread
Sep 5 19:01:56
radosgw-admin key create --key-type s3 --uid user_uuid
--access-key=some_access_key --secret-key=some_secret_key
or you can instruct it to generate the access/secret pair
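For example, to add a second S3 key pair with generated credentials to the same uid:

radosgw-admin key create --key-type s3 --uid user_uuid --gen-access-key --gen-secret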
On 17-11-23 01:25, Daniel Picolli Biazus wrote:
Hey Guys,
Is it possible generating two keys in one single user/uid on rados S3 ?
On 17-11-03 09:29, Andrey Klimentyev wrote:
Thanks for the swift response.
We are using 10.2.10.
They all share the same set of permissions (and one key, too). I haven't
found anything incriminating in the logs, either.
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow