From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 13:43
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com mirror machine
On 03/09/2015 02
From: Wido den Hollander [w...@42on.com]
Sent: 09 March 2015 12:15
To: HEWLETT, Paul (Paul)** CTR **; ceph-users
Subject: Re: [ceph-users] New eu.ceph.com
is dedicated for running eu.ceph.com, so hopefully
rsync won't fail anymore.
On 13-03-15 07:44, Sreenath BH wrote:
When an RBD volume is deleted, does Ceph fill the used 4 MB chunks with zeros?
No, it does not. It simply deletes the RADOS objects in the background.
Wido
thanks,
Sreenath
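For anyone who wants to watch this happen, one rough way to check (names are
just examples, assuming a format-2 image in the 'rbd' pool):

rbd info rbd/myimage | grep block_name_prefix   # note the prefix, e.g. rbd_data.<id>
rados -p rbd ls | grep -c rbd_data.<id>         # number of objects backing the image
rbd rm rbd/myimage
rados -p rbd ls | grep -c rbd_data.<id>         # count drops as the objects are removed in the background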
:16, Andrija Panic <andrija.pa...@gmail.com> wrote:
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander <w...@42on.com> wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
Scrubbing has been off for a while now.
Now that we have upgraded the hardware/networking/SSDs, I would like to
re-activate scrubbing by unsetting these flags.
Since I now
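For reference, once the cluster can keep up again, re-enabling scrubbing is
just a matter of unsetting the flags, e.g.:

ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph -s    # the noscrub/nodeep-scrub flags should disappear from the status output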
On 12-03-15 13:00, Lindsay Mathieson wrote:
On Thu, 12 Mar 2015 12:49:51 PM Vieresjoki, Juha wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual
= 0.87-1precise
Thank you for your help
On 26-03-15 12:04, Stefan Priebe - Profihost AG wrote:
Hi Wido,
Am 26.03.2015 um 11:59 schrieb Wido den Hollander:
On 26-03-15 11:52, Stefan Priebe - Profihost AG wrote:
Hi,
in the past I read pretty often that it's not a good idea to run ceph
and qemu / the hypervisors on the same nodes
which might give you problems.
Stefan
On 25-02-15 20:31, Sage Weil wrote:
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW frontend instead of the current
apache + mod-fastcgi or mod-proxy-fcgi approach. Supported here means
both the primary platform the upstream
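For context, switching a gateway to civetweb is a small ceph.conf change on
the RGW host; the section name and port below are just an example:

[client.rgw.gateway]
rgw frontends = "civetweb port=7480"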
On 04/22/2015 03:20 PM, Florian Haas wrote:
On Wed, Apr 22, 2015 at 1:02 PM, Wido den Hollander w...@42on.com wrote:
On 04/22/2015 12:07 PM, Florian Haas wrote:
Hi everyone,
I don't think this has been posted to this list before, so just
writing it up so it ends up in the archives.
tl;dr
On 22 Apr 2015, at 16:54, 10 minus <t10te...@gmail.com> wrote:
Hi,
Is there a recommended way of powering down a ceph cluster and bringing it
back up ?
I have looked thru the docs and cannot find anything wrt it.
Best way would be:
- Stop all client I/O
-
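A rough sketch of a commonly used power-down/power-up sequence (assuming
systemd-managed daemons; adapt to your init system):

# power down
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
systemctl stop ceph-osd.target    # on each OSD host
systemctl stop ceph-mon.target    # on each monitor host, last
# power up: start the mons first, then the OSDs, then unset the flags
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout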
On 04/22/2015 03:38 PM, Wido den Hollander wrote:
On 04/22/2015 03:20 PM, Florian Haas wrote:
On Wed, Apr 22, 2015 at 1:02 PM, Wido den Hollander w...@42on.com wrote:
On 04/22/2015 12:07 PM, Florian Haas wrote:
Hi everyone,
I don't think this has been posted to this list before, so just
I've used that a couple of times.
Wido
Thanks
libvirt (1.2.8-16.el7) isn't built with RBD storage
pool support, which is required by CloudStack.
Any ideas? Would be great to have. Librados and librbd are available, so
that's not the issue.
On 04/29/2015 12:34 PM, Wido den Hollander wrote:
Hi,
While working with some CentOS machines I found out that Libvirt
currently is not built with RBD storage pool support.
While that support has been upstream for a very long time and enabled in
Ubuntu as well, I was wondering if anybody
would like to see this enabled in both RHEL and CentOS.
Wido
Thanks Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
den Hollander
Sent: Wednesday, April 29, 2015 12:22 PM
To: ceph-users@lists.ceph.com
Subject: Re
On 01-05-15 11:42, Nick Fisk wrote:
Yeah, that’s your problem, doing a single thread rsync when you have
quite poor write latency will not be quick. SSD journals should give you
a fair performance boost, otherwise you need to coalesce the writes at
the client so that Ceph is given bigger IOs
-7-1-with-rbd-storage-pool-support/
The specfile shows that since Fedora 16 it's enabled, but for RHEL and
CentOS it is disabled by default. I would like to see that enabled as well.
On Wed, Apr 29, 2015 at 4:34 AM, Wido den Hollander w...@42on.com wrote:
Hi,
While working with some CentOS
is that the repo doesn't include Hammer. Is there someone who
can get that added to the mirror?
thanks very much
Paul
On 16-04-15 19:31, Ferber, Dan wrote:
Thanks for working on this Patrick. I have looked for a mirror that I can
point all the ceph.com references to in
/usr/lib/python2.6/site-packages/ceph_deploy/hosts/centos/install.py, so
that I can get ceph-deploy to work.
I tried eu.ceph.com but it
this by fetching your crushmap from the cluster and performing
tests on it with 'crushtool'.
That will tell you what the differences will be between straw and straw2.
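A sketch of how such an offline comparison could be done (the sed edit and
file names are only illustrative):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
sed 's/alg straw$/alg straw2/' crush.txt > crush-straw2.txt
crushtool -c crush-straw2.txt -o crush-straw2.bin
crushtool -i crush.bin --test --show-mappings --rule 0 --num-rep 3 > before.txt
crushtool -i crush-straw2.bin --test --show-mappings --rule 0 --num-rep 3 > after.txt
diff before.txt after.txt | wc -l    # rough indication of how many mappings would move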
, and to monitor that value. Any idea on how to do it?
Thank you.
Thanks! I bought icecream for the whole office since the sun was shining :)
On 1 Aug 2015, at 00:03, Mark Nelson <mnel...@redhat.com> wrote:
Most folks have probably either already left or are on their way out the door
late on a Friday, but I just wanted to say
On 18-08-15 12:25, Benedikt Fraunhofer wrote:
Hi Nick,
did you do anything fancy to get to ~90MB/s in the first place?
I'm stuck at ~30MB/s reading cold data. Single-threaded writes are
quite speedy, around 600MB/s.
radosgw for cold data is around 90MB/s, which is imho limited by
On 18-08-15 14:13, Erik McCormick wrote:
I've got a custom named cluster integrated with Openstack (Juno) and
didn't run into any hard-coded name issues that I can recall. Where are
you seeing that?
As to the name change itself, I think it's really just a label applying
to a configuration
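For what it's worth, a custom cluster name mostly means every tool has to be
told about it explicitly; e.g., assuming a cluster named 'mycluster':

# configuration is read from /etc/ceph/mycluster.conf instead of /etc/ceph/ceph.conf
ceph --cluster mycluster status
rbd --cluster mycluster ls images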
On 18 Aug 2015, at 18:15, Jan Schermer <j...@schermer.cz> wrote:
On 18 Aug 2015, at 17:57, Björn Lässig b.laes...@pengutronix.de wrote:
On 08/18/2015 04:32 PM, Jan Schermer wrote:
Should ceph care about what scope the address is in? We don't specify it
for
On 03-08-15 22:25, Samuel Just wrote:
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to recompile code using
the librados C++ libraries after upgrading the
On 04-08-15 16:39, Daniel Marks wrote:
Hi all,
I accidentally deleted a ceph pool while there was still a rados block device
mapped on a client. If I try to unmap the device with "rbd unmap" the command
simply hangs. I can't get rid of the device...
We are on:
Ubuntu 14.04
Client
On 28-07-15 16:53, Noah Mehl wrote:
When we update the following in ceph.conf:
[osd]
osd_recovery_max_active = 1
osd_max_backfills = 1
How do we make sure it takes effect? Do we have to restart all of the
ceph osd’s and mon’s?
On a client with client.admin keyring you execute:
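A typical way to apply such settings at runtime, without restarting the
daemons, is injectargs; a sketch mirroring the values above:

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
# keep the values in ceph.conf as well so they survive daemon restarts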
Hi,
One of the first things I want to do as the Ceph User Committee is set
up a proper mirror system for Ceph.
Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks
Matthew!), but this isn't the way I want to see it.
I want to set up a series of localized mirrors from which you can
-users on behalf of Wido den Hollander
ceph-users-boun...@lists.ceph.com on behalf of w...@42on.com wrote:
Hi,
One of the first things I want to do as the Ceph User Committee is set
up a proper mirror system for Ceph.
Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks
Matthew
On 06-08-15 10:16, Hector Martin wrote:
We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools:
- 3 replicated pools (3x)
- 1 RS pool (5+2, size 7)
The docs say:
http://ceph.com/docs/master/rados/operations/placement-groups/
Between 10 and 50 OSDs set pg_num to 4096
Which is what we
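For reference, the commonly cited rule of thumb is roughly: total PGs across
all pools ≈ (number of OSDs × 100) / replica count, rounded to a power of two.
With 48 OSDs and 3x replication that works out to 48 × 100 / 3 ≈ 1600, i.e. 2048.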
On 14-08-15 14:30, Marcin Przyczyna wrote:
Hello,
this is my first posting to the ceph-users mailing list,
and because I am also new to this technology, please
be patient with me.
A description of the problem I am stuck on follows:
3 Monitors are up and running, one of them
is leader, the two are
On 27-07-15 14:21, Jan Schermer wrote:
Hi!
The /cgroup/* mount point is probably a RHEL6 thing, recent distributions
seem to use /sys/fs/cgroup like in your case (maybe because of systemd?). On
RHEL 6 the mount points are configured in /etc/cgconfig.conf and /cgroup is
the default.
I
On 27-07-15 14:56, Dan van der Ster wrote:
On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote:
I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This
is a +/- 20PB Ceph cluster and I'm trying to see how much we would
benefit from it.
Cool. How many
NUMA nodes indeed.
Wido
Jan
On 27 Jul 2015, at 15:21, Wido den Hollander w...@42on.com wrote:
On 27-07-15 14:56, Dan van der Ster wrote:
On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote:
I'm testing with it on 48-core, 256GB machines with 90 OSDs each
On 13-07-15 14:07, alberto ayllon wrote:
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to find the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
What does 'ceph osd tree' tell you?
It seems there is something wrong with your
On 13-07-15 12:25, Kostis Fardelas wrote:
Hello,
it seems that new packages for firefly have been uploaded to the repo.
However, I can't find any details in Ceph Release notes. There is only
one thread in ceph-devel [1], but it is not clear what this new
version is about. Is it safe to upgrade
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to find the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
What does 'ceph osd tree' tell you?
It seems there is something wrong with your CRUSHMap.
Wido
osdstat  kbused  kbavail  kb  hb in  hb out
Hi,
On 14-07-15 11:05, Mingfai wrote:
hi,
does anyone know who is maintaining rados-java and performing releases to
Maven Central? In May, there was a release to Maven Central [1],
but the release version is not based on the latest code base from:
https://github.com/ceph/rados-java
I
Hi,
Currently tracker.ceph.com doesn't have SSL enabled.
Every time I log in I'm sending my password over plain text which I'd
rather not.
Can we get SSL enabled on tracker.ceph.com?
And while we are at it, can we enable IPv6 as well? :)
On 07/15/2015 05:12 AM, Ken Dreyer wrote:
On 07/14/2015 04:14 PM, Wido den Hollander wrote:
Hi,
Currently tracker.ceph.com doesn't have SSL enabled.
Every time I log in I'm sending my password over plain text which I'd
rather not.
Can we get SSL enabled on tracker.ceph.com?
And while we
On 16-07-15 15:52, Simon Murray wrote:
I've spotted an issue with radosgw on Giant and I'm hoping someone can
shed any light on if it's known or been fixed. Google and #ceph weren't
particularly useful sadly.
Long story short when probing S3's admin API to get user quotas it's
returning
] ? flush_kthread_worker+0xb0/0xb0
Thanks,
Jeyaganesh.
and publish a new version with various fixes in it.
Bump the version number and that should then be uploaded to the repo so
that the binary matches the source.
Wido
Best regards,
Laszlo
On Tue, Jul 14, 2015 at 11:58 AM, Wido den Hollander <w...@42on.com> wrote:
Hi
://github.com/netskin/ceph-ruby
The last commit for desperados was in March 2013 and ceph-ruby in April
2015.
Anybody out there using Ruby bindings? If so, which one and what are the
experiences?
be worth looking at.
I'll give the current bindings a try btw!
Best
Corin
On 13.07.2015 at 21:24, Wido den Hollander wrote:
Hi,
I have an Ruby application which currently talks S3, but I want to have
the application talk native RADOS.
Now looking online I found various Ruby bindings
On 13-07-15 09:13, Abhishek Varshney wrote:
Hi Peter and Nigel,
I have tried /etc/hosts and it works perfectly fine! But I am looking
for an alternative (if any) to do away completely with hostnames and
just use IP addresses instead.
It's just that ceph-deploy wants DNS, but if you go
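If you do go the /etc/hosts route, one entry per node on the admin and target
machines is all ceph-deploy needs; the hostnames and addresses below are made up:

# /etc/hosts
10.0.0.11  ceph-node1
10.0.0.12  ceph-node2
10.0.0.13  ceph-node3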
On 23-10-15 14:58, Jon Heese wrote:
> Hello,
>
>
>
> We have two separate networks in our Ceph cluster design:
>
>
>
> 10.197.5.0/24 - The "front end" network, "skinny pipe", all 1Gbe,
> intended to be a management or control plane network
>
> 10.174.1.0/24 - The "back end" network,
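In ceph.conf terms those two subnets would typically map onto the public and
cluster network options, e.g.:

[global]
public network = 10.197.5.0/24
cluster network = 10.174.1.0/24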
On 10/21/2015 03:30 PM, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>>>> Hi,
>>>>
>>>> In the "newstore direction"
sume only the kernel resident RBD module matters.
>
> Any thoughts or pointers appreciated.
>
> ~jpr
On 10/21/2015 11:25 AM, Jan Schermer wrote:
>
>> On 21 Oct 2015, at 09:11, Wido den Hollander <w...@42on.com> wrote:
>>
>> On 10/20/2015 09:45 PM, Martin Millnert wrote:
>>> The thing that worries me with your next-gen design (actually your current
On 27-10-15 09:51, Björn Lässig wrote:
> Hi,
>
> after having some problems with ipv6 and download.ceph.com, i made a
> mirror (debian-hammer only) for my ipv6-only cluster.
>
I see you are from Germany, you can also sync from eu.ceph.com
> Unfortunately after the release of 0.94.5 the rsync
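Assuming the mirror exposes an rsync module, a periodic sync might look
something like this (module name and local path are guesses):

rsync -avrt --delete rsync://eu.ceph.com/ceph/ /srv/mirror/ceph/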
3 is safe.
2 replicas isn't safe, no matter how big or small the cluster is. With
disks becoming larger recovery times will grow. In that window you don't
want to run on a single replica.
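Concretely, that means keeping pools at size 3 with min_size 2, e.g.:

ceph osd pool set <poolname> size 3
ceph osd pool set <poolname> min_size 2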
> thanks.
On 21-10-15 15:30, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>>>> Hi,
>>>>
>>>> In the "newstore direction"
On 26-10-15 14:29, Matteo Dacrema wrote:
> Hi Nick,
>
>
>
> I also tried to increase iodepth but nothing has changed.
>
>
>
> With iostat I noticed that the disk is fully utilized and writes per
> second from iostat match the fio output.
>
Ceph isn't fully optimized to get the maximum
On 27-10-15 11:45, Björn Lässig wrote:
> On 10/27/2015 10:22 AM, Wido den Hollander wrote:
>> On 27-10-15 09:51, Björn Lässig wrote:
>>> after having some problems with ipv6 and download.ceph.com, i made a
>>> mirror (debian-hammer only) for my ipv6-only cluster.
/var/lib/ceph/mon and 46 disks with OSD data.
Wido
>
> On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander <w...@42on.com> wrote:
>> Hi,
>>
>> Recently I got my hands on a Ceph cluster which was pretty damaged due
>> to a human error.
>>
>> I had no ce
Hi,
Recently I got my hands on a Ceph cluster which was pretty damaged due
to a human error.
I had no ceph.conf nor did I have any original Operating System data.
With just the MON/OSD data I had to rebuild the cluster by manually
re-writing the ceph.conf and installing Ceph.
The problem was,
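One thing that helps in such a situation: the fsid and monitor addresses can
be recovered from the surviving mon store itself, for example (mon id and
paths are just examples):

ceph-mon -i a --extract-monmap /tmp/monmap   # run against the existing mon data dir
monmaptool --print /tmp/monmap               # prints the fsid and mon addresses for ceph.conf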
Wido
> Please suggest
>
> Thank You in advance.
>
> - Vickey -
On 11/13/2015 09:11 AM, Karan Singh wrote:
>
>
>> On 11 Nov 2015, at 22:49, David Clarke <dav...@catalyst.net.nz> wrote:
>>
>> On 12/11/15 09:37, Gregory Farnum wrote:
>>> On Wednesday, November 11, 2015, Wido den Hollander <w...@42on.com
>>>
On 17-11-15 11:07, Patrik Plank wrote:
> Hi,
>
>
> maybe a trivial question :-||
>
> I have to shut down all my ceph nodes.
>
> What's the best way to do this?
>
> Can I just shut down all nodes, or should I
>
> first shut down the ceph processes?
>
First, set the noout flag in the
On 11/10/2015 09:49 PM, Vickey Singh wrote:
> On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander <w...@42on.com> wrote:
>
>> On 11/09/2015 05:27 PM, Vickey Singh wrote:
>>> Hello Ceph Geeks
>>>
>>> Need your comments with my understanding on str
On 29-10-15 16:38, Voloshanenko Igor wrote:
> Hi Wido and all community.
>
> We caught a very idiotic issue on our CloudStack installation, which is
> related to Ceph and possibly to the java-rados lib.
>
I think you ran into this one:
https://issues.apache.org/jira/browse/CLOUDSTACK-8879
Cleaning
On 03-11-15 01:54, Voloshanenko Igor wrote:
> Thank you, Jason!
>
> Any advice for troubleshooting?
>
> I'm looking in the code, and right now don't see any bad things :(
>
Can you run the CloudStack Agent in DEBUG mode and then see after which
lines in the logs it crashes?
Wido
>
> So, almost always it's an exception after RbdUnprotect, then in approx.
> 20 minutes - a crash.
> Almost all the time it happens after GetVmStatsCommand or disk
> stats... Possible that the evil is hidden in the UpadteDiskInfo method... but
> I can't find any bad
t; "control_pool": ".eu-zone1.rgw.control",
>> "gc_pool": ".eu-zone1.rgw.gc",
>> "log_pool": ".eu-zone1.log",
>> "intent_log_pool": ".eu-zone1.intent-log",
>> "usage_log_pool&q
On 02-11-15 12:30, Loris Cuoghi wrote:
> Hi All,
>
> We're currently on version 0.94.5 with three monitors and 75 OSDs.
>
> I've peeked at the decompiled CRUSH map, and I see that all ids are
> commented with '# Here be dragons!', or more literally : '# do not
> change unnecessarily'.
>
>
On 02-11-15 11:56, Jan Schermer wrote:
> Can those hints be disabled somehow? I was battling XFS preallocation
> the other day, and the mount option didn't make any difference - maybe
> because those hints have precedence (which could mean they aren't
> working as they should), maybe not.
>
ists.ceph.com] On Behalf Of Wido
> den Hollander
> Sent: October-28-15 5:49 AM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph OSDs with bcache experience
>
>
>
> On 21-10-15 15:30, Mark Nelson wrote:
>>
>>
>> On 10/21/2015 01:59 AM, Wido den H
will get back the RBD image by reverting it in
a special way. With a special cephx capability for example.
This goes a bit in the direction of soft pool-removals as well, it might
be combined.
Comments?
Hi,
Not to complain or flame about it, but I see a lot of messages which are
being sent to both ceph-users and ceph-devel.
Imho that defeats the purpose of having a users and a devel list, doesn't it?
The problem is that messages go to both lists and users hit reply-all
again and so it continues.
For
On 14-10-15 16:30, Björn Lässig wrote:
> On 10/13/2015 11:01 PM, Sage Weil wrote:
>> http://download.ceph.com/debian-testing
>
> unfortunately this site is not reachable at the moment.
>
>
> $ wget http://download.ceph.com/debian-testing/dists/wheezy/InRelease -O -
> --2015-10-14
exist.
Any objections against mirroring the pubkey there as well? If not, could
somebody do it?
On 10/14/2015 06:50 PM, Björn Lässig wrote:
> On 10/14/2015 05:11 PM, Wido den Hollander wrote:
>>
>>
>> On 14-10-15 16:30, Björn Lässig wrote:
>>> On 10/13/2015 11:01 PM, Sage Weil wrote:
>>>> http://download.ceph.com/debian-testing
>>>
>>
On 15-10-15 13:56, Luis Periquito wrote:
> I've been trying to find a way to limit the number of request an user
> can make the radosgw per unit of time - first thing developers done
> here is as fast as possible parallel queries to the radosgw, making it
> very slow.
>
> I've looked into
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
> den Hollander
> Sent: Thursday, October 08, 2015 10:06 PM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] How to improve 'rbd ls [pool]' response time
>
> On 10/08/2015 10:46 AM,
Hi,
In the "newstore direction" thread on ceph-devel I wrote that I'm using
bcache in production and Mark Nelson asked me to share some details.
Bcache is running in two clusters now that I manage, but I'll keep this
information to one of them (the one at PCextreme behind CloudStack).
In this
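For those who want to try it, creating a bcache device for an OSD is roughly
(device names are examples only):

make-bcache -C /dev/nvme0n1 -B /dev/sdb   # cache device and backing device in one go
mkfs.xfs /dev/bcache0                     # /dev/bcache0 then becomes the OSD data device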
On 10/20/2015 07:44 PM, Mark Nelson wrote:
> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>> Hi,
>>
>> In the "newstore direction" thread on ceph-devel I wrote that I'm using
>> bcache in production and Mark Nelson asked me to share some details.
y allocate only 1TB of the
SSD and leave 200GB of cells spare so the Wear-Leveling inside the SSD
has some spare cells.
Wido
>
> ---- Original message
> From: Wido den Hollander <w...@42on.com>
> Date: 20/10/2015 16:00 (GMT+01:00)
> To: ceph-users <c
On 08-07-15 12:20, Daniel Schneller wrote:
Hi!
Just a quick question regarding mixed versions. So far a cluster is
running on 0.94.1-1trusty without Rados Gateway. Since the packages have
been updated in the meantime, installing radosgw now would entail
bringing in a few updated dependencies