tombo to...@scs.sk wrote on Tuesday, 9 June 2015 at 21:44:
Hello guys,
Hi tombo,
that seems to be related to http://tracker.ceph.com/issues/4282. We had the
same effect, but limited to 1 hour. After that the authentication works again.
When increasing the log level when the
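In case it helps, the debug level can be raised at runtime; a minimal sketch (the levels are only examples, not a recommendation from the tracker issue):
# raise monitor and auth debug levels at runtime, and set them back to the defaults afterwards
ceph tell mon.* injectargs '--debug_mon 10 --debug_auth 20'
ceph tell mon.* injectargs '--debug_mon 1/5 --debug_auth 1/5'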
Hi,
I'm looking for current packages for SLES11 SP3.
Via the SMT update server it seems that only version 0.80.8 is available. Are there
other package sources available (at least for Giant)?
What I want to do is mount ceph natively via rbd map instead of mounting nfs from
another host
on which
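For illustration, the native way would look roughly like this; a sketch with placeholder pool, image and mount point names, assuming the kernel rbd client is available:
# create and map the image via the kernel rbd client, then use it like a local disk
rbd create rbd/myimage --size 102400
rbd map rbd/myimage
mkfs.xfs /dev/rbd/rbd/myimage    # the device also appears as /dev/rbd0
mkdir -p /mnt/rbd
mount /dev/rbd/rbd/myimage /mnt/rbd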
Andrey Korolyov and...@xdel.ru wrote on Wednesday, 10 June 2015 at 15:29:
On Wed, Jun 10, 2015 at 4:11 PM, Pavel V. Kaygorodov pa...@inasan.ru wrote:
Hi,
for us a restart of the monitor solved this.
Regards
Steffen
Hi!
Immediately after a reboot of the mon.3 host its clock was
Based on the book 'Learning Ceph'
(https://www.packtpub.com/application-development/learning-ceph),
chapter on performance tuning, we swapped the values for osd_recovery_op_priority
and osd_client_op_priority to 60 and 40.
"... osd recovery op priority: This is
the priority set for recovery
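For reference, a change like the one described above can be applied at runtime as well as via ceph.conf; a sketch using the swapped values from above (not a recommendation):
# apply the priorities to all running osds; persist them in the [osd] section of ceph.conf as well
ceph tell osd.* injectargs '--osd_recovery_op_priority 60 --osd_client_op_priority 40'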
>>> Dan van der Ster <d...@vanderster.com> wrote on Wednesday, 23 September 2015
at 14:04:
> On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber
> <weissgerb...@ksnb.de> wrote:
>> "... osd recovery op priority: This is
>> the priority set for
>>> Bill WONG wrote on Thursday, 28 January 2016 at 09:30:
> Hi Marius,
>
Hello,
> with ceph rbd, it looks like it can support qcow2 as well, as per its
> documentation:
> http://docs.ceph.com/docs/master/rbd/qemu-rbd/
> --
> Important The raw data format is really the only sensible format option to use with RBD.
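For what it's worth, converting an existing qcow2 image into a raw rbd image is a one-liner with qemu-img, assuming qemu was built with rbd support (pool and image names are placeholders):
# convert a qcow2 image directly into a raw image stored in rbd
qemu-img convert -f qcow2 -O raw server.qcow2 rbd:rbd/server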
>>> Gregory Farnum wrote on Monday, 8 February 2016 at 19:10:
> On Mon, Feb 8, 2016 at 10:00 AM, Dzianis Kahanovich
> wrote:
>> I want to know about plain (not systemd, no deployment tools, only own
> simple
>> "start-stop-daemon" scripts under
Hi Daniel,
we had the same problem with a SataDom on our ceph nodes. After write errors
the root partition was mounted read-only and the monitor died because logging
was not possible anymore. The osd's, however, kept running.
For minimal downtime of the node I backed up the system disk via ssh to
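In case it helps, such a backup over ssh can be as simple as the following sketch (device and target host are placeholders):
# stream the system disk to a remote host over ssh
dd if=/dev/sda bs=4M | ssh backuphost "dd of=/backup/cephnode-sda.img bs=4M"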
>>> Christian Balzer wrote on Tuesday, 12 April 2016 at 01:39:
> Hello,
>
Hi,
> I'm officially only allowed to do (preventative) maintenance during weekend
> nights on our main production cluster.
> That would mean 13 ruined weekends at the realistic rate of 1 OSD
Hi,
that's how I did it for my osd's 25 to 30 (you can include as many osd
numbers as you like, as long as you have free space).
First you reweight the osd's to 0 to move their copies to other osd's:
for i in {25..30};
do
ceph osd crush reweight osd.$i 0
done
and then you have to wait until it's done (when
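Once the reweight has finished and the data has moved off, the usual removal steps would follow; a sketch of the standard procedure:
for i in {25..30}
do
ceph osd out $i
# stop the ceph-osd daemon for osd.$i on its host before the next commands
ceph osd crush remove osd.$i
ceph auth del osd.$i
ceph osd rm $i
done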
Hello,
I tried to configure ceph logging to a remote syslog host based on
Sébastien Han's blog
(http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/):
ceph.conf
[global]
...
log_file = none
log_to_syslog = true
err_to_syslog = true
[mon]
mon_cluster_log_to_syslog = true
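In addition to the ceph.conf settings, the local rsyslog has to forward the messages to the remote host, e.g. something like the following sketch (the remote hostname is a placeholder):
# forward everything the local syslog receives to the remote syslog host (UDP)
cat > /etc/rsyslog.d/90-ceph-remote.conf <<'EOF'
*.* @syslog.example.org:514
EOF
service rsyslog restart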
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 14 April 2016 at 17:00:
> Hello,
>
> [reduced to ceph-users]
>
> On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote:
>
>>
>>
>> >>> Christian Balzer <ch.
>>> "leon...@gstarcloud.com" schrieb am
Freitag, 15. April
2016 um 11:33:
> Hello Daniel,
>
> I'm a newbie to Ceph, and when I configured the storage cluster on CentOS 7 VMs,
> I encountered the same problem as you posted on
>
>>> Christian Balzer wrote on Tuesday, 12 July 2016 at 08:47:
> Hello,
>
> On Tue, 12 Jul 2016 08:39:16 +0200 (CEST) Wido den Hollander wrote:
>
>> Hi,
>>
>> I am upgrading an 1800 OSD cluster from Hammer 0.94.5 to 0.94.7 prior to
> going to Jewel and while doing so
>>> Gaurav Goyal wrote on Wednesday, 20 July 2016 at 17:41:
> Dear Ceph User,
>
Hi,
> I want to ask a very generic query regarding ceph.
>
> Ceph uses the .raw format, but every single company provides qcow2
> images.
> It takes a lot of time to convert
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 14 July 2016 at 17:06:
> Hello,
>
> On Thu, 14 Jul 2016 13:37:54 +0200 Steffen Weißgerber wrote:
>
>>
>>
>> >>> Christian Balzer <ch...@gol.com> schrieb am Donners
>>> Jake Young wrote on Thursday, 30 June 2016 at 00:28:
> On Wednesday, June 29, 2016, Mike Jacobacci
wrote:
>
Hi,
>> Hi all,
>>
>> Is there anyone using rbd for xenserver vm storage? I have
XenServer 7
>> and the latest Ceph, I am looking for the
>>> Josef Johansson wrote on Thursday, 30 June 2016 at 15:23:
> Hi,
>
Hi,
> You could actually manage every osd and mon and mds through docker swarm;
> since it's all just software it makes sense to deploy it through docker, where you
> add the disk that is needed.
>
>
Hi,
>>> Wido den Hollander wrote on Tuesday, 9 August 2016 at 10:05:
>> On 8 August 2016 at 16:45, Martin Palma wrote:
>>
>>
>> Hi all,
>>
>> we are in the process of expanding our cluster and I would like to
>> know if there are some best
Hello,
after correcting the configuration of different qemu vm's with rbd disks
(we removed the cache=writethrough option to get the default
writeback mode) we see strange behaviour after restarting the vm's.
For most of them the cache mode is now writeback as expected. But some
nevertheless
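For completeness, one rough way to check which cache mode a restarted vm actually got is to look at its libvirt definition or the running qemu command line; a sketch, the vm name is a placeholder:
# no cache= attribute in the output means no explicit setting, i.e. the qemu default (writeback on current versions)
virsh dumpxml myvm | grep 'cache='
ps -ef | grep [q]emu | tr ' ' '\n' | grep 'cache='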
>>> Loris Cuoghi <l...@stella-telecom.fr> wrote on Tuesday, 30 August 2016 at 16:34:
> Hello,
>
Hi Loris,
thank you for your answer.
> On 30/08/2016 at 14:08, Steffen Weißgerber wrote:
>> Hello,
>>
>> after correcting the configuration for
t rbd_cache_writethrough_until_flush force
> writethrough in this case.
That makes sense. The time reference of the bug report matches driver
version 61.63.103.3000
from 03.07.2012, distributed with virtio-win-0.1-30.iso from Fedora.
Thank you.
Regards
>
> - Original Message -
>
Hi,
while using the ceph client on gentoo as well, and because I like building from
source within a RAM-based filesystem, since ceph release 9.x I've been wondering
about the exorbitant space requirements when building the ceph components.
Up to hammer, 3GB were sufficient to complete the compile.
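The RAM-based filesystem itself is nothing special, e.g. a tmpfs over the portage build directory; a sketch, the size is only an example and not a measured requirement:
# mount a tmpfs over the gentoo build directory (size is only an example)
mount -t tmpfs -o size=16G tmpfs /var/tmp/portage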
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07:
Hi,
> Hello,
>
> On Wed, 26 Oct 2016 15:40:00 + Ashley Merrick wrote:
>
>> Hello All,
>>
>> Currently running a CEPH cluster connected to KVM via the KRBD and used only
> for this purpose.
>>
>> Is
>>> Wido den Hollander <w...@42on.com> wrote on Saturday, 22 October 2016 at 08:35:
>> On 21 October 2016 at 21:31, Steffen Weißgerber <weissgerb...@ksnb.de> wrote:
>>
>>
>> Hello,
>>
>> we're running a 6 node ceph cluster with
ays and will be back at the office
tomorrow.
Thank you for your help.
Regards
Steffen
> On Sat, Oct 22, 2016 at 6:57 AM, Ruben Kerkhof
<ru...@rubenkerkhof.com> wrote:
>> On Fri, Oct 21, 2016 at 9:31 PM, Steffen Weißgerber
>> <weissgerb...@ksnb.de> wrote:
>>> He
Hi,
>>> Ruben Kerkhof <ru...@rubenkerkhof.com> wrote on Saturday, 22 October 2016 at 12:57:
> On Fri, Oct 21, 2016 at 9:31 PM, Steffen Weißgerber
> <weissgerb...@ksnb.de> wrote:
>> Hello,
>>
>> we're running a 6 node ceph cluster with 3 mons on
Hello,
we're running a 6 node ceph cluster with 3 mons on Ubuntu (14.04.4).
Sometimes it happens that the mon services die and have to be restarted
manually.
To have reliable service restarts I normally use D.J. Bernstein's daemontools
on other Linux distributions. Until now I never did this on
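For reference, the daemontools run script for a monitor would be as trivial as this sketch (the service directory and the mon id "nodeA" are placeholders):
#!/bin/sh
# e.g. /etc/service/ceph-mon/run; runs the mon in the foreground under supervise
exec 2>&1
exec /usr/bin/ceph-mon -f --cluster ceph --id nodeA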
>>> Robert Sander <r.san...@heinlein-support.de> wrote on Wednesday, 16 November 2016 at 10:23:
> On 16.11.2016 09:05, Steffen Weißgerber wrote:
>> Hello,
>>
Hello,
>> we started upgrading ubuntu on our ceph nodes to Xenial and had to
see that
> d
Hello,
we started upgrading ubuntu on our ceph nodes to Xenial and found that during
the upgrade ceph was automatically upgraded from hammer to jewel as well.
Because we don't want to upgrade ceph and the OS at the same time, we deinstalled
the ceph jewel components and reactivated
Hi,
after doing 'apt-mark hold ceph' the upgrade failed.
It seems to be due to some kind of fetch failure:
...
OK http://archive.ubuntu.com trusty-backports/universe amd64 Packages
Fehl http://ceph.com xenial/main Translation-en
config
as before the upgrade, with iburst entries for the local time server and the
rest of the cluster nodes.
Regards
Steffen
>>> Robert Sander <r.san...@heinlein-support.de> 16.11.2016 10:23 >>>
On 16.11.2016 09:05, Steffen Weißgerber wrote:
> Hello,
>
> we started u
Hi,
looks good.
Because I've made an image of the node's system disk I can revert to
the state before the upgrade and restart the whole process.
Thank you.
Steffen
>>> "钟佳佳" 16.11.2016 09:32 >>>
hi :
you can google apt-mark
apt-mark hold PACKAGENAME
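Holding only the 'ceph' package is probably not enough; the related packages would need a hold as well. A sketch (the exact package set may differ per release):
# put the whole hammer package set on hold, not just the metapackage
apt-mark hold ceph ceph-common ceph-mds librados2 librbd1 python-ceph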
>>> 马忠明 wrote on Sunday, 20 November 2016 at 12:16:
> Hi guys,
> So our cluster always gets osds going down due to medium errors. Our current action
> plan is to replace the defective disk drive. But I was wondering whether it's
> too sensitive for ceph to take it down. Or whether
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 27 October 2016 at 13:55:
Hi Christian,
>
> Hello,
>
> On Thu, 27 Oct 2016 11:30:29 +0200 Steffen Weißgerber wrote:
>
>>
>>
>>
>> >>> Christian Balzer <
Hello,
some time ago I upgraded our 6 node cluster (0.94.9) running on Ubuntu from Trusty
to Xenial.
The problem here was that with the OS update ceph gets upgraded as well, which we
did not want in the same step because then we would have had to upgrade all nodes
at the same time.
Therefore we did it node by
block. That would also have to be released again.
Thank you for your support.
Kind regards
Steffen Weißgerber
IT-Zentrum
--
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Geschaeftsfuehrerin: Gudrun Kappich
Hi,
checking the current value of osd_max_backfills on our cluster (0.94.9)
I also made a config diff of the osd configuration (ceph daemon osd.0
config diff) and wondered why a default of 10 is displayed, which
differs from the documented default at
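For reference, the commands in question, run via the local admin socket of osd.0:
# show a single option and the diff against the compiled-in defaults
ceph daemon osd.0 config get osd_max_backfills
ceph daemon osd.0 config diff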
ets to mount cephfs on boot.
>
> Thanks again for your replies and questions what directed me to a
right way
> and let found a workaround!
>
> Steffen Weißgerber wrote on 01/02/18 13:30:
>> Ok, so it seems that all things necessary for mount are configured
for
>> now bu
Hello,
and what happens when you mount it manually using the fstab entry with 'mount
/mnt/ceph'?
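Typically such an entry looks roughly like the following sketch (monitor addresses, mount options and secret file are placeholders):
# example /etc/fstab line for a kernel cephfs mount:
mon1:6789,mon2:6789,mon3:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
# and the manual test:
mount /mnt/ceph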
Regards
Steffen
>>> wrote on Wednesday, 31 January 2018 at 16:19:
> Hello!
>
> I need to mount cephfs automatically on KVM VM boot.
>
> I tried to follow recommendations
/rc.log?
Regards
Steffen
>>> <kna...@gmail.com> wrote on Thursday, 1 February 2018 at 09:24:
> Hello, Steffen!
>
> Thanks for reply!
>
> Please, see my comments inline.
>
> Steffen Weißgerber wrote on 01/02/18 11:16:
>> Hello,
>>
>> and w
Hello,
>>> Kevin Olbrich wrote on Thursday, 8 February 2018 at 12:54:
> 2018-02-08 11:20 GMT+01:00 Martin Emrich :
>
>> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
>> running linux-generic-hwe-16.04
Hello Kai,
we have been using RBD's as part of pacemaker resource groups for 2 years on Hammer
with no problems.
The resource is always configured in active/passive mode due to the fact that the
filesystem is not cluster aware. Therefore during switchover the RBD's are unmapped
cleanly on the active node