Hello,
That is what I was thinking of doing as plan B; it just requires more overhead in
configuring the VM, whereas a cache tier does not require any per-VM creation
work.
I guess, as Christian said, it depends on the workload as to which is the best option.
,Ashley
-----Original Message-----
From:
> On 27 October 2016 at 11:23, Ralf Zerres wrote:
>
>
> Hello community,
> hello ceph developers,
>
> My name is Ralf and I work as an IT consultant. In this particular case I am
> supporting a German customer running a 2-node Ceph cluster.
>
> This customer is
Hi all,
We have a Ceph cluster used only for RBD. The cluster contains several
groups of machines; each group contains several machines, and each
machine has 12 SSDs, with each SSD as an OSD (journal and data together).
eg:
group1: machine1~machine12
group2: machine13~machine24
..
Each group is separated from the others.
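If the intent is that each group's data stays on that group's machines, one way to model it is a CRUSH bucket per group plus a rule and pool bound to that bucket. A rough sketch, assuming default bucket types; host, rule, pool names and PG counts are placeholders:

  # one bucket per group, hosts moved under it
  ceph osd crush add-bucket group1 rack
  ceph osd crush move machine1 rack=group1
  ...
  ceph osd crush move machine12 rack=group1
  # a rule that only selects OSDs under group1, then a pool bound to it
  ceph osd crush rule create-simple group1-rule group1 host
  ceph osd pool create group1-pool 1024 1024 replicated group1-rule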
On Thu, Oct 27, 2016 at 6:30 PM, Florent B wrote:
> Hi everyone,
>
> I would like to know why the "remount" option is not handled by ceph-fuse?
> Is it a technical restriction, or is it just not implemented yet ?
>
The remount handler of the fuse kernel module does not do anything. If
Hello community,
hello ceph developers,
My name is Ralf and I work as an IT consultant. In this particular case I am supporting a
German customer running a 2-node Ceph cluster.
This customer is struggling with a disastrous situation, where a full pool of
RBD data (about 12 TB of valid production data) is
Hi Cephers,
In my period list I am seeing an orphan period
{
"periods": [
"24dca961-5761-4bd1-972b-685a57e2fcf7:staging",
"a5632c6c4001615e57e587c129c1ad93:staging",
"fac3496d-156f-4c09-9654-179ad44091b9"
]
}
The realm a5632c6c4001615e57e587c129c1ad93 no longer
On Thu, Oct 27, 2016 at 12:30 PM, Richard Chan
wrote:
> Hi Cephers,
>
> In my period list I am seeing an orphan period
>
> {
> "periods": [
> "24dca961-5761-4bd1-972b-685a57e2fcf7:staging",
> "a5632c6c4001615e57e587c129c1ad93:staging",
>
> On 27 October 2016 at 11:46, Ralf Zerres wrote:
>
>
> Here we go ...
>
>
> > Wido den Hollander wrote on 27 October 2016 at 11:35:
> >
> >
> >
> > > On 27 October 2016 at 11:23, Ralf Zerres wrote:
> > >
> >
> Wido den Hollander wrote on 27 October 2016 at 12:37:
>
>
> Bringing back to the list
>
> > On 27 October 2016 at 12:08, Ralf Zerres wrote:
> >
> >
> > > Wido den Hollander wrote on 27 October 2016 at 11:51
> > >
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07:
Hi,
> Hello,
>
> On Wed, 26 Oct 2016 15:40:00 + Ashley Merrick wrote:
>
>> Hello All,
>>
>> Currently running a Ceph cluster connected to KVM via KRBD and used only
>> for this purpose.
>>
>> Is
Bringing back to the list
> On 27 October 2016 at 12:08, Ralf Zerres wrote:
>
>
> > Wido den Hollander wrote on 27 October 2016 at 11:51:
> >
> >
> >
> > > On 27 October 2016 at 11:46, Ralf Zerres wrote:
> > >
>
Hi,
From my little experience (it's really not much), you should not store
the images as qcow2, but I'm not sure whether it depends on your purpose.
I have an OpenStack environment, and when I tried to get it running
with Ceph I uploaded my images as qcow2. But then I had problems
getting VMs
Hi all,
One of our 10.2.3 clusters has a pool with bogus statistics. The pool
is empty, but it shows 8E of data used and 2^63-1 objects.
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
test 19 8E 0 1362T 9223372036854775807
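That object count is 2^63-1, which looks like a per-pool counter that has wrapped below zero rather than real data. A quick cross-check of what the OSDs and the pool itself report (pool name taken from the output above):

  ceph df detail
  rados df
  # should print nothing if the pool really is empty
  rados -p test ls | head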
Are these problems fixed in the latest version of the Debian packages?
I'm a fairly large user with a lot of existing data stored in the .rgw.buckets
pool, and I'm running Hammer.
I just hope that upgrading to Jewel so long after its release will not cause
loss of access to this data for my users,
Hi all,
I am getting very strange results while testing Ceph performance using fio
from a KVM guest VM. My environment is as follows:
- Ceph version 0.94.5
- 4x Storage Nodes, 36 SATA disks each, journals on SSD. 10Gbps links
- KVM as hypervisor (qemu-kvm 2.0.0+dfsg-2ubuntu1.22) on
Looking to add CephFS into our Ceph cluster (10.2.3), and trying to plan for
that addition.
Currently only using RADOS on a single replicated, non-EC, pool, no RBD or RGW,
and segmenting logically in namespaces.
No auth scoping at this time, but likely something we will be moving to in the
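For planning purposes, a minimal sketch of what adding CephFS to an existing Jewel cluster looks like; pool names, PG counts and the client path are placeholders, and the path-restricted MDS cap should be tested on 10.2.x before relying on it:

  ceph osd pool create cephfs_metadata 128
  ceph osd pool create cephfs_data 512
  ceph fs new cephfs cephfs_metadata cephfs_data
  # a client key limited to one directory and the data pool
  ceph auth get-or-create client.app1 \
      mon 'allow r' \
      mds 'allow rw path=/app1' \
      osd 'allow rw pool=cephfs_data'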
Hi Dan...
Have you tried 'rados df' to see if it agrees with 'ceph df' ?
Cheers
G.
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Dan van der
Ster [d...@vanderster.com]
Sent: 28 October 2016 03:01
To: ceph-users
Subject: [ceph-users]
Hello!
I have 5 nodes: 3 with OSDs, 1 of which also contains a mon and an MDS,
and 2 with an MDS and a mon only.
After migrating from Ubuntu 14.04 LTS to 16.04 LTS I rebuilt my cluster
and got an error on mount with the kernel driver: "can't read superblock".
I've tried to mount with FUSE, but that did not solve this
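For what it's worth, a sketch of how one would usually narrow down a kernel-driver "can't read superblock" failure; the monitor address and secret file are placeholders:

  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # the real error usually lands in the kernel log, not in the mount output
  dmesg | tail -n 30
  # confirm the filesystem exists and an MDS is active
  ceph fs ls
  ceph mds stat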
On Wed, Oct 26, 2016 at 11:43:15AM +0200, Trygve Vea wrote:
> Hi!
>
> I'm trying to get s3website working on one of our Rados Gateway
> installations, and I'm having some problems finding out what needs to
> be done for this to work. It looks like this is a halfway secret
> feature, as I can
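For reference, the rgw options that appear to be involved in enabling static website support look roughly like this; the section name and hostnames are placeholders, and the exact option set should be checked against your release:

  [client.rgw.website]
  rgw enable static website = true
  rgw dns name = s3.example.com
  rgw dns s3website name = s3-website.example.com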
So I couldn't actually wait till the morning.
I set rbd cache to false and tried to create the same number of instances,
but the same issue happened again.
I want to note that if I rebooted any of the virtual machines that had
this issue, it worked without any problem afterwards.
Does this mean
We didn't enable RBD caching.
The OS filesystems on which we hit this issue include ext3, ext4, XFS, and NTFS
(Windows).
Keynes Lee李 俊 賢
Direct:
+886-2-6612-1025
Mobile:
+886-9-1882-3787
Fax:
+886-2-6612-1991
E-Mail:
Well, as for me, now I know my issue.
It is indeed an over-utilization issue, but not related to Ceph.
The cluster is connected via 1G interfaces, which basically get saturated
by all the bandwidth generated by these instances trying to read their
root filesystems and mount them, which are stored in
On Thu, Oct 27, 2016 at 6:29 PM, Ahmed Mostafa
wrote:
> I set rbd cache to false and tried to create the same number of instances,
> but the same issue happened again
>
Did you disable it both in your hypervisor node's ceph.conf and also
disable the cache via the QEMU
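For completeness, caching has to be off in both places; a sketch with the ceph.conf client section and the corresponding libvirt disk setting (the libvirt snippet is shown only as an illustrative comment):

  # /etc/ceph/ceph.conf on the hypervisor
  [client]
  rbd cache = false

  # and the QEMU/libvirt drive must not re-enable it, e.g.:
  #   <driver name='qemu' type='raw' cache='none'/>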
I think this issue may not be related to poor hardware.
Our cluster has 3 Ceph monitors and 4 OSD nodes.
Each server has
2 CPUs (Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz) and 32 GB of memory.
The OSD nodes have 2 SSDs for journal disks and 8 SATA disks (6 TB / 7200 rpm).
All of them were connected to each
On Thu, Oct 27, 2016 at 9:59 AM, Erick Perez - Quadrian Enterprises <
epe...@quadrianweb.com> wrote:
>
>
> On Thu, Oct 27, 2016 at 9:19 AM, Richard Chan <
> rich...@treeboxsolutions.com> wrote:
>
>> Dell N40xx is cheap if you buy the RJ-45 versions. It will save a lot on
>> optics.
>> Our NICs
On Oct 27, 2016 3:16 PM, "Oliver Dzombic" wrote:
>
> Hi,
>
> I can recommend
>
> X710-DA2
>
We also use this NIC for everything.
> Our 10G switching goes through our blade infrastructure, so I can't recommend
> something for you there.
>
> I assume that the usual
The only effect I could see out of a highly overloaded system would be that
the OSDs might appear to become unresponsive to the VMs. Are any of you
using cache tiering or librbd cache? For the latter, there was one issue
[1] that can result in read corruption that affects hammer and prior
Hello,
Going through the documentation, I am aware that I should be using raw
images instead of qcow2 when storing my images in Ceph.
I have carried out a small test to see how this behaves.
[root@ ~]# qemu-img create -f qcow2 test.qcow2 100G
Formatting 'test.qcow2', fmt=qcow2
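A hedged sketch of the usual next step, converting that qcow2 test image to raw and putting it into an RBD pool (pool and image names are placeholders):

  qemu-img convert -f qcow2 -O raw test.qcow2 test.raw
  rbd import --image-format 2 test.raw rbd/test
  # or, if qemu-img was built with rbd support, write straight into the pool
  qemu-img convert -f qcow2 -O raw test.qcow2 rbd:rbd/test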
Dell N40xx is cheap if you buy the RJ-45 versions. It will save a lot on
optics.
Our NICs are all RJ-45. We use CAT7 and the switch hasn't given any
problems.
We have one with an SFP+ uplink, and used a Dell-branded DAC cable that
has also worked flawlessly.
(Not to be confused with Dell
Hi,
I can recommend the
X710-DA2.
Our 10G switching goes through our blade infrastructure, so I can't recommend
something for you there.
I assume that the usual Juniper/Cisco will do a good job. I think, for
Ceph, the switch is not the major point of a setup.
--
Mit freundlichen Gruessen / Best regards
How do you clean up the :staging object? The orphan UUID
belongs to a realm that was accidentally created here. It was later moved
to another rgw realm root pool. This remnant has no periods, only the
staging object.
radosgw-admin --cluster flash period list
{
"periods": [
Arista has been good to us. Netgear or eBay if you have to scrape bottom.
Rob
Sent from my iPhone
> On Oct 27, 2016, at 9:16 AM, Oliver Dzombic wrote:
>
> Hi,
>
> I can recommend
>
> X710-DA2
>
> Our 10G switching goes through our blade infrastructure, so I can't
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 13:55:
Hi Christian,
>
> Hello,
>
> On Thu, 27 Oct 2016 11:30:29 +0200 Steffen Weißgerber wrote:
>
>>
>>
>>
>> >>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07:
>>
>>
Hello everybody,
I want to upgrade my small Ceph cluster to 10Gbit networking and would
like some recommendations from your experience.
What is your recommended budget 10Gbit switch suitable for Ceph?
I would like to use X550-T1 Intel adapters in my nodes.
Or is fibre recommended?
X520-DA2
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 27 October 2016 14:16
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade
>
> Hi,
>
> I can recommend
>
>
Hi Jelle,
On 10/27/2016 03:04 PM, Jelle de Jong wrote:
> Hello everybody,
> I want to upgrade my small Ceph cluster to 10Gbit networking and would
> like some recommendations from your experience.
> What is your recommended budget 10Gbit switch suitable for Ceph?
We are running a 3-node cluster, with
What is the issue exactly?
On Fri, Oct 28, 2016 at 2:47 AM, wrote:
> I think this issue may not be related to poor hardware.
>
>
>
> Our cluster has 3 Ceph monitors and 4 OSD nodes.
>
>
>
> Each server has
>
> 2 CPUs (Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz) and 32 GB
Hello,
On Fri, 28 Oct 2016 12:42:40 +0800 (CST) james wrote:
> Hi,
>
> Is there anyone in the community who has experience of using "bcache" as a
> backend for Ceph?
If you search the ML archives (via Google) you should find some past
experiences, the more recent ones not showing particularly great
Hi,
Is there anyone in the community who has experience of using "bcache" as a backend for
Ceph?
Nowadays, most Ceph solutions are probably based on either full-SSD or full-HDD backend
data disks. So in order
to balance cost and performance/capacity, we are trying a hybrid solution
with "bcache". It
Hi Cephers,
I was wondering whether you could share:
1. Your use cases for multiple realms
2. Whether your realms use the same rgw_realm_root_pool or a separate
rgw_realm_root_pool per realm.
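For what it's worth, keeping realms apart by root pool is mostly a config matter; a rough sketch with placeholder realm, section and pool names (the default for rgw_realm_root_pool is .rgw.root):

  [client.rgw.gold]
  rgw realm root pool = .rgw.root.gold
  [client.rgw.silver]
  rgw realm root pool = .rgw.root.silver

  # realms then created with the matching client name so the right root pool is used
  radosgw-admin -n client.rgw.gold realm create --rgw-realm=gold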
--
Richard Chan
- On 27 Oct 2016 at 23:13, Robin H. Johnson robb...@gentoo.org wrote:
> On Wed, Oct 26, 2016 at 11:43:15AM +0200, Trygve Vea wrote:
>> Hi!
>>
>> I'm trying to get s3website working on one of our Rados Gateway
>> installations, and I'm having some problems finding out what needs to
>> be done for