On 11 Oct 2016 3:05 AM, "Christian Balzer" wrote:
> 10Gb/s MC-LAG (white box) switches are also widely available and
> affordable.
>
Which models are you referring to?
I've never found any 10Gb switches for less than many thousands of euros.
The cheapest ones I've found are the Cisco small bu
Hello,
On Tue, 11 Oct 2016 01:07:16 + Cory Hawkless wrote:
> Thanks Jason, works perfectly.
>
> Do you know if ceph blocks the client IO until the journal has acknowledged
> its write? I.e., can I store my journal on slower disks, or will that have a
> negative impact on performance?
>
Know
Thanks Jason, works perfectly.
Do you know if ceph blocks the client IO until the journal has acknowledged
its write? I.e., can I store my journal on slower disks, or will that have a
negative impact on performance?
Is there perhaps a hole in the documentation here? I've not been able to find
an
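For context on the question above: with filestore, Ceph acknowledges a write to the client only once it has been committed to the OSD journal on all replicas, so the journal device's speed directly bounds client write latency. A minimal ceph.conf sketch for placing a journal on a dedicated faster device; the device path and size below are hypothetical, not from the thread:

```ini
; Hypothetical sketch only: journal placement is set per OSD in ceph.conf.
[osd]
osd journal size = 10240        ; journal size in MB

[osd.0]
; Point the journal at a fast dedicated partition; a slow journal
; device delays the write acknowledgement back to the client.
osd journal = /dev/nvme0n1p1
```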
Hello,
On Mon, 10 Oct 2016 14:56:40 +0200 Matteo Dacrema wrote:
> Hi,
>
> I’m planning a similar cluster.
> Because it’s a new project I’ll start with only a 2-node cluster, each with:
>
As Wido said, that's a very dense and risky proposition for a first time
cluster.
Never mind the lack of 3rd
Hi Mike...
I was hoping that someone with a bit more experience would answer you, since I
never had a similar situation. So I'll try to step in and help.
The peering process means that the OSDs are agreeing on the state of objects in
the PGs they share. The peering process can take some time and
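To watch peering progress, a few standard commands help; this is a generic sketch (the PG id below is a made-up example, not one from Mike's cluster):

```shell
# Generic sketch for inspecting peering (the PG id is an example)
ceph health detail            # lists PGs stuck peering or inactive
ceph pg dump_stuck inactive   # PGs that have not reached active+clean
ceph pg 0.1f query            # detailed peering state of a single PG
```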
Yes, the "journal_data" objects can be stored in a separate pool from
the image. The rbd CLI allows you to use the "--journal-pool" argument
when creating, copying, cloning, or importing an image with
journaling enabled. You can also specify the journal data pool when
dynamically enabling the jour
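A sketch of what that looks like on the command line; the pool and image names are invented for illustration:

```shell
# Create a pool to hold the journal objects (example names throughout)
ceph osd pool create rbd-journals 64
# Create an image whose journal data lands in that separate pool
rbd create --size 10240 --image-feature exclusive-lock,journaling \
    --journal-pool rbd-journals rbd/myimage
# Or enable journaling dynamically on an existing image
rbd feature enable rbd/otherimage journaling --journal-pool rbd-journals
```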
The default it uses can be controlled in your ceph.conf file. The ceph-deploy
tool is a generic ceph deployment tool which does not have presets for rados
gateway deployments or other specific deployments. When creating pools you can
specify the number of PGs in them with the tool so that it d
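For reference, the common rule of thumb for choosing pg_num (target on the order of 100 PGs per OSD, divided by the replica count, rounded up to a power of two) can be sketched as follows; the OSD and replica numbers are hypothetical:

```shell
# Rule-of-thumb pg_num for a single dominant pool:
# (OSDs * ~100) / replica_size, rounded up to a power of two.
osds=12; replicas=3; target=100          # hypothetical cluster
raw=$(( osds * target / replicas ))      # 400
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"                               # prints 512
```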
Hi list, the status of calamari and friends post 1.3 seems a bit confusing
to me.
What are you folks using for monitoring in the Jewel era?
(Could someone explain the big picture of the state of 1.4)
Here's what I have gathered:
1. romana (was calamari-clients) seems dead: no commits in a year?
On Mon, Oct 10, 2016 at 9:06 AM, Davie De Smet
wrote:
> Hi,
>
>
>
> I don’t want to hijack this topic but the behavior described below is the
> same as what I am seeing:
>
>
>
>
>
> [root@osd5-freu ~]# ceph daemonperf /var/run/ceph/ceph-mds.osd5-freu.asok
> |
>
> -mds-- --mds_server-- ---o
David,
Thanks for the info. I am getting an understanding of how this works.
Now I used the ceph-deploy tool to create the rgw pools. It seems then that the
tool isn’t the best at creating the pools necessary for an rgw gateway as it
made all of them the default sizes for pg_num/pgp_num
Perhaps,
You have 11 pools with 256 pgs, 1 pool with 128 and 1 pool with 64... that's
3,008 pgs in your entire cluster. Multiply that number by your replica size
and divide by how many OSDs you have in your cluster and you'll see what your
average PGs per osd is. Based on the replica size you shared, t
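The arithmetic described above can be checked directly; the total matches the 3,008 figure in the message, while the replica size and OSD count below are placeholders, since the thread is truncated before stating them:

```shell
# 11 pools at 256 PGs, one at 128, one at 64 (from the message)
total=$(( 11 * 256 + 128 + 64 ))
echo "$total"                        # prints 3008
# Placeholder values: replica size 3, 24 OSDs
replicas=3; osds=24
avg=$(( total * replicas / osds ))
echo "$avg"                          # prints 376 PGs per OSD on average
```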
Hello
We had an issue with OSDs and RAID Cards. I have recovered all but 1 pg
by allowing ceph to recover on its own.
But I have 1 PG which wasn't replicated, so I exported it before I took
the OSD totally out.
I have tried to import it using the objectstore tool, but no matter
where I imp
Ok, this is an odd one to me...
I have several pools, ALL of them are set with pg_num and pgp_num = 256. Yet,
the warning about too many PGs per OSD is showing up.
Here are my pools:
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 256 pgp_num 256 last_change
> On 10 October 2016 at 14:56, Matteo Dacrema wrote:
>
>
> Hi,
>
> I’m planning a similar cluster.
> Because it’s a new project I’ll start with only a 2-node cluster, each with:
>
2 nodes in a Ceph cluster is way too small in my opinion.
I suggest that you take a lot more, smaller nodes with l
Hi,
I don’t want to hijack this topic but the behavior described below is the same
as what I am seeing:
[root@osd5-freu ~]# ceph daemonperf /var/run/ceph/ceph-mds.osd5-freu.asok
|
-mds---
Hi Orit,
That could well be related - as mentioned, we do have a hammer radosgw
still running, and I have also run radosgw-admin on that system while
trying to understand what changed between the two releases!
So reading that bug report, it sounds like having the hammer radosgw
itself runnin
Hi Graham,
Is there a chance you have old radosgw-admin (hammer) running?
You may have encountered http://tracker.ceph.com/issues/17371
If a hammer radosgw-admin runs against the jewel radosgw, it corrupts the
configuration.
We are working on a fix for that.
Orit
On Fri, Oct 7, 2016 at 9:37 PM, Graham All
Hi,
While using the Ceph client on Gentoo, and because I'm a fan of building from
source within a RAM-based filesystem, since Ceph release 9.x I've been wondering
about the exorbitant space requirements when building the Ceph components.
Until Hammer, 3GB was sufficient to complete the compile.
>F
On Fri, Oct 7, 2016 at 9:37 PM, Graham Allan wrote:
> Dear Orit,
>
> On 10/07/2016 04:21 AM, Orit Wasserman wrote:
>>
>> Hi,
>>
>> On Wed, Oct 5, 2016 at 11:23 PM, Andrei Mikhailovsky
>> wrote:
>>>
>>> Hello everyone,
>>>
>>> I've just updated my ceph to version 10.2.3 from 10.2.2 and I am no
>>>
I was upgrading a really old cluster from Infernalis (9.2.1) to Jewel
(10.2.3) and got some weird, but interesting issues. This cluster
started its life with Bobtail -> Dumpling -> Emperor -> Firefly ->
Giant -> Hammer -> Infernalis and now Jewel.
When I upgraded the first MON (out of 3) everythin
Hi,
I’m planning a similar cluster.
Because it’s a new project I’ll start with only a 2-node cluster, each with:
2x E5-2640v4 with 40 threads total @ 3.40Ghz with turbo
24x 1.92 TB Samsung SM863
128GB RAM
3x LSI 3008 in IT mode / HBA for OSDs - one per 8 OSDs/SSDs
2x SSD for OS
2x 40Gbit/s NIC
What
Hi,
In the end this was tracked back to a switch MTU problem; once that was
fixed, any version of ceph-deploy osd prepare/create worked as expected.
Thanks
On Mon, Oct 10, 2016 at 11:02 AM, Eugen Block wrote:
> Did the prepare command succeed? I don't see any output referring to
> 'ceph-deploy
On 10/10/2016 12:22 PM, Ilya Dryomov wrote:
> On Fri, Oct 7, 2016 at 1:40 PM, Nikolay Borisov wrote:
>> Hello,
>>
>> I've encountered yet another cephfs crash:
>>
>> [990188.822271] BUG: unable to handle kernel NULL pointer dereference at
>> 001c
>> [990188.822790] IP: [] __free_pag
Did the prepare command succeed? I don't see any output referring to
'ceph-deploy osd prepare'. If this command also fails maybe there's a
hint, and the activate command is only a consequence of that failure?
Zitat von Alan Johnson :
I did have some similar issues and resolved it by install
Hi, with the move to calamari-server 1.4.8 some questions:
1. Are we still using the webapp calamari-clients/romana 1.3?
Does the version number skew matter?
2. Previously there were carbon-cache.py and cthulhu-manager in supervisor.
Now there is calamari-lite. Are the previous two superseded by
Following your advice, I installed ganesha on node1, but there is the same problem:
librgw init failed (-5)
node1:
ps -ef | grep ceph
ceph 1099 1 0 17:34 ?00:00:00 /usr/bin/radosgw -f
--cluster ceph --name client.rgw.node1 --setuser ceph --setgroup ceph
[cep@node1 ~]$ sudo rados -
On Mon, Oct 10, 2016 at 9:05 AM, Hauke Homburg wrote:
> Am 07.10.2016 um 17:37 schrieb Gregory Farnum:
>> On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg
>> wrote:
>>> Hello,
>>>
>>> I have a Ceph Cluster with 5 servers and 40 OSDs. Currently there is 85GB
>>> of free space on this cluster, and the rsync dir
On Fri, Oct 7, 2016 at 1:40 PM, Nikolay Borisov wrote:
> Hello,
>
> I've encountered yet another cephfs crash:
>
> [990188.822271] BUG: unable to handle kernel NULL pointer dereference at
> 001c
> [990188.822790] IP: [] __free_pages+0x5/0x30
> [990188.823090] PGD 180dd8f067 PUD 1bf272
Can you provide a 'ceph health detail'?
On 9 Oct 2016 3:56 p.m., "Mike Jacobacci" wrote:
Hi,
Yesterday morning I added two more OSD nodes and changed the crushmap from
disk to node. It looked to me like everything went ok besides some disks
missing that I can re-add later, but the cluster status
Am 07.10.2016 um 17:37 schrieb Gregory Farnum:
> On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg wrote:
>> Hello,
>>
>> I have a Ceph Cluster with 5 servers and 40 OSDs. Currently there is 85GB
>> of free space on this cluster, and the rsync dir has lots of pictures and a
>> data volume of 40GB.
>>
>> The Linux