On Sun, Dec 14, 2014 at 10:38 AM, Kevin Shiah agan...@gmail.com wrote:
Hello All,
Does anyone know how to configure data striping when using Ceph as a file
system? My understanding is that configuring striping with rbd only applies to
block devices.
You should be able to set layout.* xattrs on
On 15/12/14 20:54, Vivek Varghese Cherian wrote:
Hi,
Do I need to overwrite the existing .db files and .txt file in
/var/lib/nssdb on the radosgw host with the ones copied from
/var/ceph/nss on the Juno node ?
Yeah - worth a try (we want to rule out any
Hi Christian,
We’re using Proxmox that has support for HA, they do it per-vm.
We’re doing it manually right now though, because we like it :).
When I looked at it I couldn’t see a way of just allowing a set of hosts in the
HA (i.e. not the storage nodes), but that’s probably easy to solve.
Hello,
There have been many, many threads about this.
Google is your friend, so is keeping an eye on threads in this ML.
On Mon, 15 Dec 2014 05:44:24 +0100 ceph@panther-it.nl wrote:
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Client1
All Cisco UCS
Just to update this issue.
I stopped OSD.6, removed the PG from disk, and restarted it. Ceph rebuilt
the object and it went to HEALTH_OK.
During the weekend the disk for OSD.6 started giving smart errors and will
be replaced.
Thanks for your help Greg. I've opened a bug report in the tracker.
Yes, setfattr is the preferred way. The docs are here:
http://ceph.com/docs/master/cephfs/file-layouts/
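For example, something along these lines (just a sketch - the attributes come
from that doc, the file name is a placeholder, and layouts can only be set
while the file is still empty):

setfattr -n ceph.file.layout.stripe_unit -v 1048576 somefile
setfattr -n ceph.file.layout.stripe_count -v 4 somefile
# directories use ceph.dir.layout.* and apply to files created under them
getfattr -n ceph.file.layout somefile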
Cheers,
John
On Mon, Dec 15, 2014 at 8:12 AM, Ilya Dryomov ilya.dryo...@inktank.com wrote:
On Sun, Dec 14, 2014 at 10:38 AM, Kevin Shiah agan...@gmail.com wrote:
Hello All,
Does anyone
If you're running Ceph 0.88 or newer, only the rbd pool is created by
default now. Greg Farnum mentioned that the docs are out of date there.
On Sat, Dec 13, 2014 at 8:25 PM, wang lin linw...@hotmail.com wrote:
Hi All
I set up my first ceph cluster according to instructions in
Apologies for re-asking this question since I found several hits on this
question but not very clear answers.
I am in a situation where s3cmd ls seems to work
but s3cmd mb s3://bucket1 does not
1. The rgw dns name = servername in the apache rados.vhost.conf file, and
on the client running the
Hi,
Do I need to overwrite the existing .db files and .txt file in
/var/lib/nssdb on the radosgw host with the ones copied from
/var/ceph/nss on the Juno node ?
Yeah - worth a try (we want to rule out any certificate mis-match errors).
Cheers
Mark
I have manually copied the keys
Have you created the * DNS record?
bucket1.<rgw dns name> needs to resolve to that IP address (that's what
you're saying with the host_bucket directive).
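i.e. something like this (assuming, just as an example, rgw dns name =
cephadmin.com and the gateway at 100.100.0.20 - substitute your own):

; wildcard record so every bucket subdomain resolves to the gateway
*.cephadmin.com.   IN   A   100.100.0.20

host bucket1.cephadmin.com   # quick sanity check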
On Mon, Dec 15, 2014 at 5:52 AM, Ruchika Kharwar saltrib...@gmail.com
wrote:
Apologies for re-asking this question since I found several hits on
On 15/12/14 17:44, ceph@panther-it.nl wrote:
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Having 1 node different from the rest is not going to help... you will
probably get better results if you sprinkle the SSDs through all 3 nodes
and use SATA for osd
Hi,
I am running a 3-node Deis cluster with ceph as the underlying FS. So it is
ceph running inside Docker containers on three separate servers.
I rebooted all three nodes (almost at once). After the reboot, the ceph
monitors refuse to connect to each other.
Symptoms are:
- no quorum
Hi Mark
Thank you!
I created a data pool as you said, and it works.
By the way, just adding a metadata server with the command ceph-deploy mds
create node1 still doesn't create the metadata or data pool, right?
Thanks
Lin, Wang
Date: Sun, 14 Dec 2014 18:52:20 +1300
From:
Hey there,
I've set up a small VirtualBox cluster of Ceph VMs. I have one
ceph-admin0 node, and three ceph0,ceph1,ceph2 nodes for a total of 4.
I've been following this guide:
http://ceph.com/docs/master/start/quick-ceph-deploy/ to the letter.
At the end of the guide, it calls for you to run
Hi all!
We have an annoying problem - when we launch intensive reading with rbd, the
client to which the image is mounted hangs in this state:
Device:         rrqm/s  wrqm/s    r/s    w/s    rMB/s    wMB/s  avgrq-sz
avgqu-sz  await  r_await  w_await  svctm  %util
sda               0.00    0.00
Hi,
Now I am using cephfs with mds. I mounted cephfs through ceph-fuse. It
worked well until yesterday, when I added some new osds and hosts to the
cluster. After that I can't use cephfs any more.
It shows this when I check it with "ceph -s":
cluster
Or Nagios
Thanks,
Denish Patel
On Dec 12, 2014, at 5:38 AM, Thomas Foster thomas.foste...@gmail.com wrote:
You can also try Sensu..
On Dec 12, 2014 1:05 AM, pragya jain prag_2...@yahoo.co.in wrote:
hello sir!
According to TomiTakussaari/riak_zabbix
Currently supported Zabbix keys:
Try lowering filestore max sync interval and filestore min sync
interval. It looks like during the hanged period data is flushed from
some overly big buffer.
If this does not help you can monitor perf stats on OSDs to see if some
queue is unusually large.
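Roughly what I mean, as a sketch (the values and the OSD id are only examples):

[osd]
filestore max sync interval = 1
filestore min sync interval = 0.01

# then watch the journal/filestore queues through the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump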
--
Tomasz Kuzemko
Hello -
Can anyone help me locate the Debian-type source packages for radosgw-agent?
Thanks,Lakshmi.
On Monday, December 8, 2014 6:10 AM, lakshmi k s lux...@yahoo.com wrote:
Hello Sage -
Just wondering if you are the module owner for radosgw-agent? If so, can you
please help me to
On Thu, Dec 11, 2014 at 7:57 PM, reistlin87 79026480...@yandex.ru wrote:
Hi all!
We have an annoying problem - when we launch intensive reading with rbd, the
client to which the image is mounted hangs in this state:
Device:         rrqm/s  wrqm/s    r/s    w/s    rMB/s    wMB/s  avgrq-sz
On Mon, Dec 15, 2014 at 4:11 PM, Tomasz Kuzemko tomasz.kuze...@ovh.net wrote:
Try lowering filestore max sync interval and filestore min sync
interval. It looks like during the hanged period data is flushed from
some overly big buffer.
If this does not help you can monitor perf stats on OSDs
Ilya Dryomov ilya.dryo...@inktank.com wrote on 12 December 2014 at 18:00:
Just a note, discard support went into 3.18, which was released a few
days ago.
I recently compiled 3.18 on Debian 7 and what can I say... It works
perfectly well. The used memory goes up and down
At the moment I am a bit confused about how to configure my journals and where.
I will start my first Ceph experience with a small home cluster made of two
nodes. Both nodes will get around three to five hard disks and one SSD each. The
hard disks are XFS formatted and each one represents an OSD. The
Hi Guys,
in
https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/tests/unit/hosts/test_centos.py
RHEL6.6, which was released on Oct 14th this year is missing:
params = {
'test_repository_url_part': [
dict(distro='CentOS Linux', release='4.3', codename='Foo', output='el6'),
dict(distro='CentOS
Hi Guys,
I am trying to install giant with puppet-cephdeploy but it fails at
ceph-deploy gatherkeys NODEs. There are no keys generated.
This is my output of cephdeploy:
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph-203-1-public for
We tried the default configuration without additional parameters, but it still hangs.
How can we see an OSD queue?
15.12.2014, 16:11, Tomasz Kuzemko tomasz.kuze...@ovh.net:
Try lowering filestore max sync interval and filestore min sync
interval. It looks like during the hanged period data is
No, there is nothing in dmesg about hangs.
Here are the versions of software:
root@ceph-esx-conv03-001:~# uname -a
Linux ceph-esx-conv03-001 3.17.0-ceph #1 SMP Sun Oct 5 19:47:51 UTC 2014 x86_64
x86_64 x86_64 GNU/Linux
root@ceph-esx-conv03-001:~# ceph --version
ceph version 0.87
There's the 'radosgw-agent' package for debian, e.g., here:
http://ceph.com/debian-giant/pool/main/r/radosgw-agent/radosgw-agent_1.2-1~bpo70+1_all.deb
On Mon, Dec 15, 2014 at 5:12 AM, lakshmi k s lux...@yahoo.com wrote:
Hello -
Can anyone help me locate the Debian-type source packages for
On Mon, Dec 15, 2014 at 7:05 PM, reistlin87 79026480...@yandex.ru wrote:
No, there is nothing in dmesg about hangs.
Not necessarily about hangs. socket closed messages? Can you
pastebin the entire kernel log for me?
Here are the versions of software:
root@ceph-esx-conv03-001:~# uname -a
Linux
Hello,
I've been working to upgrade the hardware on a semi-production ceph cluster,
following the instructions for OSD removal from
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual.
Basically, I've added the new hosts to the cluster and now I'm removing the
Thanks Yehuda. But the link seems to be pointing to Debian binaries. Can you
please point me to source packages?
Regards,Lakshmi.
On Monday, December 15, 2014 8:16 AM, Yehuda Sadeh yeh...@redhat.com
wrote:
There's the 'radosgw-agent' package for debian, e.g., here:
Hi Benjamin,
On 15.12.2014 03:31, Benjamin wrote:
Hey there,
I've set up a small VirtualBox cluster of Ceph VMs. I have one
ceph-admin0 node, and three ceph0,ceph1,ceph2 nodes for a total of 4.
I've been following this
guide: http://ceph.com/docs/master/start/quick-ceph-deploy/ to the
I'm going through something similar, and it seems like the double backfill
you're experiencing is about par for the course. According to the CERN
presentation (http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern slide
19), doing a 'ceph osd crush rm osd.<ID>' should save the double
Thanks - I suspected as much. I was thinking of a course of action that would
allow setting the weight of an entire host to zero in the crush map - thus
forcing the migration of the data out of the OSDs of that host, followed by the
crush and osd removal, one by one (hopefully this time without
Hi,
I'm buying several servers to test Ceph and I would like to configure the journal
on SSD drives (maybe it's not necessary for all use cases).
Could you help me work out the number of SSDs I need (SSDs are very expensive and
the per-GB price is a business-case killer… )? I don't want to experience SSD
Hi Florent,
Journals don't need to be very big; 5-10GB per OSD would normally be ample. The
key is that you get an SSD with high write endurance, which makes the Intel S3700
drives perfect for journal use.
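As a rough sketch of what that looks like when preparing OSDs with their
journals on a shared SSD (host and device names are placeholders):

# one ~10GB journal partition per OSD on the SSD, e.g. /dev/sdf1, /dev/sdf2, ...
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdf1
ceph-deploy osd prepare node1:/dev/sdc:/dev/sdf2

# or set the default journal size in ceph.conf
[osd]
osd journal size = 10240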
In terms of how many OSDs you can run per SSD, it depends purely on how important
Salut,
The general recommended ratio (for me at least) is 3 journals per SSD. Using
200GB Intel DC S3700 is great.
If you’re going with a low perf scenario I don’t think you should bother buying
SSD, just remove them from the picture and do 12 SATA 7.2K 4TB.
For medium and medium ++ perf using
J-P Methot jpmet...@gtcomm.net wrote on 15 December 2014 at 16:05:
I must admit, I have a bit of difficulty understanding your diagram.
I was under the impression that a cache tier also has a journal, but it does not. Sounds
less complex now.
But the XFS journals on the devices (as they are
Thanks all
I will probably have 2x10Gb: 1x10Gb for clients and 1x10Gb for the cluster, but I
will take your recommendation into account, Sebastien.
The 200GB SSD will probably give me around 500MB/s sequential bandwidth, so
with only 2 SSDs I could saturate a 10Gb network.
I will keep an eye on OSD density.
Hi all!
I have a single CEPH node which has two network interfaces.
One is configured to be accessed directly from the internet (153.*) and
the other one is configured on an internal LAN (192.*)
For the moment radosgw is listening on the external (internet)
interface.
Can I configure
Thx a lot Yehuda!
This one with tilde seems to be working!
Fingers crossed that it will continue in the future :-)
Warmest regards,
George
In any case, I pushed earlier today another fix to the same branch
that replaces the slash with a tilde. Let me know if that one works
for you.
I found something interesting.
On the S3 client in the .s3cfg I made these changes:
host_base = 100.100.0.20 (i.e. the IP address of the radosgw server)
host_base = cephadmin.com
In the /etc/dnsmasq.conf on the same client I added these lines
address=/cephadmin.com/100.100.0.20
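For completeness, these are the pieces I believe have to line up with that
(the names are just taken from the example above):

# .s3cfg on the client
host_base = cephadmin.com
host_bucket = %(bucket)s.cephadmin.com

# ceph.conf on the gateway, in the radosgw client section
rgw dns name = cephadmin.com

# dnsmasq's address=/cephadmin.com/... also answers bucket subdomains:
host bucket1.cephadmin.com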
On 12/13/2014 09:39 AM, Jake Young wrote:
On Friday, December 12, 2014, Mike Christie mchri...@redhat.com
mailto:mchri...@redhat.com wrote:
On 12/11/2014 11:39 AM, ano nym wrote:
there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a
msa70 which gives me
Test Msg, at request of list owner
--
Lindsay
Last one, sorry
Hi,
I installed ceph on 3 nodes, having one monitor, and one OSD running on
each node. After rebooting them all at once (I see this may be a bad
move now), the ceph monitors refuse to connect to each other.
When I run:
ceph mon getmap -o /etc/ceph/monmap
or even
ceph -s
It only shows the
That shouldn't be a problem. Just have Apache bind to all interfaces
instead of the external IP.
In my case, I only have Apache bound to the internal interface. My load
balancer has an external and internal IP, and I'm able to talk to it on
both interfaces.
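A minimal sketch of the bind-to-all-interfaces variant for the rgw vhost (the
ServerName is just a placeholder; the rest of the FastCGI config stays as it is):

# /etc/apache2/ports.conf
Listen 80

# rgw virtual host
<VirtualHost *:80>
    ServerName gateway.example.com
    ServerAlias *.gateway.example.com
</VirtualHost>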
On Mon, Dec 15, 2014 at 2:00 PM,
I was going with a low perf scenario, and I still ended up adding SSDs.
Everything was fine in my 3 node cluster, until I wanted to add more nodes.
Admittedly, I was a bit aggressive with the expansion. I added a whole
node at once, rather than one or two disks at a time. Still, I wasn't
On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:
The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
disk. They have between 10% and 30% disk utilization, but what they all have in
common is that they *have free disk space*, meaning I have no idea what
the heck
Aha, excellent suggestion! I'll try that as soon as I get back, thank you.
- B
On Dec 15, 2014 5:06 PM, Craig Lewis cle...@centraldesktop.com wrote:
On Sun, Dec 14, 2014 at 6:31 PM, Benjamin zor...@gmail.com wrote:
The machines each have Ubuntu 14.04 64-bit, with 1GB of RAM and 8GB of
disk.
--
Lindsay
I'm finding snapshot restores to be very slow. With a small vm, I can
take a snapshot within seconds, but restores can take over 15
minutes, sometimes nearly an hour, depending on how I have tweaked
ceph.
The same vm as a QCOW2 image on NFS or native disk can be restored in
under 30 seconds.
Is
Last one, sorry
--
Lindsay Mathieson | Senior Developer
Softlog Australia
43 Kedron Park Road, Wooloowin, QLD, 4030
[T] +61 7 3632 8804 | [F] +61 1800-818-914| [W] softlog.com.au
Hello,
On Mon, 15 Dec 2014 09:23:23 +0100 Josef Johansson wrote:
Hi Christian,
We’re using Proxmox that has support for HA, they do it per-vm.
We’re doing it manually right now though, because we like it :).
When I looked at it I couldn’t see a way of just allowing a set of hosts
in
Hello,
your subject is misleading, as this is not really related to Deis/Docker.
Find the very recent "Is mon initial members used after the first quorum?"
thread in this ML.
In short, list all your 3 mons in the initial members section.
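i.e. something like this in ceph.conf on all three nodes (host names and IPs
below are only an example):

[global]
mon initial members = node1, node2, node3
mon host = 192.168.1.11, 192.168.1.12, 192.168.1.13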
And yes, rebooting things all at the same time can be
Hello,
On Mon, 15 Dec 2014 22:43:14 +0100 Florent MONTHEL wrote:
Thanks all
I will probably have 2x10Gb: 1x10Gb for clients and 1x10Gb for the cluster,
but I will take your recommendation into account, Sebastien.
The 200GB SSD will probably give me around 500MB/s sequential bandwidth.
Intel DC S3700
I increased the OSDs to 10.5GB each and now I have a different issue...
cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
cephy@ceph-admin0:~/ceph-cluster$ rados put test-object-1 testfile.txt
--pool=data
error opening pool data: (2) No such file or directory
Hi,
On 16 Dec 2014, at 05:00, Christian Balzer ch...@gol.com wrote:
Hello,
On Mon, 15 Dec 2014 09:23:23 +0100 Josef Johansson wrote:
Hi Christian,
We’re using Proxmox that has support for HA, they do it per-vm.
We’re doing it manually right now though, because we like it :).
Hi,
see here:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg15546.html
Udo
On 16.12.2014 05:39, Benjamin wrote:
I increased the OSDs to 10.5GB each and now I have a different issue...
cephy@ceph-admin0:~/ceph-cluster$ echo {Test-data} > testfile.txt
Hi Udo,
Thanks! Creating the MDS did not add a data and metadata pool for me but I
was able to simply create them myself.
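For the record, something along these lines is enough (the pg counts are only
a guess for a tiny test cluster):

ceph osd pool create data 64 64
ceph osd pool create metadata 64 64
rados put test-object-1 testfile.txt --pool=data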
The tutorials also suggest you make new pools, cephfs_data and
cephfs_metadata - would simply using data and metadata work better?
- B
On Mon, Dec 15, 2014, 10:37 PM Udo
If you are trying to see if your mails come through, don't check on the
list: you have a Gmail account, and Gmail hides the copies of messages you
sent yourself.
You can check the archives to see.
And your mails did make it to the list.
--
Lindsay
Hi,
I am integrating ceph firefly radosgw with openstack juno keystone. The
operating system used on the ceph
nodes and on the openstack node is Ubuntu 14.04.
I am able to create containers and upload files using the swift client to
ceph.
But when I try to download files, I am getting the
Vivek,
The problem is that the swift client is only downloading a chunk of the object,
not the whole object, hence the etag mismatch. Could you paste the value of
'rgw_max_chunk_size'? Please be sure you set this to a sane
value (4MB; at least for the Giant release, it works below this value).
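If you are not sure what it is set to, it can be read from the running gateway
or pinned in ceph.conf, e.g. (the socket path and section name depend on your setup):

ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config show \
| grep rgw_max_chunk_size

# ceph.conf on the gateway
[client.radosgw.gateway]
rgw max chunk size = 4194304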
On Tue, Dec 16, 2014
On 15.12.2014 23:45, Sebastien Han wrote:
Salut,
The general recommended ratio (for me at least) is 3 journals per SSD. Using
200GB Intel DC S3700 is great.
If you’re going with a low perf scenario I don’t think you should bother
buying SSD, just remove them from the picture and do 12 SATA
Hi,
I want to test the ceph cache tier. The test cluster has three machines, each
with an SSD and a SATA disk. I've created a crush rule ssd_ruleset to place ssdpool
on the SSD osds, but cannot assign pgs to the ssds.
root@ceph10:~# ceph osd crush rule list
[
    "replicated_ruleset",
    "ssd_ruleset"]
root@ceph10:~#
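For reference, pointing the pool at the rule is normally done like this (the
rule id 1 is only assumed here - check it with the dump command first):

ceph osd crush rule dump          # note the rule_id of ssd_ruleset
ceph osd pool set ssdpool crush_ruleset 1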
Hello Mike,
There is also another way:
* for CONF 2,3 replace the 200GB SSD with an 800GB one and add another 1-2 SSDs to
each node.
* make a tier1 read-write cache on the SSDs (rough commands sketched below)
* you can also add a journal partition on them if you wish - then data
will move from SSD to SSD before settling down on the HDDs
* on HDD
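A rough sketch of the cache-tier wiring that implies (pool names are only
placeholders):

ceph osd tier add sata-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay sata-pool ssd-cache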
hi
When I execute ceph-deploy osd prepare node3:/dev/sdb, an error like this always
comes up:
[node3][WARNIN] INFO:ceph-disk:Running command: /bin/umount --
/var/lib/ceph/tmp/mnt.u2KXW3
[node3][WARNIN] umount: /var/lib/ceph/tmp/mnt.u2KXW3: target is busy.
Then I execute /bin/umount --
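In case it helps, this is the usual way to see what is still holding the mount
point before retrying (the path is copied from the error above):

fuser -vm /var/lib/ceph/tmp/mnt.u2KXW3
lsof +D /var/lib/ceph/tmp/mnt.u2KXW3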