There were some investigations around F2FS as well
(https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the last time
I tried to install an OSD dir under f2fs it failed.
I tried to run the OSD on f2fs, however ceph-osd mkfs got stuck on an xattr test:
fremovexattr(10,
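(Not from the original mail, just an illustration.) A quick way to check whether a candidate OSD filesystem handles the user xattr operations that ceph-osd mkfs exercises is to set, read and remove an xattr by hand; the mount point below is a placeholder and the commands need the attr package.

  # /mnt/f2fs is only an example mount point
  touch /mnt/f2fs/xattr-test
  setfattr -n user.ceph_test -v 1 /mnt/f2fs/xattr-test   # set a user xattr
  getfattr -n user.ceph_test /mnt/f2fs/xattr-test        # read it back
  setfattr -x user.ceph_test /mnt/f2fs/xattr-test        # remove it (roughly the operation that hung above)
  rm /mnt/f2fs/xattr-test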
Hi Matan,
Hadoop on CephFS is part of the regular test suites run on CephFS, so
it should work at least to some extent. Any testing/feedback on this
will be appreciated.
As far as I know, the article you link is the best available documentation.
Cheers,
John
On Fri, Oct 24, 2014 at 8:30 PM,
Hello,
Can anyone help me? How can I modify the port of the mon?
And how can I modify the cluster name?
Thanks,
Att.
---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11
On 10/27/2014 03:42 PM, Daniel Takatori Ohara wrote:
Hello,
Can anyone help me? How can I modify the port of the mon?
The default port is 6789. Why would you want to change it?
It is possible by changing the monmap, but I'm just trying to understand
the reasoning behind it.
And how can i
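(Sketch added for illustration, not part of Wido's reply.) If you do need to change it, the usual route is to edit the monmap while the monitor is stopped and then update ceph.conf; the mon name "a" and the address/port below are placeholders.

  ceph mon getmap -o /tmp/monmap              # grab the current monmap
  monmaptool --print /tmp/monmap              # inspect the existing entries
  monmaptool --rm a /tmp/monmap               # drop mon.a's old address
  monmaptool --add a 192.168.0.10:6790 /tmp/monmap
  ceph-mon -i a --inject-monmap /tmp/monmap   # push the edited map into the stopped mon.a
  # then update "mon host" / "mon addr" in ceph.conf on all nodes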
On 10/27/2014 03:50 PM, Daniel Takatori Ohara wrote:
Hello Wido,
Thanks for the answer. I'm new to Ceph, and I have a problem.
I have 2 clusters, but when I execute the df command on the clients, I see
only one directory. With the mount command, I see both clusters.
It could be that they are
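(Illustration only, not part of the reply.) If the goal is simply to see both clusters from one client, mounting each one on its own mountpoint keeps them distinct in df; the monitor addresses, paths and secret files below are made up.

  mount -t ceph 10.0.1.1:6789:/ /mnt/cluster1 -o name=admin,secretfile=/etc/ceph/cluster1.secret
  mount -t ceph 10.0.2.1:6789:/ /mnt/cluster2 -o name=admin,secretfile=/etc/ceph/cluster2.secret
  df -h /mnt/cluster1 /mnt/cluster2   # each cluster now shows up as its own filesystem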
Hello,
My company is planning to build a big Ceph cluster for archiving and
storing data.
Per the customer's requirements, 70% of the capacity is SATA and 30% SSD.
On the first day data is stored on SSD storage; the next day it is moved to SATA storage.
For now we have decided to use a SuperMicro SKU with 72 bays for HDDs = 22
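(Sketch for illustration, not from the thread.) One way to express "SSD first, SATA the next day" in Ceph is a writeback cache tier: an SSD-backed pool overlaid on a SATA-backed pool, with dirty objects flushed down as the cache fills. The pool names below are made up and assume CRUSH rules already split SSD and SATA OSDs.

  ceph osd tier add sata-pool ssd-pool            # attach the SSD pool as a tier of the SATA pool
  ceph osd tier cache-mode ssd-pool writeback
  ceph osd tier set-overlay sata-pool ssd-pool    # clients write to sata-pool, data lands in ssd-pool
  ceph osd pool set ssd-pool cache_target_dirty_ratio 0.4
  ceph osd pool set ssd-pool cache_target_full_ratio 0.8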
On 10/27/2014 04:30 PM, Mike wrote:
Hello,
My company is planning to build a big Ceph cluster for archiving and
storing data.
Per the customer's requirements, 70% of the capacity is SATA and 30% SSD.
On the first day data is stored on SSD storage; the next day it is moved to SATA storage.
How are you planning on
Hi,
October 27 2014 5:07 PM, Wido den Hollander w...@42on.com wrote:
On 10/27/2014 04:30 PM, Mike wrote:
Hello,
My company is planning to build a big Ceph cluster for archiving and
storing data.
Per the customer's requirements, 70% of the capacity is SATA and 30% SSD.
On the first day data is stored
* /dev/disk/by-id
by-path will change if you connect the disk to a different controller,
replace your controller with another model, or put it in a different PCI
slot
On Sat, 25 Oct 2014 17:20:58 +, Scott Laird sc...@sigkill.org
wrote:
You'd be best off using /dev/disk/by-path/ or similar links;
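(Example added for reference.) The difference is easy to see on any node; the device name below is just an example.

  ls -l /dev/disk/by-path/ | grep sdb   # the path link encodes the PCI slot/port, so it moves with the controller
  ls -l /dev/disk/by-id/   | grep sdb   # the id link encodes the drive's model/serial or WWN, so it follows the disk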
On 10/27/2014 05:32 PM, Dan van der Ster wrote:
Hi,
October 27 2014 5:07 PM, Wido den Hollander w...@42on.com wrote:
On 10/27/2014 04:30 PM, Mike wrote:
Hello,
My company is planning to build a big Ceph cluster for archiving and
storing data.
Per the customer's requirements, 70% of
I don't imagine this will ever be a feature. CephFS and RadosGW have
fundamentally different goals and use cases. While I can think up a way to
map from one to the other and back, it would be a very limited and
frustrating experience.
If you're having problems with MDS stability, you're better
My experience is that once you hit this bug, those PGs are gone. I tried
marking the primary OSD OUT, which caused this problem to move to the new
primary OSD. Luckily for me, my affected PGs were only used for replication
state in the secondary cluster. I ended up deleting the whole pool and
Nice. Thanks all, I'll adjust my scripts to call ceph-deploy using
/dev/disk/by-id for future OSDs.
I tried stopping an existing OSD on another node (which is working -
osd.33 in this case), changing /var/lib/ceph/osd/ceph-33/journal to
point to the same partition using /dev/disk/by-id, and
Double-check that you did it right. Does 'ls -lL
/var/lib/ceph/osd/ceph-33/journal' resolve to a block-special device?
On Mon Oct 27 2014 at 12:12:20 PM Steve Anthony sma...@lehigh.edu wrote:
Nice. Thanks all, I'll adjust my scripts to call ceph-deploy using
/dev/disk/by-id for future OSDs.
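(Sketch for anyone following along, not part of the original mails.) The switch Steve describes is just replacing the journal symlink while the OSD is stopped; the OSD id and the by-id partition name are placeholders, and this assumes the link merely gets a new name for the same journal partition.

  service ceph stop osd.33
  cd /var/lib/ceph/osd/ceph-33
  ln -sfn /dev/disk/by-id/wwn-0x5000c500example-part2 journal
  ls -lL journal          # Scott's check: should resolve to a block-special device
  service ceph start osd.33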
Oh, hey look at that. I must have screwed something up before. I thought
it was strange that it didn't work.
Works now, thanks!
-Steve
On 10/27/2014 03:20 PM, Scott Laird wrote:
Double-check that you did it right. Does 'ls -lL
/var/lib/ceph/osd/ceph-33/journal' resolve to a block-special
I've noticed a pretty steep performance degradation when using RBDs with
LIO. I've tried a multitude of configurations to see if there are any
changes in performance and I've only found a few that work (sort of).
Details about the systems being used:
- All network hardware for data is 10gbe,
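(Added for context, not from Chris's mail.) The basic shape of the setup under discussion is a kernel-mapped RBD exported through LIO's block backstore; the pool/image names and IQNs below are placeholders.

  rbd map rbd/test-image                      # kernel client maps the image, e.g. to /dev/rbd0
  targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
  targetcli /iscsi create iqn.2014-10.com.example:rbd-gw
  targetcli /iscsi/iqn.2014-10.com.example:rbd-gw/tpg1/luns create /backstores/block/rbd0
  targetcli /iscsi/iqn.2014-10.com.example:rbd-gw/tpg1/acls create iqn.2014-10.com.example:initiator1
  targetcli saveconfig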
Hi Chris,
I'm doing something very similar to you, though only at a very early stage of
testing, but I don't seem to see the same problem that you are experiencing.
My setup is as follows:-
1x HP DL360 Server Running ESX 5.1
8x 10K SAS 146GB drives each configured as a RAID 0 and with a
Hi Nick,
Thanks for the response. I'm glad to hear you've got something that
provides reasonable performance; that brings some hope to my situation.
I am using the kernel RBD client.
Using a different OS for the gateway/iSCSI nodes was going to be my next
step. Especially now, seeing that you
Hi Chris,
I am not an expert on LIO, but from your results it seems RBD/Ceph works
well (RBD on the local system, no iSCSI) and LIO works well (ramdisk (no RBD) -> LIO
target), and if you change LIO to use another interface (file, loopback) to
play with RBD, it also works well.
So
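(Illustration only.) One way to quantify the comparison Xiaoxi describes is to run the same fio job once against the locally mapped RBD on the gateway and once against the iSCSI LUN on the initiator; the device paths are placeholders, and the job writes directly to the devices, so only use scratch volumes.

  fio --name=rbd-direct --filename=/dev/rbd0 --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
  fio --name=via-lio --filename=/dev/sdx --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based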
On Tue, 28 Oct 2014, Chen, Xiaoxi wrote:
Hi Chris,
I am not an expert on LIO, but from your results it seems RBD/Ceph
works well (RBD on the local system, no iSCSI) and LIO works well (ramdisk (no
RBD) -> LIO target), and if you change LIO to use another interface (file,
loopback) to play
On Mon, 27 Oct 2014 15:13:30 +0100 Sebastien Han wrote:
There were some investigations around F2FS as well
(https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt); the
last time I tried to install an OSD dir under f2fs it failed. I tried to
run the OSD on f2fs, however ceph-osd mkfs got
On Mon, 27 Oct 2014 19:30:23 +0400 Mike wrote:
Hello,
My company is planning to build a big Ceph cluster for archiving and
storing data.
Per the customer's requirements, 70% of the capacity is SATA and 30% SSD.
On the first day data is stored on SSD storage; the next day it is moved to
SATA storage.
Lots of