also work for single node.
Just wondering, would an OSD work on OCFS2, and what would the performance
characteristics be?
Any thoughts/experience?
BR,
Ugis Racko
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
(truncated /proc/cpuinfo excerpt)
flags: ... sse2 ss ht tm pbe syscall nx lm constant_tsc pebs bts nopl pni dtes64 monitor ds_cpl cid cx16 xtpr lahf_lm
bogomips: 6400.15
clflush size: 64
cache_alignment: 128
address sizes: 36 bits physical, 48 bits virtual
power management:
Br,
Ugis
Ugis, please provide the output of:
RBD_DEVICE=<rbd device name>
pvs -o pe_start $RBD_DEVICE
cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
The 'pvs' command will tell you where LVM aligned the start of the data
area (which follows the LVM metadata area)
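The three commands above can be combined into a quick alignment check. This is a sketch, not from the original thread: the helper name `aligned`, the device name `/dev/rbd0`, and the example byte values are assumptions (1 MiB is a common LVM `pe_start` default, 4 MiB a common RBD object size).

```shell
# Sketch: is the LVM data-area start a multiple of the RBD optimal I/O size?
aligned() {
    # aligned <pe_start_bytes> <optimal_io_size_bytes> -> prints yes/no
    if [ "$2" -gt 0 ] && [ $(( $1 % $2 )) -eq 0 ]; then
        echo yes
    else
        echo no
    fi
}

# On a live system the two inputs would come from (device name assumed):
#   pvs --noheadings -o pe_start --units b /dev/rbd0
#   cat /sys/block/rbd0/queue/optimal_io_size
# Example with common defaults: 1 MiB pe_start, 4 MiB optimal_io_size.
aligned 1048576 4194304    # -> no: 1 MiB is not a multiple of 4 MiB
```

If the answer is "no", every large sequential write through LVM straddles RBD object boundaries, which is one plausible source of the performance gap being discussed.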
something can still be done, or whether I will have to move several TB
off the LVM :)
Anyway, it does not feel like the cause of the problem is clear. Maybe I
need to file a bug if that is relevant, but where?
Ugis
2013/10/21 Mike Snitzer snit...@redhat.com:
On Mon, Oct 21 2013 at 2:06pm -0400,
Christoph
in the future if someone did the same. When you get used to the
reliability of multiple-copy pools, it is easy to mess up with
single-copy pools :)
Ugis
2013/3/12 Ugis ugi...@gmail.com:
Hi,
Last week I unintentionally created zombie PGs - they do not
exist any more, and I cannot detect/delete them
2012/12/20 Alex Elder el...@inktank.com:
On 12/19/2012 05:17 PM, Ugis wrote:
Hi all,
I have been struggling to map Ceph RBD images for the last week, but
constantly get kernel crashes.
What has been done:
Previously we had v0.48 set up as a test cluster (4 hosts, 5 OSDs, 3
mons, 3 MDSs, custom
and sharing ceph template.
In the more distant future Ceph could be queryable via SNMP, but first
things first :)
Ugis
2013/1/3 Paul Pettigrew paul.pettig...@mach.com.au:
Happy new year all
Over the past year, our company has written an extensive guide on how we
monitor ceph using Zabbix.
I
anyway.
Ugis
2015-06-06 8:53 GMT+03:00 Ugis ugi...@gmail.com:
Hi,
I had a recent problem with a flapping HDD, and as a result I need to
delete a broken RBD.
The problem is that all operations towards this RBD get stuck. I cannot
even delete the RBD - it sits at 6% done, and I found this line in one
of the OSDs' logs
, but any way that eventually helps to delete that RBD will do.
Ugis
, but if another policy is needed, that could be made specifiable via
some recovery variable, like it is now in the recovery section here:
http://dachary.org/loic/ceph-doc/rados/configuration/osd-config-ref/
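For context, the recovery section linked above exposes per-OSD throttles of the kind being suggested. A minimal ceph.conf fragment, with illustrative values (not recommendations from this thread):

```ini
[osd]
    ; Throttle recovery/backfill so client I/O keeps priority
    osd recovery max active = 1
    osd max backfills = 1
    osd recovery op priority = 1
```

A new policy variable of the sort proposed here would presumably live alongside these existing options.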
P.S. no active client I/O currently present.
Best regards,
Ugis