I was able to bring a server back online for a short time and perform an
export of the incomplete PGs I originally posted about last week. The
export listed the files as it exported them and then wrote them all to a
PGID.export file. I then SCP'ed the four PGID.export files to a server
where I
It's not simply a zip archive. I recently went through an incomplete PG
incident as well. I'm not sure why your import is failing, but I do know
that much. Here's a note from Slack from our effort to reverse-engineer
the export format. I'm hoping to explore this a bit more in the next week.
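For anyone following along, the usual export/import cycle with
ceph-objectstore-tool looks roughly like the following. This is only a
sketch: the OSD ids, the PG id 1.2a and the file paths are placeholders,
both OSDs must be stopped while the tool runs, and FileStore OSDs also
need --journal-path:

# export the PG from an OSD that still has a copy
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 1.2a --op export --file /tmp/1.2a.export

# import it into the target OSD (the PG id is embedded in the file)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-34 \
    --op import --file /tmp/1.2a.export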
Data frames appear to have the
Hi,
GLIBC 2.25-r9
GCC 6.4.0-r1
When compiling Ceph 12.2.2, the compilation hangs (cc1plus goes into an
infinite loop and never finishes, requiring the process to be killed
manually) while compiling the file
'src/rocksdb/monitoring/statistics.cc'. By 'forever', I mean I've let it
sit and it ran for
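One thing that may help narrow this down: cc1plus loops like this are
usually optimizer bugs, so it's worth grabbing the exact compile command
for the hanging file and retrying it at a lower optimization level. A
sketch, assuming a cmake/make build tree:

cd build
# print the full g++ invocation for the offending file
make VERBOSE=1 2>&1 | grep statistics.cc
# then re-run that command by hand with -O2 replaced by -O1 (or -O0)
# to see whether it completes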
Hi
On Mon, Jan 8, 2018 at 9:27 PM, Jason Dillaman wrote:
> If you are using a pre-created RBD image for this, you will need to
> disable all the image features that krbd doesn't support:
>
> # rbd feature disable dummy01 exclusive-lock,object-map,fast-diff,deep-flatten
>
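A side note, not from Jason's message: if you create the image yourself,
you can sidestep the feature dance by enabling only what krbd supports.
Image and pool names below are examples:

# create an image with just the 'layering' feature, which krbd handles
rbd create rbd/dummy01 --size 10G --image-feature layering

# inspect the features of an existing image
rbd info rbd/dummy01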
I followed the announcement of Luminous and erasure coding when I
configured my system. Could this be the reason why my pool overloads
when I push too much data to it?
root@pve:/# ceph osd erasure-code-profile get ec-42-profile
crush-device-class=hdd
crush-failure-domain=osd
crush-root=default
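For context, a profile like that is normally created along these lines;
k=4 m=2 below is only a guess from the 'ec-42' name, since your output
above does not show them:

ceph osd erasure-code-profile set ec-42-profile \
    k=4 m=2 \
    crush-device-class=hdd crush-failure-domain=osd crush-root=default

Note that crush-failure-domain=osd lets several chunks of a PG land on
the same host, so a single host failure can take out more than m chunks.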
There are some documented issues with bluestore and jemalloc. At the
moment, I would avoid it.
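For the record, if you do decide to try it later, the usual approach on
CentOS 7 is an LD_PRELOAD for the Ceph daemons; the library path below
is whatever your jemalloc package actually installs (here, the EPEL one):

yum install jemalloc

# add to /etc/sysconfig/ceph, which the Ceph systemd units read
LD_PRELOAD=/usr/lib64/libjemalloc.so.1

# then restart the daemons, e.g.
systemctl restart ceph-osd.target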
On Jan 13, 2018 5:43 PM, "Marc Roos" wrote:
>
> I was thinking of enabling this jemalloc. Is there a recommended procedure
> for a default centos7 cluster?
>
I regularly read the opposite here, and was thinking of switching to EC.
Are you sure about what is causing your poor results?
http://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
http://ceph.com/geen-categorie/ceph-pool-migration/
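The first link is about exactly this: on Luminous you can put RBD data
straight on an EC pool (no cache tier) by allowing EC overwrites.
Roughly, with placeholder pool names, and noting this needs BlueStore
OSDs:

ceph osd pool set my_ec_pool allow_ec_overwrites true
# image metadata stays on a replicated pool; only data hits the EC pool
rbd create my_image --size 1T --pool my_rep_pool --data-pool my_ec_pool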
On Sat, Jan 13, 2018 at 5:58 PM, Traiano Welcome wrote:
> Hi
>
> On Mon, Jan 8, 2018 at 9:27 PM, Jason Dillaman
> wrote:
>
>> If you are using a pre-created RBD image for this, you will need to
>> disable all the image features that krbd doesn't support:
On Sun, Jan 14, 2018 at 4:41 AM, Dyweni - Ceph-Users <6exbab4fy...@dyweni.com>
wrote:
> Hi,
>
> GLIBC 2.25-r9
> GCC 6.4.0-r1
>
> When compiling Ceph 12.2.2, the compilation hangs (cc1plus goes into an
> infinite loop and never finishes, requiring the process to be killed
> manually) while
I was thinking of enabling this jemalloc. Is there a recommended procedure for
a default centos7 cluster?
I didn't test with Luminous; I am still using Kraken.
But for a normal RBD workload, using EC with a cache tier does not give
good results at all.
This pool has been running for a few months with real users, and while in
testing it seemed somewhat OK and usable (still slow), with a real working
set it is very
You cannot change a pool between EC and replicated. There is no migration
path between the two without creating a new pool. You can't even change
the EC profile of an EC pool after it's created.
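The migration itself is what the ceph-pool-migration link earlier in the
thread walks through in more detail. The crude version, for a pool you
can quiesce, looks roughly like this (placeholder names; rados cppool has
caveats around snapshots and EC pools, so test first):

ceph osd pool create newpool 128 128 replicated
rados cppool oldpool newpool
ceph osd pool rename oldpool oldpool.bak
ceph osd pool rename newpool oldpool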
On Sat, Jan 13, 2018, 7:00 PM mofta7y wrote:
> I didn't test with Luminous; I am
Hi All,
Is there a way to switch a pool that is set to EC to being replicated,
without the need to switch to a new pool and migrate the data?
I am getting poor results from EC and want to switch to replicated, but I
already have customers on the system.
I am using Ceph 11 (Kraken).
The EC pool already has a cache tier.