Hello!
On Sat, Sep 19, 2015 at 07:03:35AM +0200, martin wrote:
> Thanks all for the suggestions.
> Our storage nodes have plenty of RAM and their only purpose is to host the
> OSD daemons, so we will not create a swap partition on provisioning.
As an option, you can use a swap file on demand. It
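(For what it's worth, a minimal sketch of adding a swap file on demand; the path and size are only placeholders:
  # dd if=/dev/zero of=/swapfile bs=1M count=4096
  # chmod 600 /swapfile
  # mkswap /swapfile
  # swapon /swapfile
It can be dropped again later with "swapoff /swapfile" and removing the file.)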
Do you have a core file from the crash? If you do and can find out
which pointers are invalid that would help...I think "cct" must be the
broken one, but maybe it's just the Inode* or something.
-Greg
On Mon, Sep 21, 2015 at 2:03 PM, Scottix wrote:
> I was rsyncing files to
I didn't get the core dump.
I set it up now and I'll try to see if I can get it to crash again.
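(In case it helps, a rough sketch of enabling core dumps before reproducing; the exact core location depends on your distro:
  # ulimit -c unlimited                  (in the shell that will run ceph-fuse)
  # cat /proc/sys/kernel/core_pattern    (shows where the core file will land)
Then start ceph-fuse from that same shell and check that location after the next crash.)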
On Mon, Sep 21, 2015 at 3:40 PM Gregory Farnum wrote:
> Do you have a core file from the crash? If you do and can find out
> which pointers are invalid that would help...I think
OpenStack Kilo uses Ceph as the backend storage (nova, cinder and glance). After
enabling a cache tier for the glance pool, taking a snapshot of an instance fails (it
seems to generate the snapshot and then delete it automatically soon after).
Is a cache tier not suitable for the glance pool?
Best Regards!
Hello,
I'm facing a problem where the MDS does not seem to start.
I started mds in debug mode "ceph-mds -f -i storage08 --debug_mds 10" which
outputs in the log:
-- cut -
2015-09-21 14:12:14.313534 7ff47983d780 0 ceph version 0.94.3
Follow the instructions here to set up a filesystem:
http://docs.ceph.com/docs/master/cephfs/createfs/
It looks like you haven't done "ceph fs new".
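(Roughly, per that page; the pool names and PG counts below are only placeholders:
  # ceph osd pool create cephfs_data 64
  # ceph osd pool create cephfs_metadata 64
  # ceph fs new cephfs cephfs_metadata cephfs_data
After that, "ceph mds stat" should show the MDS going from standby to active.)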
Cheers,
John
On Mon, Sep 21, 2015 at 1:34 PM, Frank, Petric (Petric)
wrote:
> Hello,
>
> i'm facing a problem
Hi all!
A quick question:
We are syncing data over CephFS, and we are seeing messages in our
output like:
mds0: Client client008 failing to respond to capability release
What does this mean? I can't find information about this anywhere else.
We are running Ceph 9.0.3
On earlier versions,
On Mon, Sep 21, 2015 at 3:50 PM, Wido den Hollander wrote:
>
>
> On 21-09-15 15:05, SCHAER Frederic wrote:
>> Hi,
>>
>> Forgive the question if the answer is obvious... It's been more than "an
>> hour or so" and eu.ceph.com apparently still hasn't been re-signed or at
>> least
On 21-09-15 13:18, Dan van der Ster wrote:
> On Mon, Sep 21, 2015 at 12:11 PM, Wido den Hollander wrote:
>> You can also change 'straw_calc_version' to 2 in the CRUSHMap.
>
> AFAIK straw_calc_version = 1 is the optimal. straw_calc_version = 2 is
> not defined. See
On 19/09/15 03:28, Sage Weil wrote:
> On Fri, 18 Sep 2015, Alfredo Deza wrote:
>> The new locations are in:
>>
>>
>> http://packages.ceph.com/
>>
>> For debian this would be:
>>
>> http://packages.ceph.com/debian-{release}
>
> Make that download.ceph.com .. the packages url was temporary while we
So it sounds like you've got two different things here:
1) You get a lot of slow operations that show up as warnings.
2) Rarely, you get blocked op warnings that don't seem to go away
until the cluster state changes somehow.
(2) is the interesting one. Since you say the cluster is under heavy
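(For context, blocked requests can usually be tracked down with something like the following; osd.12 is just a placeholder id:
  # ceph health detail                     (lists the OSDs with blocked requests)
  # ceph daemon osd.12 dump_historic_ops   (recent slow ops, run on that OSD's host)
This is only a sketch of where to look, not a diagnosis.)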
In my lab cluster I can saturate the disks and I'm not seeing any of
the blocked I/Os from the Ceph side, although the client shows that
I/O stops for a while. I'm not convinced that it is load related.
I was looking through the logs using the
On Mon, Sep 21, 2015 at 7:07 AM, Robert LeBlanc wrote:
> On Mon, Sep 21, 2015 at 3:02 AM, Wouter De Borger wrote:
>> Thank you for your answer! We will use size=4 and min_size=2, which should
>> do the trick.
>>
>>
Ceph-disk (which ceph-deploy uses) uses GPT to partition OSDs so that
they can be started automatically by udev and can reference journal
partitions using unique identifiers. The necessary data to start the
OSD (the auth key, fsid, etc.) are stored
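(A quick way to see this on a node, as a sketch; the device name is a placeholder:
  # ceph-disk list          (shows which partitions hold data and journals)
  # sgdisk -i 1 /dev/sdb    (prints the GPT partition type GUID that udev matches on)
)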
Hi,
Since the security notice regarding ceph.com, the mirroring system has been broken.
This meant that eu.ceph.com didn't serve new packages since the whole
download system changed.
I didn't have much time to fix this, but today I resolved it by
installing Varnish [0] on eu.ceph.com
The VCL which is
I think you will be OK, but you should double check on a test cluster.
You should be able to revert the rulesets if the data isn't found.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon,
Hi ceph users,
I am using CephFS for file storage and I have noticed that the data gets
distributed very unevenly across OSDs.
* I have about 90 OSDs across 8 hosts, and 4096 PGs for the cephfs_data
pool with 2 replicas, which is in line with the total PG recommendation if
"Total PGs =
This is usually indicative of the same tracepoint event being included by both
a static and dynamic library. See the following thread regarding this issue
within Ceph when LTTng-ust was first integrated [1]. Since I don't have any
insight into your application, are you somehow linking against
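(One rough way to sanity-check for a doubly linked provider; the binary and library names are placeholders:
  # ldd ./your_app | grep lttng
  # nm -D /usr/lib/your_tracing_lib.so | grep -i tracepoint
If the same provider shows up both in the binary and in a shared library it pulls in, that would match the symptom described in [1].)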
On 21/09/15 16:32, John Spray wrote:
On Mon, Sep 21, 2015 at 2:33 PM, Kenneth Waegeman
wrote:
Hi all!
A quick question:
We are syncing data over cephfs , and we are seeing messages in our output
like:
mds0: Client client008 failing to respond to capability
On Mon, Sep 21, 2015 at 12:11 PM, Wido den Hollander wrote:
> You can also change 'straw_calc_version' to 2 in the CRUSHMap.
AFAIK straw_calc_version = 1 is the optimal one; straw_calc_version = 2 is
not defined. See src/crush/builder.c
Cheers, Dan
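(For reference, a sketch of how to check and set it; filenames are placeholders:
  # ceph osd getcrushmap -o crushmap.bin
  # crushtool -d crushmap.bin -o crushmap.txt    (look for "tunable straw_calc_version")
  # ceph osd crush set-tunable straw_calc_version 1
If I recall correctly, changing the value itself does not remap data; it only affects how straw weights are computed on later map changes.)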
I have a Ceph cluster set up with two rulesets, and each ruleset selects different
OSDs. Can I change the ruleset for a pool directly while it is online (there is
data on the pool)?
Best Regards!
向毓(Raijin.Xiang)
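(For what it's worth, switching a pool to another ruleset is a single online command; the pool name and ruleset id below are placeholders, and expect data movement while the PGs remap:
  # ceph osd pool set <poolname> crush_ruleset <ruleset-id>
  # ceph -s    (watch backfill/recovery progress afterwards)
)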
Hi,
depending on the cache mode etc. - from what we have also experienced (using
CloudStack) - Ceph snapshot functionality simply stops working in some
cache configurations.
This means we were also unable to deploy new VMs (the base-gold snapshot is
created on Ceph and the new data disk, which is a child of
On 21.09.2015 at 13:47, Wido den Hollander wrote:
>
>
> On 21-09-15 13:18, Dan van der Ster wrote:
>> On Mon, Sep 21, 2015 at 12:11 PM, Wido den Hollander wrote:
>>> You can also change 'straw_calc_version' to 2 in the CRUSHMap.
>>
>> AFAIK straw_calc_version = 1 is the
On Sat, Sep 19, 2015 at 7:54 PM, Lindsay Mathieson
wrote:
> I'm getting:
>
> W: GPG error: http://download.ceph.com wheezy Release: The following
> signatures couldn't be verified because the public key is not available:
> NO_PUBKEY E84AC2C0460F3994
>
>
> Trying to
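(For anyone hitting the same NO_PUBKEY error, a sketch of importing the new release key on Debian/Ubuntu:
  # wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
  # apt-get update
)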
On Mon, Sep 21, 2015 at 2:33 PM, Kenneth Waegeman
wrote:
> Hi all!
>
> A quick question:
> We are syncing data over cephfs , and we are seeing messages in our output
> like:
>
> mds0: Client client008 failing to respond to capability release
>
> What does this mean? I
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Wido
den Hollander
Sent: Monday, 21 September 2015 15:50
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Important security notice regarding release signing
key
On 21-09-15 15:05,
On 21-09-15 15:57, Dan van der Ster wrote:
> On Mon, Sep 21, 2015 at 3:50 PM, Wido den Hollander wrote:
>>
>>
>> On 21-09-15 15:05, SCHAER Frederic wrote:
>>> Hi,
>>>
>>> Forgive the question if the answer is obvious... It's been more than "an
>>> hour or so" and eu.ceph.com
On Mon, Sep 21, 2015 at 3:02 AM, Wouter De Borger wrote:
> Thank you for your answer! We will use size=4 and min_size=2, which should
> do the trick.
>
> For the monitor issue, we have a third datacenter (with higher latency, but
> that
If I recall correctly, there were a bug or two found with cache tiers
and snapshots that have since been fixed. I hope they are being backported to
Hammer. I don't know if this exactly fixes your issue.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4
Hi,
I'm looking for a recommended client-side local SSD caching strategy on OpenStack
Compute where the backend storage is a Ceph cluster. The goal is to reduce
Compute-to-Storage traffic. As far as I know, librbd does not support local SSD
caching. Besides, I'm not sure if block SSD caching of local
On 21-09-15 11:06, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> how can i upgrade / move from straw to straw2? I checked the docs but i
> was unable to find upgrade informations?
>
First make sure that all clients are running librados 0.9, but keep in
mind that any running VMs or processes
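(A sketch of the CRUSH-map route, with placeholder filenames; only inject the new map once all clients and daemons understand straw2, and expect some data movement:
  # ceph osd getcrushmap -o crushmap.bin
  # crushtool -d crushmap.bin -o crushmap.txt
  (edit crushmap.txt and change "alg straw" to "alg straw2" for the buckets you want)
  # crushtool -c crushmap.txt -o crushmap.new
  # ceph osd setcrushmap -i crushmap.new
)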
Hello John,
that was the info I missed (both creating the pools and the fs). It works now.
Thank you very much.
Kind regards
Petric
> -Original Message-
> From: John Spray [mailto:jsp...@redhat.com]
> Sent: Montag, 21. September 2015 14:41
> To: Frank, Petric (Petric)
> Cc:
On 21-09-15 15:05, SCHAER Frederic wrote:
> Hi,
>
> Forgive the question if the answer is obvious... It's been more than "an hour
> or so" and eu.ceph.com apparently still hasn't been re-signed or at least
> what I checked wasn't :
>
> # rpm -qp --qf '%{RSAHEADER:pgpsig}'
>
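(On the RPM side, a sketch of importing the new key and re-checking a downloaded package; the package filename is a placeholder:
  # rpm --import 'https://download.ceph.com/keys/release.asc'
  # rpm -qp --qf '%{RSAHEADER:pgpsig}\n' ceph-0.94.3-0.el7.x86_64.rpm
  # rpm -K ceph-0.94.3-0.el7.x86_64.rpm
)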
Hi Michael,
I could certainly double the total PG count, and it probably would
reduce the discrepancies somewhat, but I wonder if it would be all that
different. I could of course be very wrong.
Output of ceph osd dump | grep pool:
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0
Hello Andras,
Some initial observations and questions:
The total PG recommendation for this cluster would actually be 8192 PGs per the
formula.
Total PGs = (90 * 100) / 2 = 4500
Next power of 2 = 8192.
The result should be rounded up to the nearest power of two. Rounding up is
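(If the pool does get bumped, it would look roughly like this; the pool name and count are placeholders, and pg_num can only be increased, never decreased:
  # ceph osd pool set cephfs_data pg_num 8192
  # ceph osd pool set cephfs_data pgp_num 8192
The split and rebalance cause data movement, so on a busy cluster it is worth doing in steps.)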
I was rsyncing files to ceph from an older machine and I ran into a
ceph-fuse crash.
OpenSUSE 12.1, 3.1.10-1.29-desktop
ceph-fuse 0.94.3
The rsync was running for about 48 hours and then crashed somewhere along the
way.
I added the log and can run more if you like; I am not sure how to
reproduce
On 04.09.2015 at 11:42, Ramon Marco Navarro wrote:
Good day everyone!
I'm having a problem using aws-java-sdk to connect to Ceph using
radosgw. I am reading a "NOTICE: failed to parse date for auth header"
message in the logs. HTTP_DATE is "Fri, 04 Sep 2015 09:25:33 +00:00",
which is I think
Thank you for your answer! We will use size=4 and min_size=2, which should
do the trick.
For the monitor issue, we have a third datacenter (with higher latency, but
that shouldn't be a problem for the monitors)
We had also considered the locality issue. Our WAN round trip latency is
1.5 ms (now)
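(For reference, a sketch of applying that to an existing pool; the pool name is a placeholder:
  # ceph osd pool set <poolname> size 4
  # ceph osd pool set <poolname> min_size 2
)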
Hi,
How can I upgrade / move from straw to straw2? I checked the docs but I
was unable to find any upgrade information.
Greets,
Stefan
I'm assuming you mean from the server: you can list the clients of an
MDS by SSHing to the server where it's running and doing "ceph daemon
mds.<id> session ls". This has been in releases since Giant, iirc.
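(As a concrete sketch, with a placeholder daemon name, run on the host where that MDS runs:
  # ceph daemon mds.a session ls
)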
Cheers,
John
On Mon, Sep 21, 2015 at 4:24 AM, domain0 wrote:
> hi ,
>
On Fri, Sep 18, 2015 at 6:33 PM, Robert LeBlanc
wrote:
>
> Depends on how easy it is to rebuild an OS from scratch. If you have
> something like Puppet or Chef that configure a node completely for
> you, it may not be too