finishing?
Thanks.
===
Tu Holmes
tu.hol...@gmail.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hey gang,
Some options are just not documented well…
What’s up with:
osd_scrub_chunk_min
osd_scrub_chunk_max
osd_scrub_sleep?
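As far as I can tell, osd_scrub_chunk_min and osd_scrub_chunk_max bound how many objects a PG scrubs in one chunk while it holds the PG lock, and osd_scrub_sleep is how long the OSD pauses between chunks so client I/O can get through. A sketch of where I would set them in ceph.conf (values are assumptions, not recommendations):

[osd]
osd_scrub_chunk_min = 5      # lower bound on objects scrubbed per chunk (upstream default)
osd_scrub_chunk_max = 25     # upper bound on objects scrubbed per chunk (upstream default)
osd_scrub_sleep = 0.1        # seconds to pause between chunks; the default is 0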
===
Tu Holmes
tu.hol...@gmail.com
I had this same sort of thing with Hammer.
Looking forward to your results.
Please post your configuration when done.
I am contemplating doing a similar action to resolve my issues, and it would be
interesting to know your outcome first.
//Tu
On Thu, Apr 28, 2016 at 1:18 PM -0700, "Andr
It can be done.
However, a node hosting OSDs already has enough work to do, and you will
run into performance issues.
It has been done, and it can be done, but you are better off not doing so.
//Tu
_
From: Edward Huyer
Sent: Friday, April 29, 2016 11:30 AM
Subject
I would start here.
https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide
//Tu
_
From: Michael Ferguson
Sent: Monday, May 2, 2016 12:30 PM
Subject: [ceph-users] Lab Newbie Here: Where do I start?
To:
G’Day All,
I have tw
It's all about resources.
If you have lots of CPU and memory it is completely doable.
If you're using lower specification hardware, it might be a little
difficult.
-Tu
On Fri, May 6, 2016 at 7:11 AM David Turner
wrote:
> There is potential for locking due to hung processes or such when you have
nodes one at a time?
If that's the case, do I need to upgrade the kernels before Jewel or will
it be "ok" enough?
Thoughts?
Any tips greatly appreciated.
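Not from this thread, just the usual precaution when taking OSD nodes down one
at a time (kernel upgrades or otherwise), so the cluster does not start
rebalancing while a node is rebooting:

$ ceph osd set noout
# ... reboot / upgrade the node, wait for its OSDs to rejoin ...
$ ceph osd unset noout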
//Tu Holmes
//Doer of Things and Stuff
"ceph".
After looking, I saw the only process was the ssh process that I had
connected to the ceph node with.
By logging in as root and running su - ceph, ceph-deploy runs just fine.
Anyone else noticed something like this?
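Roughly what the workaround looks like on my end (the hostname and node name
are just placeholders):

$ ssh root@admin-node
# su - ceph
$ ceph-deploy install node01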
//Tu Holmes
r/lib/ceph/osd/ceph-24
/dev/sdm1 3.7T 1.9T 1.8T 52% /var/lib/ceph/osd/ceph-12
/dev/sdc1 3.7T 1.7T 2.0T 47% /var/lib/ceph/osd/ceph-36
/dev/sdg1 3.7T 1.8T 1.9T 49% /var/lib/ceph/osd/ceph-60
Any ideas as to what could be going on?
//Tu Holmes
That is most likely exactly what my issue is. I must have missed that step.
Thanks.
Will report back.
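For posterity, I believe the step I missed is the ownership change from the
Jewel upgrade notes, roughly (with the daemons stopped, default paths assumed):

$ sudo chown -R ceph:ceph /var/lib/ceph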
On Fri, May 13, 2016 at 11:18 AM MailingLists - EWS <
mailingli...@expresswebsystems.com> wrote:
> Did you check the permissions of those directories?
>
>
>
> Part of the steps in the upgrade
…I'm not breaking something.
Thanks.
//Tu Holmes
Thank you for the info.
Basically I should just set it to 1.
On Fri, May 13, 2016 at 5:12 PM Gregory Farnum wrote:
> On Fri, May 13, 2016 at 5:02 PM, Tu Holmes wrote:
> > Hello again Cephers... As I'm learning more and breaking more things, I'm
> > finding
cksdb
4/ 5 leveldb
1/ 5 kinetic
1/ 5 fuse
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/log/ceph/ceph-osd.8.log
--- end dump of recent events ---
Now, if I do a zap on the disk and add everything back
on.
> -Sam
>
> On Mon, Jun 6, 2016 at 11:53 AM, Tu Holmes wrote:
> > Hey cephers. I have been following the upgrade documents and I have done
> > everything regarding upgrading the client to the latest version of
> Hammer,
> > then to Jewel.
> >
> > I made s
will make sure
that those are also properly changed.
On Mon, Jun 6, 2016 at 12:12 PM Samuel Just wrote:
> Oh, what was the problem (for posterity)?
> -Sam
>
> On Mon, Jun 6, 2016 at 12:11 PM, Tu Holmes wrote:
> > It totally did and I see what the problem is.
> >
Hey Cephers.
Is there a way to force a fix on this error?
/var/log/ceph/ceph-osd.46.log.2.gz:4845:2016-06-06 22:26:57.322073
7f3569b2a700 -1 log_channel(cluster) log [ERR] : 24.325 shard 20: soid
325/hit_set_24.325_archive_2016-05-17 06:35:28.136171_2016-06-01
14:55:35.910702/head/.ceph-internal/
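In case it helps anyone hitting this later: assuming the inconsistent PG really
is 24.325 (taken from the log line above), the usual first attempt is a repair,
then watching for the scrub result:

$ ceph pg repair 24.325
$ ceph -w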
I made a udev rule for my journal disks. Pull the model from the device with udevadm.
Mine looks like this:
$ cat /etc/udev/rules.d/55-ceph-journals.rules
ATTRS{model}=="SDLFNDAR-480G-1H", OWNER="ceph", GROUP="ceph", MODE="660"
I got my model by knowing the disk ID the first time and running:
$ udevadm info -n /dev/sdj
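After dropping the rule into place I reload udev so it applies without a reboot
(standard udevadm invocations):

$ sudo udevadm control --reload-rules
$ sudo udevadm trigger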
I have seen this.
Just stop ceph and kill any ssh processes related to it.
I had the same issue, and the fix for me was to enable root login, ssh to
the node as root and run the env DEBIAN_FRONTEND=noninteractive
DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends
install -o
Where are your mon nodes?
Were you mixing mon and OSD together?
Are 2 of the mon nodes down as well?
On Jul 3, 2016 12:53 AM, "Willi Fehler" wrote:
> Hello Sean,
>
> I've powered down 2 nodes. So 6 of 9 OSD are down. But my client can't
> write and read anymore from my Ceph mount. Also 'ceph -s
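For anyone reading this later: the reason I ask is that the monitors need a
majority to form quorum. If this is a 3-mon cluster (an assumption on my part)
and 2 of the mons are on the powered-down nodes, the cluster will block I/O no
matter how many OSDs are still up. Quorum state can be checked with:

$ ceph quorum_status    # will hang if there is no quorum, which is itself the answer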
I have 12 journals on 1 SSD, but I wouldn't recommend it if you want any
real performance.
I use it in an archive-type environment.
On Wed, Jul 6, 2016 at 9:01 PM Goncalo Borges
wrote:
> Hi George...
>
>
> On my latest deployment we have set
>
> # grep journ /etc/ceph/ceph.conf
> osd journal si
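For reference, the setting being grepped for there is the journal size; a
typical stanza looks roughly like this (the value is just an example):

[osd]
osd journal size = 10240    # journal size in MB, i.e. 10 GB here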
I would use the PG calculator on the Ceph site and just use the "all in one" setting.
http://ceph.com/pgcalc/
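Roughly the arithmetic the calculator does, with assumed numbers (12 OSDs,
replica size 3, ~10% of the data expected in the metadata pool, targeting
~100 PGs per OSD):

$ echo $(( 100 * 12 / 3 / 10 ))
40
# then round up to the next power of two, i.e. 64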
On Fri, Jun 30, 2017 at 6:45 AM Riccardo Murri
wrote:
> Hello!
>
> Are there any recommendations for how many PGs to allocate to a CephFS
> meta-data pool?
>
> Assuming a simple case of a cluster with 512
Hey Cephers.
Question for you.
Do you guys use Calamari or an alternative?
If so, why has the installation of Calamari not really gotten much better
recently?
Are you still building the vagrant installers and building packages?
Just wondering what you are all doing.
Thanks.
//Tu
Another tool :
>
> http://openattic.org/
>
> - Original Message -
> From: "Marko Stojanovic"
> To: "Tu Holmes" , "John Petrini" <
> jpetr...@coredial.com>
> Cc: "ceph-users"
> Sent: Friday, January 13, 2017 09:30:16
> Subject
So what's the consensus on CephFS?
Is it ready for prime time or not?
//Tu
I could use either one. I'm just trying to get a feel for how stable the
technology is in general.
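For reference, the two client paths under discussion, with placeholder
hostnames and paths:

$ sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret   # kernel client
$ sudo ceph-fuse -m mon1:6789 /mnt/cephfs                                                       # FUSE client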
On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
wrote:
> What's your use case? Do you plan on using kernel or fuse clients?
>
> On 16 Jan 2017 23:03, "Tu Holmes" wrote:
I know this seems like a silly question, but are your monitoring nodes spec'd
the same?
//Tu
On Mon, Jan 23, 2017 at 8:38 AM Matthew Vernon wrote:
> Hi,
>
> We have a 9-node ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu
> Xenial). We're seeing both machines freezing (nothing in logs on the