Thanks for the answers, guys!
Am I right to assume msgr2 (http://docs.ceph.com/docs/mimic/dev/msgr2/)
will provide encryption between Ceph daemons as well as between clients and
daemons?
Does anybody know if it will be available in Nautilus?
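For what it's worth, msgr2 did ship in Nautilus, with the wire mode chosen per
connection class via the ms_*_mode options. A hedged ceph.conf sketch (defaults
may differ by release):

  [global]
  # prefer the encrypted msgr2 mode for daemon-to-daemon and client traffic
  ms_cluster_mode = secure
  ms_service_mode = secure
  ms_client_mode = secure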
On Fri, Jan 11, 2019 at 8:10 AM Tobias Florek wrote:
Hi everyone, I have some questions about encryption in Ceph.
1) Are RBD connections encrypted or is there an option to use encryption
between clients and Ceph? From reading the documentation, I have the
impression that the only option to guarantee encryption in transit is to
force clients to
In a recent thread on the list, I received several important answers to my
questions about the Hadoop plugin. Maybe this thread will help you.
https://www.spinics.net/lists/ceph-users/msg40790.html
One of the most important answers is about data locality. The last message
led me to this article.
>
> > Does S3 or Swifta (for Hadoop or Spark) have integrated data-layout APIs
> > for processing data locally, as the CephFS Hadoop plugin does?
> >
> With S3 and Swift you won't have data locality, as they were designed for
> the public cloud.
> We recommend disabling locality-based scheduling in Hadoop when
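For readers following along: with Spark, disabling locality-based scheduling
usually just means zeroing the locality wait. A hedged example (the right
value depends on your workload):

  # spark-defaults.conf: don't hold tasks waiting for data-local executors,
  # since objects behind S3/Swift/RGW are never node-local anyway
  spark.locality.wait 0s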
2017-11-29 4:19 GMT-02:00 Orit Wasserman <owass...@redhat.com>:
> On Tue, Nov 28, 2017 at 7:26 PM, Aristeu Gil Alves Jr
> <aristeu...@gmail.com> wrote:
> > Greg and Donny,
> >
> > Thanks for the answers. It helped a lot!
> >
> > I just watched the swifta
Greg and Donny,
Thanks for the answers. It helped a lot!
I just watched the swifta presentation and it looks quite good!
Due to the lack of updates/development, and the fact that we can also choose
Spark, I think swift/swifta with Ceph may be a good strategy too.
I need to study it more, though.
Hi.
It's my first post on the list. First of all, I have to say I'm new to
Hadoop.
We are a small lab and we have been running CephFS for almost two years,
loading it with large files (4GB to 4TB in size). Our cluster holds
approximately 400TB at ~75% usage, and we are planning
Ok, thanks for confirming.
On Thu, Mar 23, 2017 at 7:32 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> Nope. This is a theoretical possibility but would take a lot of code
> change that nobody has embarked upon yet.
> -Greg
> On Wed, Mar 22, 2017 at 2:16 PM Sergio
Hi all,
Is it possible to create a pool where the minimum number of replicas for
the write operation to be confirmed is 2 but the minimum number of replicas
to allow the object to be read is 1?
This would be useful when a pool consists of immutable objects, so we'd
have:
* size 3 (we always keep
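The closest existing knobs are the pool's size and min_size, but min_size
gates reads as well as writes (the PG stops serving I/O entirely below it),
which is why the split behaviour asked about here would need the code change
Greg mentions above. A hedged sketch with a hypothetical pool name:

  ceph osd pool set mypool size 3       # always keep 3 replicas
  ceph osd pool set mypool min_size 2   # serve I/O only while 2 replicas are up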
We tracked the problem down to the following rsyslog configuration in our
test cluster:
# forward everything via TCP to the central syslog server (target host elided)
*.* @@:
# execute the next action only when the forwarding action is suspended,
# i.e. log to a local failover file while the central server is unreachable
$ActionExecOnlyWhenPreviousIsSuspended on
& /var/log/failover.log
$ActionExecOnlyWhenPreviousIsSuspended off
It seems that the $ActionExecOnlyWhenPreviousIsSuspended directive doesn't
work well with
an
> On 27 July 2016 at 20:57, Sergio A. de Carvalho Jr. <scarvalh...@gmail.com
> > wrote:
>
>> In my case, everything else running on the host seems to be okay. I'm
>> wondering if the other problems you see aren't a side-effect of Ceph
>> services running
, even though the logs might not be getting
pushed out to the central syslog servers.
On Wed, Jul 27, 2016 at 4:49 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
> On Tue, Jul 26, 2016 at 03:48:33PM +0100, Sergio A. de Carvalho Jr. wrote:
> > As per my previous messages on the list
one of my first
> things to check when services are running weirdly.
>
> My failsafe check is to do
>
> # logger "sean test"
>
> and see if it appears in syslog. If it doesn't do it immediately, I have a
> problem
>
> Cheers,
> Sean
>
> On 27 July
chance? I've seen this behaviour before when my central log server is not
> keeping up with messages.
>
> Cheers,
> Sean
>
> On 26 July 2016 at 21:13, Sergio A. de Carvalho Jr. <scarvalh...@gmail.com
> > wrote:
>
>> I left the 4 nodes running overnight and they just craw
As per my previous messages on the list, I was having a strange problem in
my test cluster (Hammer 0.94.6, CentOS 6.5) where my monitors were
literally crawling to a halt, preventing them from ever reaching quorum and
causing all sorts of problems. As it turned out, to my surprise, everything
went back
ck. As time passes, the gap widens and
quickly the logs are over 10 minutes behind the actual time, which explains
why the logs above don't seem to overlap.
On Mon, Jul 25, 2016 at 9:37 PM, Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
> Awesome, thanks so much, Joao.
>
.de> wrote:
> On 07/25/2016 05:55 PM, Sergio A. de Carvalho Jr. wrote:
>
>> I just forced an NTP update on all hosts to be sure it's not down to clock
>> skew. I also checked that hosts can reach all other hosts on port 6789.
>>
>> I then stopped monitor 0 (60z
the time so I can't see why monitors
would be getting stuck.
On Mon, Jul 25, 2016 at 5:18 PM, Joao Eduardo Luis <j...@suse.de> wrote:
> On 07/25/2016 04:34 PM, Sergio A. de Carvalho Jr. wrote:
>
>> Thanks, Joao.
>>
>> All monitors have the exact same monmap
(cluster) log [INF] :
mon.610wl02 calling new monitor election
I'm curious about the "handle_timecheck drop unexpected msg" message.
On Mon, Jul 25, 2016 at 4:10 PM, Joao Eduardo Luis <j...@suse.de> wrote:
> On 07/25/2016 03:41 PM, Sergio A. de Carvalho Jr. wrote:
>
ng both the 4th and
> 5th simultaneously and letting them both vote?
>
> --
> Joshua M. Boniface
> Linux System Ærchitect
> Sigmentation fault. Core dumped.
>
> On 25/07/16 10:41 AM, Sergio A. de Carvalho Jr. wrote:
> > In the logs, there are 2 monitors constantly
] :
mon.60zxl02@1 won leader election with quorum 1,2,4
2016-07-25 14:32:33.440103 7fefdf4ee700 1 mon.60zxl02@1(leader).paxos(paxos
recovering c 1318755..1319319) collect timeout, calling fresh election
On Mon, Jul 25, 2016 at 3:27 PM, Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
Hi,
I have a cluster of 5 hosts running Ceph 0.94.6 on CentOS 6.5. On each
host, there is 1 monitor and 13 OSDs. We had an issue with the network, and
for some reason (which I still haven't figured out) the servers were restarted.
One host is still down, but the monitors on the 4 remaining servers are
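For anyone debugging a similar election loop: each monitor's own view of the
quorum can be read through its local admin socket even when the cluster is
unresponsive. A hedged example (socket path illustrative):

  ceph --admin-daemon /var/run/ceph/ceph-mon.60zxl02.asok mon_status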
Hi,
Does anybody know what auth capabilities are required to run commands such
as:
ceph daemon osd.0 perf dump
Even with the client.admin user, I can't get it to work:
$ ceph daemon osd.0 perf dump --name client.admin
--keyring=/etc/ceph/ceph.client.admin.keyring
{}
$ ceph auth get
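The likely explanation: "ceph daemon" talks to the local Unix admin socket and
never goes through cephx, so --name and --keyring have no effect; what matters
is local access to the socket file. A hedged equivalent (socket path
illustrative):

  # same command addressed at the socket directly; needs local root,
  # not any particular auth capability
  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump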
it be the ideal setup? Would it make sense to put
the journals of all 12 OSDs on the same 900GB disk?
Sergio
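For context, hammer-era ceph-disk accepts an optional journal device after the
data disk, so a shared journal disk looks roughly like this hedged sketch
(device names illustrative):

  # data disk first, shared journal device second; ceph-disk carves a new
  # journal partition on /dev/sdb for each OSD prepared this way
  ceph-disk prepare /dev/sdc /dev/sdb
  ceph-disk prepare /dev/sdd /dev/sdb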
On Thu, Apr 7, 2016 at 6:03 PM, Mark Nelson <mnel...@redhat.com> wrote:
> Hi Sergio
>
>
> On 04/07/2016 07:00 AM, Sergio A. de Carvalho Jr. wrote:
>
>> Hi all,
found that co-locating journals does impact
> performance and usually separating them on flash is a good idea. Also not
> sure of your networking setup which can also have significant impact.
>
Hi all,
I've setup a testing/development Ceph cluster consisting of 5 Dell
PowerEdge R720xd servers (256GB RAM, 2x 8-core Xeon E5-2650 @ 2.60 GHz,
dual-port 10Gb Ethernet, 2x 900GB + 12x 4TB disks) running CentOS 6.5 and
Ceph Hammer 0.94.6. All servers use one 900GB disk for the root partition
I was under the impression that ceph-disk activate would take care of
setting OSD weights. In fact, the short-form documentation for adding OSDs
only covers running ceph-disk prepare and activate:
http://ceph.com/docs/master/install/manual-deployment/#adding-osds
This is also how the
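The short form referred to above is roughly the following, a hedged sketch
with illustrative device names:

  ceph-disk prepare /dev/sdb
  ceph-disk activate /dev/sdb1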
Hi,
While creating a Ceph user with a pre-generated key stored in a keyring
file, ceph auth get-or-create doesn't seem to take the keyring file into
account:
# cat /tmp/user1.keyring
[client.user1]
key = AQAuJEpVgLQmJxAAQmFS9a3R7w6EHAOAIU2uVw==
# ceph auth get-or-create -i /tmp/user1.keyring
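A hedged workaround for anyone hitting this later: get-or-create always
generates its own key, but the import path does honor keys already present in
the file:

  # reads the entity and its key from the keyring file into the cluster
  ceph auth import -i /tmp/user1.keyring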
Greetings
Just a follow-up on the resolution of this issue.
Restarting ceph-osd on one of the nodes solved the problem of the
stuck unclean pgs.
Thanks,
JR
On 9/9/2014 2:24 AM, Christian Balzer wrote:
Hello,
On Tue, 09 Sep 2014 01:25:17 -0400 JR wrote:
Greetings
After running
Having read about 'ceph osd reweight', I'm a bit hesitant to
just run it (I don't want to do anything that impacts this cluster's
stability).
Is there another, better way to equalize the distribution of data across
the OSD partitions?
I'm running dumpling.
Thanks much,
JR
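For the archives: a more targeted option than hand-picking reweight values is
the utilization-based variant, which as far as I recall was already present in
dumpling. A hedged sketch (the argument is a percentage of mean utilization):

  # only reweights OSDs more than 20% above the average utilization
  ceph osd reweight-by-utilization 120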
sending it out the door).
On 9/8/2014 12:09 PM, Christian Balzer wrote:
Hello,
On Mon, 08 Sep 2014 11:42:59 -0400 JR wrote:
Greetings all,
I have a small ceph cluster (4 nodes, 2 osds per node) which recently
started showing:
root@osd45:~# ceph health
HEALTH_WARN 1 near full osd(s
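A common stopgap from this era, hedged since the command was replaced in later
releases and raising ratios only buys time until data is rebalanced or
capacity is added:

  # temporarily raise the near-full threshold (pre-Luminous syntax)
  ceph pg set_nearfull_ratio 0.9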
the expected data movement.
Thanks a lot!
JR
On 9/8/2014 10:04 PM, Christian Balzer wrote:
Hello,
On Mon, 08 Sep 2014 18:30:07 -0400 JR wrote:
Hi Christian, all,
Having researched this a bit more, it seemed that just doing
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
crush weights to 1
I'll resist doing anything for now in the hope that someone has something
coherent to say (Christian? ;-)
Thanks
JR
On 9/8/2014 10:37 PM, JR wrote:
Hi Christian,
Ha ...
root@osd45:~# ceph osd pool get rbd pg_num
pg_num: 128
root@osd45:~# ceph osd pool get rbd pgp_num
Greetings,
I've been running dumpling for several months and it seems very stable.
I'm about to spin up a new ceph environment. Would I be advised to
install emperor? Or, since dumpling is solid, just stick with it?
Thanks much,
JR
in thinking about ceph being usable in this
scenario, or is ceph really better suited to being an object store and a
provider of blocks for virtual machines?
Also, how will the ceph filesystem help with the above problem when it
becomes available (if it will)?
Thanks much for your time,
JR