--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103
http://www.redhat.com/en/technologies/storage
tel. 734-821-5101
fax. 734-769-8938
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> plato.xie
>
provide some way to cancel requests (at least from the
> client's aspect), that would guarantee that buffers are not going to
> be used (and no completion callback is going to be called).
Is the client/consumer cancellation asynchronous with respect to completion? A
cancellation in that case could ensure that, i
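Not speaking for librgw's actual API, but as a minimal sketch of the guarantee under discussion (all names here are illustrative): if cancellation synchronizes with completion, then once cancel() returns success, the buffer is provably never touched again and the completion callback never fires.

```python
import threading

class CancellableRequest:
    """Toy model of a request whose successful cancel() guarantees the
    buffer is never written and the completion callback never runs."""
    def __init__(self, buf, on_complete):
        self._lock = threading.Lock()
        self._cancelled = False
        self._finished = False
        self._buf = buf
        self._on_complete = on_complete

    def complete(self, data):
        # Called by the I/O layer when the request finishes.
        with self._lock:
            if self._cancelled or self._finished:
                return False          # too late: caller may have reclaimed the buffer
            self._buf[:len(data)] = data
            self._finished = True
        self._on_complete(self._buf)  # runs only if the request was not cancelled
        return True

    def cancel(self):
        # Synchronous w.r.t. completion: if this returns True, the buffer
        # is guaranteed unused and the callback will not be invoked.
        with self._lock:
            if self._finished:
                return False          # completion already won the race
            self._cancelled = True
            return True
```

The point of the sketch is the mutual exclusion: exactly one of complete() and cancel() can succeed, so the caller always knows whether the buffer is still live.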
Hi,
That's true, sure. We hope to support async mounts and more normal workflows
in future, but those are important caveats. Editing objects in place doesn't
work with RGW NFS.
Matt
- Original Message -
> From: "Gregory Farnum" <gfar...@redhat.com>
> ceph-devel. So far, there have not been
> replies with a way to accomplish this. Thank you.
>
>
>
> -Jon
>
>
>
>
>>
>> Cheers,
>> Valery
>>
>> --
>> SWITCH
>> Valéry Tschopp, Software Engineer
>> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
>> email: valery.tsch...@switch.ch phone: +41 44 268 1544
>>
>> 30 years of pioneering the Swiss Inter
> if this even
> works. Thanks for any help,
>
> Josef
>
> Access_Key_Id = "";
> Secret_Access_Key = "";
> }
>
> RGW {
> cluster = "ceph";
> name = "client.radosgw.radosgw-s2";
> ceph_conf = "/etc/ceph/ceph.conf";
>
> Cheers,
> Syed
>
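For reference alongside the config fragment quoted above, a fuller nfs-ganesha FSAL_RGW export might look like the following. This is a sketch only: the export ID, paths, user, and credentials are placeholders, and option names should be checked against your nfs-ganesha version.

```
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/buckets";
    Access_Type = RW;
    Squash = No_Root_Squash;
    Protocols = 4;

    FSAL {
        Name = RGW;
        User_Id = "testuser";
        Access_Key_Id = "access";
        Secret_Access_Key = "secret";
    }
}

RGW {
    cluster = "ceph";
    name = "client.radosgw.radosgw-s2";
    ceph_conf = "/etc/ceph/ceph.conf";
}
```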
>
> ceph_rgw.example.com:8080
>
>
> I get access denied,
> then I try with the ldap key and I get the same problem.
> I created a local user out of curiosity and I put in s3cmd acess and secret
> and I could create a bucket. What am I doing wrong?
>
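One thing worth checking here (a sketch based on the documented radosgw LDAP flow, not something stated in this thread): with rgw_s3_auth_use_ldap enabled, the access key that s3cmd sends must be the base64 token produced by `radosgw-token --encode --ttype=ldap`, not the raw LDAP username. Its shape can be reproduced like this (the exact serialization may differ from the tool's output):

```python
import base64
import json

def make_ldap_token(ldap_user, ldap_password):
    """Build a base64 RGW_TOKEN like radosgw-token --encode --ttype=ldap.
    The result is used as the S3 access key for LDAP-authenticated clients."""
    token = {
        "RGW_TOKEN": {
            "version": 1,
            "type": "ldap",
            "id": ldap_user,
            "key": ldap_password,
        }
    }
    return base64.b64encode(json.dumps(token).encode()).decode()
```

If the raw LDAP username is used as the access key instead, "access denied" is exactly the symptom you would expect.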
>> Regards,
>>
>> -Brent
>>
>>
>>
>> Existing Clusters:
>>
>> Test: Luminous 12.2.7 with 3 osd servers, 1 mon/man, 1 gateway (all virtual)
>>
>> US Production: Firefly with 4 osd servers, 3 mons, 3 gateways behind
>>
___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>> ___
>> ceph-users mailing
> "max_marker": "0#",
> "usage": {
>     "rgw.none": {
>         "size_kb": 0,
>         "size_kb_actual": 0,
>         "num_objects": 0
>     },
opening a bug.
>
> Thank you.
>
> --
> Senior Software Engineer Red Hat Storage, Ann Arbor, MI, US
> IRC: Aemerson@OFTC, Actinic@Freenode
> 0x80F7544B90EDBFB9 E707 86BA 0C1B 62CC 152C 7C12 80F7 544B 90ED BFB9
>
1/ 5 osd
>>0/ 5 optracker
>>0/ 5 objclass
>>1/ 3 filestore
>>1/ 3 journal
>>0/ 5 ms
>>1/ 5 mon
>>0/10 monc
>>1/ 5 paxos
>>0/ 5 tp
>>1/ 5 auth
>>1/ 5 crypto
>>1/ 1 finisher
>>
>> objects total, with 32 index shards on a
>> replicated ssd pool. It shouldn't be taking this long but I can't
>> imagine what could be causing this. I haven't found any others behaving
>> this way. I'd think it has to be some problem with the bucket index, but
>> what...?
>
> looking at the physical
> storage.
>
> Any ideas where to look next?
>
> thanks for all the help.
> https://github.com/ceph/ceph/pull/23994
>
> Sooo... bit complicated, fix still pending.
>
> Cheers,
> Florian
think.)
>
> Cheers
> Florian
>
Is it not possible to allow testuser to only write in folder2?
>
>
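Not from the thread, but for context: on RGW versions with bucket policy support, restricting a user to writing under one prefix is typically done with an S3 bucket policy. A sketch, assuming RGW's documented principal ARN format for local users (bucket, uid, and prefix below are the names from the question):

```python
import json

def prefix_write_policy(bucket, uid, prefix):
    """Build an S3 bucket policy allowing one RGW user to put objects
    only under a single prefix of the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # RGW local users are addressed as arn:aws:iam:::user/<uid>
            "Principal": {"AWS": [f"arn:aws:iam:::user/{uid}"]},
            "Action": ["s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
        }],
    }
    return json.dumps(policy)
```

The resulting JSON would be applied with a PutBucketPolicy call (e.g. `s3cmd setpolicy`); anything outside the prefix is then denied by default for that user.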
> > > I would assume then that, unlike what the documentation says, it's safe
> > > to run 'reshard stale-instances rm' on a multi-site setup.
> > >
> > > However it is quite telling if the author of this feature doesn't
> > > trust what they have written to work correctly.
>
> ?
>
>
> Thanks,
>
> k
>
> [1] https://tracker.ceph.com/issues/24011
>
>
>
Yes, sorry to misstate that. I was conflating with lifecycle
configuration support.
Matt
On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin wrote:
>
> On 3/14/19 8:58 PM, Matt Benjamin wrote:
> > Sorry, object tagging. There's a bucket tagging question in another thread
Sorry, object tagging. There's a bucket tagging question in another thread :)
Matt
On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin wrote:
>
> Hi Konstantin,
>
> Luminous does not support bucket tagging--although I've done Luminous
> backports for downstream use, and would be
log this in stdout of the daemon "radosgw"?
>
> It seems to me impossible to put ops log in the stdout of the "radosgw"
> process (or, if it's possible, I have not found). So I have made a
> workaround. I have set:
>
> rgw_ops_log_socket_path = /var/run/ceph/rgw-o
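I haven't verified the framing, but assuming the ops-log socket behaves like rgw's other output-data sockets (radosgw listens on the configured path and writes one JSON record per line to connected readers), a reader sketch could look like this. The path and field names below are assumptions, not documented guarantees:

```python
import json
import socket

def parse_ops_log_line(line):
    """Each ops-log record is assumed to arrive as one JSON document."""
    entry = json.loads(line)
    return entry.get("bucket"), entry.get("operation")

def tail_ops_log(path="/var/run/ceph/rgw-ops.sock"):
    # Connect to the UNIX socket radosgw exposes and stream records,
    # splitting on newlines and parsing each complete line.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    buf = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():
                yield parse_ops_log_line(line)
```

Such a reader could then forward each record to stdout or a log shipper, which is effectively the workaround described above.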
> "objects_per_shard": 55672,
>
> "fill_status": "OK"
>
> },
>
>
>
>
>
> We really don't know how to solve that; it looks like a timeout or slow
> performance for that bucket.
>
>
>
> Our RGW sect
42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http
> status=206
> 2019-05-03 15:37:28.959 7f4a68484700 1 == req done req=0x55f2fde20970 op
> status=-104 http_status=206 ==
>
>
> -----Original Message-----
> From: EDH - Manuel Rios Fern
> https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Tue, Jul 9, 2019 at 1:26 PM Matt Benjamin wrote:
>>
>> Hi Harald,
>>
>> Please file a tracker issue, yes. (Deletes do tend to be slow
summaries LinkedList overflows).
>
> Thoughts?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
>
> recreate this as I have to hit the cluster very hard to get it to start
> lagging.
>
> Thanks, Aaron
>
> > On Apr 12, 2019, at 11:16 AM, Matt Benjamin wrote:
> >
> > Hi Aaron,
> >
> > I don't think that exists currently.
> >
> > Matt
> >
>
ion before sending the email.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
>
> -Original Message-
> From: Matt Benjamin [mailto:m
> problems with a multi-user environment of RGW and nfs-ganesha.
>
radosgw-admin --tenant tnt2 --uid usery --display-name "tnt2-usery" \
--access_key "useryacc" --secret "test456" user create
Remember that to make use of this feature, you need recent librgw and
matching nfs-ganesha. In particular, Ceph should have, among other
changes:
commit 65