On Mon, May 4, 2020 at 05:14, Void Star Nill wrote:
> One of the use cases (e.g. machine learning workloads) for RBD volumes in
> our production environment is that, users could mount an RBD volume in RW
> mode in a container, write some data to it and later use the same volume in
> RO mode into a
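One way to do the read-only side is to map the image with the kernel
client's read-only flag; a minimal sketch, assuming a hypothetical image
mypool/trainingset:

    # Writer side: map, mount, write, then unmap cleanly.
    rbd device map mypool/trainingset
    mount /dev/rbd0 /mnt/data        # write the data, then:
    umount /mnt/data && rbd device unmap /dev/rbd0

    # Reader side: map read-only and mount with -o ro.
    rbd device map --read-only mypool/trainingset
    mount -o ro /dev/rbd0 /mnt/data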
On Sun, May 3, 2020 at 13:23, Lee, H. (Hurng-Chun) wrote:
> Hello,
>
> We use purely cephfs in our ceph cluster (version 14.2.7). The cephfs
> data is an EC pool (k=4, m=2) with hdd OSDs using bluestore. The
>
EC 4+2 == 50% overhead
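Spelled out: with k=4 data chunks and m=2 coding chunks, every 4 units of
data occupy 6 units of raw space, so

    overhead        = m/k     = 2/4 = 50%
    usable fraction = k/(k+m) = 4/6 ≈ 67% of raw capacity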
>
> What triggered my attention is the discrepancy between
On Thu, Apr 23, 2020 at 08:49, Darren Soothill <darren.sooth...@suse.com> wrote:
> If you want the lowest cost per TB then you will be going with larger
> nodes in your cluster, but it does mean your minimum cluster size is going
> to be many PBs.
> Now the question is what is the tax
On Tue, Apr 21, 2020 at 07:29, Eric Ivancich wrote:
> Please be certain to read the associated docs in both:
>
> doc/radosgw/orphans.rst
> doc/man/8/rgw-orphan-list.rst
>
> so you understand the limitations and potential pitfalls. Generally this
> tool will be a precursor to a
On Wed, Apr 15, 2020 at 21:01, Mathew Snyder <mathew.sny...@protonmail.com> wrote:
> I'm running into a problem that I've found around the Internet, but for
> which I'm unable to find a solution:
> $ sudo radosgw-admin user info
> could not fetch user info: no user info saved
>
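That error usually just means radosgw-admin was not told which user to look
up; user info wants an explicit uid (the uid below is a made-up example):

    sudo radosgw-admin user list                # shows the known uids
    sudo radosgw-admin user info --uid=johndoe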
On Fri, Apr 3, 2020 at 10:20, Micha wrote:
> Thanks for the answer, I had hoped so, but I cannot figure out how to
> configure it.
>
Now, what I meant, but perhaps didn't write, was:
1. Bluestore is the default; Filestore is the older backend, not the default
anymore.
2. Filestore of course used whole
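If the goal is just a test rig without dedicating whole disks, one hedged
workaround is to back a bluestore OSD with a file via a loop device (paths
and size here are made up, and this is only sensible for a lab):

    truncate -s 20G /var/lib/ceph-test/osd0.img
    losetup /dev/loop0 /var/lib/ceph-test/osd0.img
    ceph-volume lvm create --data /dev/loop0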
On Fri, Apr 3, 2020 at 10:11, Micha wrote:
> I want to try using object storage with java.
> Is it possible to set up osds with "only" directories as data destination
> (using cephadmin), instead of whole disks? I have read through much of the
> documentation but didn't find how to do it (if it's
On Sun, Mar 15, 2020 at 14:06, Виталий Филиппов wrote:
> WAL is 1G (you can allocate 2 to be sure), DB should always be 30G. And
> this doesn't depend on the size of the data partition :-)
>
DB should be either 3, 30 or 300 GB, depending on how much you can spare on
the fast devices. 30 is probably
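The 3/30/300 figures fall out of rocksdb's default level sizing (256 MB base
level, 10x multiplier per level), where a DB partition only gets fully used
if it can hold a whole ladder of levels:

    L1 ≈ 0.25 GB
    L2 ≈ 2.5 GB    ->  ~3 GB partition holds L1+L2
    L3 ≈ 25 GB     ->  ~30 GB partition holds L1..L3
    L4 ≈ 250 GB    ->  ~300 GB partition holds L1..L4

Space between two thresholds is largely wasted, which is why 30 GB is the
common middle choice.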
On Thu, Mar 12, 2020 at 18:58, Janek Bevendorff <janek.bevendo...@uni-weimar.de> wrote:
> Hi Caspar,
>
> NTPd is running, all the nodes have the same time to the second. I don't
> think that is the problem.
>
Mons want their clocks to agree within 50 ms (mon_clock_drift_allowed), so
"to the second" is a bit too vague perhaps.
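A quick way to check what the mons themselves measure:

    ceph time-sync-status               # per-mon clock offsets
    ceph health detail | grep -i clock  # names mons past the drift limit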
On Thu, Mar 5, 2020 at 08:13, Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote:
> >> Hrm. We have checksums on the actual OSD data, so it ought to be
> >> possible to add these to the export/import/diff bits so it can be
> >> verified faster.
> >> (Well, barring bugs.)
> >>
> > I
On Tue, Mar 3, 2020 at 21:48, Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote:
> > You can use a full local export, piped to some hash program (this is
> > what Backurne¹ does): rbd export - | xxhsum
> > Then, check the hash consistency with the original
>
> Thanks for the suggestion
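For reference, a minimal sketch of that check, with placeholder pool, image
and snapshot names:

    # Source cluster: hash the full image stream.
    rbd export mypool/myimage@snap1 - | xxhsum

    # Backup side: hash the stored copy the same way and compare.
    xxhsum /backups/myimage@snap1.raw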
On Wed, Feb 19, 2020 at 09:42, Jacek Suchenia wrote:
> Hello Wido
>
> Sure, here is a rule:
> -15 s3  3.53830  host kw01sv09.sr1.cr1.lab1~s3
>  11 s3  3.53830      osd.11
> -17 s3  3.53830  host kw01sv10.sr1.cr1.lab1~s3
>  10 s3  3.53830
On Fri, Feb 14, 2020 at 23:02, EDH - Manuel Rios <mrios...@easydatahost.com> wrote:
> Honestly, not having a function to rename a bucket in radosgw-admin is
> like not having a function to copy or move. It is something basic; without
> it, the workaround is to create a new bucket and move
>
>
> The problem is that rsync creates and renames files a lot. When doing
> this with small files it can be very heavy for the MDS.
>
Perhaps run rsync with --inplace to prevent it from re-creating partial
files as a temp entity named something like .dfg45terf.~tmp~ and then
renaming it into the correct
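A sketch of such an invocation (paths are placeholders; note that --inplace
gives up rsync's atomic tempfile-and-rename update, so readers may see
half-written files during the transfer):

    # Update files in place, sparing the MDS one create and
    # one rename per file.
    rsync -a --inplace /source/dir/ /cephfs/target/dir/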
On Thu, Feb 6, 2020 at 15:06, Mario Giammarco wrote:
> Hello,
> if I have a pool with replica 3 what happens when one replica is corrupted?
>
The PG on which this happens will turn from active+clean to
active+inconsistent.
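From there, scrub output and repair are the usual tools; the pg id below is
just an example:

    # See which objects scrub flagged as inconsistent.
    rados list-inconsistent-obj 2.1f --format=json-pretty

    # Tell the OSDs to rebuild the bad replica from the good copies.
    ceph pg repair 2.1f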
> I suppose ceph detects the bad replica using checksums and replaces it
>
>
> For object gateway, the performance numbers were obtained with
> `swift-bench -t 64`, which uses 64 threads concurrently. Will the radosgw
> and http overhead be so significant (94.5MB/s to 26MB/s for cluster1) when
> multiple threads are used? Thanks in advance!
>
>
Can't say what it "must" be, but if I log
On Wed, Feb 5, 2020 at 17:27, Vladimir Prokofev wrote:
> Thank you for the insight.
> > If you're using the default options for rocksdb, then the size of L3 will
> be 25GB
> Where does this number come from? Any documentation I can read?
> I want to have a better understanding of how DB size is
On Wed, Feb 5, 2020 at 16:19, quexian da wrote:
> Thanks for your valuable answer!
> Is the write cache specific to ceph? Could you please provide some links
> to the documentation about the write cache? Thanks!
>
>
It is all the possible caches used by ceph, by the device driver, the
filesystem
On Wed, Feb 5, 2020 at 11:14, quexian da wrote:
> Hello,
>
> I'm a beginner on ceph. I set up three ceph clusters on google cloud.
> Cluster1 has three nodes and each node has three disks. Cluster2 has three
> nodes and each node has two disks. Cluster3 has five nodes and each node
> has five
On Mon, Feb 3, 2020 at 08:25, Wido den Hollander wrote:
> > The crash happens, when the osd wants to read from pipe when processing
> > heartbeat. To me it sounds like a networking issue.
>
> It could also be that this OSD is so busy internally with other stuff
> that it doesn't respond to
On Thu, Jan 30, 2020 at 15:29, Adam Boyhan wrote:
> We are looking to roll out an all-flash Ceph cluster as storage for our
> cloud solution. The OSDs will be on slightly slower Micron 5300 PROs,
> with WAL/DB on Micron 7300 MAX NVMes.
> My main concern with Ceph being able to fit the bill is
On Tue, Jan 28, 2020 at 17:34, Zorg wrote:
> Hi,
>
> we are planning to use EC
>
> I have 3 questions about it
>
> 1/ What is the advantage of having more machines than (k + m)? We are
> planning to have 11 nodes and use k=8 and m=3. Does it improve
> performance to have more nodes than k+m? of
On Wed, Jan 22, 2020 at 18:01, Wesley Dillingham wrote:
> After upgrading to Nautilus 14.2.6 from Luminous 12.2.12 we are seeing the
> following behavior on OSDs which were created with "ceph-volume lvm create
> --filestore --osd-id --data --journal "
>
> Upon restart of the server containing
On Wed, Jan 22, 2020 at 16:30, Robert LeBlanc wrote:
> In the last release of Jewel [0] it mentions that omap data can be stored
> in rocksdb instead of leveldb. We are seeing high latencies from compaction
> of leveldb on our Jewel cluster (can't upgrade at this time). I installed
> the latest
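If memory serves, the relevant knob is filestore_omap_backend, set before
the OSD is created (treat this as an assumption worth verifying against the
Jewel docs):

    # ceph.conf on the OSD host, prior to provisioning the OSD
    [osd]
    filestore_omap_backend = rocksdb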
On Thu, Jan 9, 2020 at 17:16, Chad W Seys wrote:
> Hi all,
> In the era of Mimic, what are best practices for setting up cephfs on
> a hard-drive-only cluster?
> Our old cluster, which began life in Emperor and has been upgraded
> until now, is running Mimic. 21 hard drives ranging from 1 to 4
On Thu, Nov 28, 2019 at 16:15, Matthew Vernon wrote:
> Hi,
>
> > I'm pleased to announce after much discussion on the Ceph dev mailing
> > list [0] that the community has formed the Ceph Survey for 2019.
>
> The RGW questions include:
>
> "The largest object stored in gigabytes"
>
> Is there a
Three OSDs, holding the 3 replicas of a PG here, are only half-starting, and
hence that single PG gets stuck as "stale+active+clean".
All died of a suicide timeout while walking over a huge omap (pool 7
'default.rgw.buckets.index') and would not get the PG 7.b back online
again.
From the logs,
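One approach that has a chance of getting such OSDs past the omap walk is an
offline compaction while the OSD is stopped (osd id and paths below are
examples; pick the variant matching the object store):

    # Filestore:
    ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-12/current/omap compact
    # Bluestore:
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact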
On Mon, Oct 21, 2019 at 13:15, masud parvez wrote:
> I am trying to install ceph on Ubuntu 16.04 following this link:
> https://www.supportsages.com/ceph-part-5-ceph-configuration-on-ubuntu/
>
>
It's kind of hard to support someone else's documentation; you should really
have started by contacting them
On Wed, Oct 16, 2019 at 15:43, Daniel Gryniewicz wrote:
> S3 is not a browser friendly protocol. There isn't a way to get
> user-friendly output via the browser alone, you need some form of
> client that speaks the S3 REST protocol. The most commonly used one
> by us is s3cmd, which is a
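For reference, a minimal s3cmd session against an RGW endpoint could look
like this (endpoint and bucket names are placeholders):

    s3cmd --configure        # keys, and host_base pointed at the RGW host
    s3cmd mb s3://mybucket
    s3cmd put report.pdf s3://mybucket/
    s3cmd ls s3://mybucket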
On Tue, Oct 15, 2019 at 19:40, Nathan Fish wrote:
> I'm not sure exactly what would happen on an inode collision, but I'm
> guessing Bad Things. If my math is correct, a 2^32 inode space will
> have roughly 1 collision per 2^16 entries. As that's only 65536,
> that's not safe at all.
>
Yeah, the
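That matches the birthday bound: in a space of d = 2^32 values, collisions
become likely around sqrt(d) = 2^16 entries. Checking the approximation
p ≈ 1 - e^(-n^2/2d) at n = 65536:

    python3 -c "import math; n=2**16; d=2**32; print(1-math.exp(-n*n/(2*d)))"
    # ~0.39, i.e. a 39% chance of at least one collision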
On Thu, Oct 10, 2019 at 15:12, 潘东元 wrote:
> hi all,
> my osd hit suicide timeout.
>
> common/HeartbeatMap.cc: 79: FAILED assert(0 == "hit suicide timeout")
>
> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
>
> can you give some advice on troubleshooting?
>
It is a very
On Tue, Sep 17, 2019 at 15:15, Alfredo Deza wrote:
> Reviving this old thread.
> * When a release is underway, the repository breaks because syncing
> packages takes hours. The operation is not atomic.
>
Couldn't they be almost atomic?
I believe both "yum" and "apt" would only consider rpms/debs
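Sketching how that could work, assuming the mirror is plain files behind a
web server: sync all packages first, rebuild the metadata in a staged tree,
and only then publish with one atomic rename (paths and version are
illustrative):

    rsync -a build/ /srv/repo/releases/14.2.7/
    createrepo_c /srv/repo/releases/14.2.7/      # or apt metadata for debs
    ln -s releases/14.2.7 /srv/repo/latest.new
    mv -T /srv/repo/latest.new /srv/repo/latest  # rename() is atomic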
On Tue, Sep 17, 2019 at 12:52, Yoann Moulin wrote:
> Hello,
>
> >>> Never install packages until there is an announcement.
> >>>
>
> My reaction was not about this specific release but about this sentence: «
> Never install packages until there is an announcement. » And also about
> this one: « If
On Wed, Sep 4, 2019 at 11:41, Amudhan P wrote:
> Hi,
> I am using ceph version 13.2.6 (mimic) on a test setup, trying out cephfs.
> My ceph health status is showing a warning.
>
> My current setup:
> 3 OSD nodes, each with a single disk; recently I added one more disk in one
> of the nodes, and the ceph cluster
On Fri, Aug 30, 2019 at 10:49, Amudhan P wrote:
> After leaving it for 12 hours, the cluster status is now healthy, but why
> did it take such a long time to backfill?
> How do I fine-tune it, in case the same kind of error pops up again?
>
> The backfilling is taking a while because max_backfills = 1 and
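If the extra recovery load is acceptable, that knob can be raised at
runtime; the value below is just an example:

    ceph config set osd osd_max_backfills 4
    # or, injected into running OSDs:
    ceph tell 'osd.*' injectargs '--osd-max-backfills 4'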
On Thu, Aug 29, 2019 at 16:04, wrote:
> Hi,
> I am new to ceph ... I am trying to increase the object file size. I can
> upload files up to 128MB; how can I upload a file larger than 128MB?
>
> I can upload a file using this:
> rados --pool z10 put testfile-128M.txt testfile-128M.txt
>
> That's ok
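That 128MB ceiling is rados' per-object cap, osd_max_object_size. On recent
releases it can be raised (the example below doubles it), though objects
that large usually mean the data should be striped over smaller objects
instead:

    ceph config set osd osd_max_object_size 268435456   # 256 MiB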