? Something like Ceph Gateway, but without the
specialised client requirement.
Has anyone seen something along these lines, or am I being too vague?
Regards,
--
Stuart Longland
Systems Engineer
. The latter (spice) was a
little broken for me under Ubuntu, but I've used it on Gentoo without
issue, so I figure this is more an Ubuntu thing than a non-Red Hat thing.
If Red Hat were to pull an Oracle on us, it wouldn't do any good. Just
look where it got OpenOffice. :-)
Regards,
--
Stuart Longland
Ceph, so that
the XenServer and VMware VM disk images are stored in Ceph storage
over NFS?
Any reason why you'd go NFS? VMware is capable of talking iSCSI to a
Ceph node running stgt.
http://stuartl.longlandclan.yi.org/blog/2014/02/25/ceph-and-stgt/
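For the curious, the stgt side boils down to a handful of tgtadm calls,
roughly like this (the IQN and image name are made up, and it assumes tgt
was built with the rbd backing store):

  # create an iSCSI target (example IQN)
  tgtadm --lld iscsi --op new --mode target --tid 1 \
      --targetname iqn.2014-02.au.example:rbd-vmware
  # attach an RBD image (pool/image) as LUN 1 via the rbd backing store
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
      --bstype rbd --backing-store rbd/vmware-datastore
  # allow initiators to connect (lock this down properly in production)
  tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

VMware's software iSCSI initiator can then be pointed at the host running
tgt like any other target.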
Regards,
--
Stuart Longland
Systems
RAM-only?
I see mention of cache-tiers, but these will be at the wrong end of the
Ethernet cable for my usage: I want the cache on the Ceph clients
themselves, not back at the OSDs.
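For RBD clients at least, librbd does have its own client-side cache,
configured in ceph.conf on the client machines; a minimal example (the
sizes are only illustrative):

  [client]
      rbd cache = true
      rbd cache size = 67108864                  # 64MB per client process
      rbd cache max dirty = 50331648             # dirty data cap before write-back
      rbd cache writethrough until flush = true  # stay safe until the guest flushes

It's RAM-backed and lives on the client, so it's at the right end of the
Ethernet cable; whether it can be made big enough is another question.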
Regards,
--
Stuart Longland
Systems Engineer
so even if it doesn't explicitly support it.
--
Stuart Longland
Systems Engineer
as the OSDs
shouldn't affect things, should it?
Regards,
--
Stuart Longland
Contractor
to
figure out what needs changing, especially if others are possibly doing
this already.
Regards,
--
Stuart Longland
Systems Engineer
misconfigured? Anyway, if someone could
fix this, it would be swell.
On nearly every list I've been on[1], Reply sends to the poster.
Reply-List or Reply-All sends to the list (and the poster).
I believe this is the recommended convention.
Regards,
--
Stuart Longland
Systems Engineer
} hosts, then choose an
OSD on each host at random
- emit: dump the objects at those locations
That was the default, and if my understanding is correct, it'll do what
I'm after.
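For reference, the stock rule being described decompiles to roughly the
following (pool and bucket names can differ between clusters):

  rule replicated_ruleset {
      ruleset 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }

The "chooseleaf firstn 0 type host" step is what gives "one OSD per host".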
Regards,
--
Stuart Longland
Systems Engineer
is buggy
with the flashcache setup I'm using. Time for experiments,
methinks.)
--
Stuart Longland
Contractor
hosts? Do
the clients potentially deadlock with ceph-mon, ceph-ods or both?
Apologies if this is covered somewhere, I have been looking on-and-off
over the last few months but haven't spotted much on the topic.
Regards,
--
Stuart Longland, Software Engineer
. Having Ceph
completely manage storage seems the preferable option.
Regards,
--
Stuart Longland, Software Engineer
On 17/04/13 10:53, Stuart Longland wrote:
rbd and cephfs (unless you use FUSE) do live in the kernel, but I'm not
sure about ceph-mon and ceph-ods.
Gah... s/ods/osd/g seems I'm having a dyslexic moment this morning. ;-)
--
Stuart Longland, Software Engineer
Hi all, apologies for the slow reply.
Been flat out lately and so any cluster work has been relegated to the
back-burner. I'm only just starting to get back to it now.
On 06/06/14 01:00, Sage Weil wrote:
On Thu, 5 Jun 2014, Wido den Hollander wrote:
On 06/05/2014 08:59 AM, Stuart Longland
again as a new pool is added some time later.
Is there a way of tuning the number of placement groups without
destroying data?
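From my reading of the docs, pg_num on an existing pool can be raised in
place without recreating it, along these lines (pool name and counts are
examples), though confirmation from someone who has done it would be nice:

  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256   # needed so data actually rebalances onto the new PGs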
Regards,
--
Stuart Longland - Systems Engineer
r (ceph-all doesn't stop this, oddly enough)
and OSDs, then wait for the recovery to complete before moving onto the
next (final) node.
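For the "wait" step, a simple loop on ceph health seems to do the job,
something like:

  # after stopping a node's daemons, don't touch the next node until the
  # cluster has settled back to HEALTH_OK (all PGs active+clean)
  until ceph health | grep -q HEALTH_OK; do sleep 30; done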
--
Stuart Longland - Systems Engineer
On 05/01/16 07:52, Stuart Longland wrote:
>> I ran into this same issue, and found that a reboot ended up setting the
>> ownership correctly. If you look at /lib/udev/rules.d/95-ceph-osd.rules
>> you'll see the magic that makes it happen
> Ahh okay, good-o, so
the BIOS to switch between UEFI and legacy mode, and UEFI is required
for booting GPT.
--
Stuart Longland - Systems Engineer
to allow for an additional EFI boot partition,
and possibly changes to boot firmware settings and bootloaders too.
I'll look into a udev rule for our particular case and see how I go.
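Failing anything cleverer, even a crude rule keyed on the device names
would probably do for our boxes; something like this (entirely
hypothetical, the devices vary per host):

  # /etc/udev/rules.d/96-ceph-local.rules
  # hand-deployed OSD data partitions on this host
  KERNEL=="sdb2", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"
  KERNEL=="sdc2", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"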
--
Stuart Longland - Systems Engineer
On 02/05/16 10:20, Robin H. Johnson wrote:
> On Sun, May 01, 2016 at 08:46:36PM +1000, Stuart Longland wrote:
>> Hi all,
>>
>> This evening I was in the process of deploying a ceph cluster by hand.
>> I did it by hand because to my knowledge, ceph-deploy doesn't suppor
can put up
with a little downtime. If there's data loss though, then no, that's
not good.
--
Stuart Longland
Systems Engineer
ice systemd job for that.
>
> Just place that dir with file in same location on OSD hosts and you'll
> be able to activate OSDs.
Yeah, in my case the OSD hosts are the MON hosts, and there was no such
file or directory created on any of them. Monitors were running at the
time.
--
Stuart Longland
seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.
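For anyone else who trips over this: my understanding is the keyring that
ceph-disk activate looks for can be exported from the cluster like so
(paths assume the default layout):

  # run on a node with admin credentials, then copy to each OSD host
  ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
  # or, if the key doesn't exist yet, create it with the usual capability:
  ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
      -o /var/lib/ceph/bootstrap-osd/ceph.keyring

Happy to be corrected if there's a more official way.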
Regards,
--
Stuart Longland
Systems Engineer
On 31/07/17 19:10, Wido den Hollander wrote:
>
>> Op 30 juli 2017 om 2:42 schreef Stuart Longland <stua...@longlandclan.id.au>:
>> As a result, I see messages like this from clients:
>>> oneadmin@opennebula:~$ rados df --id libvirt
>>> 2017-07-30 09:58:32
6 and support both simultaneously. Done
that in the past, and it has worked well.
Anyway, is there something I missed or has this been overlooked?
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
revent data corruption.
I'm not sure what happens to the lock when the client that requested it
dies either… Ideally it should die with the client, but something tells
me it hangs around. Again, the documentation does not say.
Could someone clarify the status of this?
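For reference, the advisory-lock commands in question look like this
(pool, image and lock names here are made up):

  rbd lock list rbd/my-image                            # shows the lock id and owning client.NNNN
  rbd lock add rbd/my-image my-lock-id                  # take the lock
  rbd lock remove rbd/my-image my-lock-id client.4567   # break it by hand if the owner is gone

It's that last step, and whether it ever happens automatically, that the
documentation is quiet about.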
--
Stuart Longland (aka Redhatter, VK4MSL)
osd-dir` dance.
I think mounting tmpfs for something that should be persistent is highly
dangerous. Is there some flag I should be using when creating the
BlueStore OSD to avoid that issue?
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
boot (so it would bring up all the volume groups) and that there was a
UDEV rule in place to set the ownership on the LVM VGs for Ceph.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
ied it long term and have any comments about
reliability/performance?
Regards,
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
On 18/5/19 11:34 am, huang jun wrote:
> Stuart Longland wrote on Saturday, 18 May 2019 at 9:26 am:
>>
>> On 16/5/19 8:55 pm, Stuart Longland wrote:
>>> As this is Bluestore, it's not clear what I should do to resolve that,
>>> so I thought I'd "RTFM" before asking here:
one get
> 7:581d78de:::rbd_data.b48c7238e1f29.1b34:head -o obj
> Invalid value for object-size: strict_iecstrtoll: illegal prefix (length > 2)
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
On 16/5/19 8:55 pm, Stuart Longland wrote:
> As this is Bluestore, it's not clear what I should do to resolve that,
> so I thought I'd "RTFM" before asking here:
> http://docs.ceph.com/docs/luminous/rados/operations/pg-repair/
>
> Maybe there's a secret hand-shake my
mesg` though, and `smartctl`
does not report any read errors; however, the disks are getting on for 3 years
old now.
I've got one of the other former OSD disks busy doing some self-tests
now to see if that uncovers anything.
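For the record, the sort of thing I mean (device name is an example):

  smartctl -t long /dev/sdd      # kick off an extended self-test
  smartctl -l selftest /dev/sdd  # check on its progress/result later
  smartctl -A /dev/sdd           # attributes: reallocated/pending sector counts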
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
/rados/operations/pg-repair/
Maybe there's a secret hand-shake my web browser doesn't know about or
maybe the page is written in invisible ink, but that page appears blank
to me.
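In case the page stays blank for others too, my understanding of the
procedure it's supposed to describe is roughly this (the PG id is an
example):

  ceph health detail                                      # lists which PGs are inconsistent
  rados list-inconsistent-obj 2.5f --format=json-pretty   # needs a recent deep-scrub
  ceph pg repair 2.5f                                     # ask the primary OSD to repair the PG

Corrections welcome if that's off the mark.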
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a few
OpenBSD VMs for things like routers between virtual networks.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
On 19/7/19 8:21 pm, Stuart Longland wrote:
> I'm now getting about 5MB/sec I/O speeds in my VMs.
>
> I'm contemplating whether I migrate back to using Filestore (on XFS this
> time, since BTRFS appears to be a rude word despite Ceph v10 docs
> suggesting it as a good option), b
few years or
should I persevere with tuning Bluestore to get something that won't be
outperformed by an early 90s PIO mode 0 IDE HDD?
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
e faster, but
it's hard to know for sure.
Maybe I should try migrating back to Filestore and see if that improves
things?
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
On 23/7/19 9:59 pm, Stuart Longland wrote:
> I'll do some proper measurements once the migration is complete.
A starting point (I accept more rigorous disk storage tests exist):
> virtatomos ~ # hdparm -tT /dev/vdb
>
> /dev/vdb:
> Timing cached reads: 2556 MB in 1.99 seconds =
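When I get a chance, something like fio should give a more honest picture
than hdparm; perhaps along these lines (parameters are only a starting
guess, and note the write test scribbles over the target device):

  fio --name=seqread --filename=/dev/vdb --rw=read --bs=4M \
      --direct=1 --ioengine=libaio --iodepth=4 --runtime=60 --time_based
  fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based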
ct on read performance but could be
quite dangerous if the VM host were to go down immediately after a write
for any reason.
While 60MB/sec is getting respectable, doing so at the cost of data
safety is not something I'm keen on.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
could get with a 2TB storage capacity for a reasonable price.
I'm not against moving to Bluestore; however, I think I need to research
it better to understand why the performance I was getting before was so
poor.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
KVM) I have one Supermicro A1SAi-2750F and one
A2SDi-16C-HLN4F, both with 32GB RAM.
https://hackaday.io/project/10529-solar-powered-cloud-computing
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.
ritten objects, that could help.
In this topology though, I may only be using 256GB or 512GB SSDs, so much
less storage on SSDs than the HDDs, which likely won't work that well for
tiering (https://ceph.com/planet/ceph-hybrid-storage-tiers/). So it'll
need some planning and homework. :-)
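If I do go down that road, my understanding is the cache-tier plumbing
itself is fairly short, something like the following (pool names are
hypothetical, and the cache pool still needs a CRUSH rule pinning it to
the SSDs):

  ceph osd pool create ssd-cache 128 128
  ceph osd tier add rbd-hdd ssd-cache
  ceph osd tier cache-mode ssd-cache writeback
  ceph osd tier set-overlay rbd-hdd ssd-cache
  ceph osd pool set ssd-cache hit_set_type bloom
  ceph osd pool set ssd-cache target_max_bytes 200000000000   # ~200GB, example cap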
FileStore
-only setups :)
> https://github.com/ceph/ceph/pull/26909
Sounds like BlueStore may be worth a look once I move to Ceph v14 some
time in the future, which is eventually on my TO-DO list.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.