OK,
I take your points.
I'll do my best.
On 13/06/2018 10:14, Lenz Grimmer wrote:
On 06/12/2018 07:14 PM, Max Cuttins wrote:
it's an honor for me to contribute to the main repo of Ceph.
We appreciate your support! Please take a look at
http://docs.ceph.com/docs/master/start/docume
imagine you would need to do
something similar to what is documented here [1].
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/
On Wed, Jun 13, 2018 at 5:11 AM, Max Cuttins wrote:
I just realized there is an error:
multipath -r
Jun 13 11:02:27 | rbd0: HDIO_GETGEO fail
blacklist {
    devnode "^(rbd)[0-9]*"
}
This solved the error, but the gateway list is still not updated.
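For reference, a minimal sketch of the multipath.conf change implied above, assuming the intent is to keep multipathd away from kernel rbd devices (the file path and exact stanza may differ per setup):

```shell
# Hypothetical fix: append a blacklist stanza for kernel rbd devices,
# which do not support the HDIO_GETGEO ioctl, then reload the maps.
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "^(rbd)[0-9]*"
}
EOF
multipath -r
```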
On 13/06/2018 10:59, Max Cuttins wrote:
Hi everybody,
maybe I'm missing something, but multipath is not adding new iSCSI gateways.
I have installed 2 gateways and tested them on a client.
Everyth
Hi everybody,
maybe I'm missing something, but multipath is not adding new iSCSI gateways.
I have installed 2 gateways and tested them on a client.
Everything worked fine.
After that I decided to complete install and create a 3rd gateway.
But none of the iSCSI initiator clients updates its number of gateways.
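One possible cause is that existing initiators never re-run discovery. A hedged sketch of nudging a client to see the new gateway; the portal IP is a placeholder, and the exact steps depend on the initiator setup:

```shell
# Hypothetical portal address of the newly added (3rd) gateway.
NEW_GW=192.168.1.13

# Re-run sendtargets discovery so the initiator learns about the new
# portal, then log in to it and rebuild the multipath maps.
iscsiadm -m discovery -t sendtargets -p "$NEW_GW"
iscsiadm -m node -p "$NEW_GW" --login
multipath -r
```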
O
een xen servers and ceph osds)
o have librbd (rbd-nbd, ceph-client) workload on every xenserver dom-0
=> better scaling: more xen servers -> better overall performance
o utilize rbd cache (there is nothing comparable in XENServer 7.2
Community)
o use
Hi everybody,
I have a running iSCSI-Ceph environment that connects to XenServer 7.2.
I have some doubts and rookie questions about iSCSI.
1) Xen refused to connect to the iSCSI gateway because I hadn't turned on
multipath on Xen.
To me that's OK. But is it right to say that multipath is much more than just
Thanks Jason,
it's an honor for me to contribute to the main repo of Ceph.
Just a thought: is it wise to keep the docs within the software repo?
Wouldn't it be better to move the docs to a less sensitive repo?
On 12/06/2018 17:02, Jason Dillaman wrote:
On Tue, Jun 12, 2018 at 5:08 AM, Max Cuttins wrot
/06/2018 16:07, Max Cuttins wrote:
Thanks!
I just saw it.
I found all the packages with a deep search on the web.
wget
http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/python-rtslib-2.1.fb64-0.1.20170301.git3637171.el7cp.src.rpm
wget
http
This is an amazing tool.
I just configured iSCSI multipath in about 5 minutes.
Kudos to all who developed this tool.
It's simple, clear, and colorful.
Thanks,
Max
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com
es/22143
On Mon, Jun 11, 2018 at 8:08 PM, Max Cuttins wrote:
Really? :)
So nobody on this huge mailing list has ever installed iSCSI and hit these
errors before me.
Wow, sounds like I'm a pioneer here.
The installation guide goes wrong at the very beginning.
Even if it'
's a pity to do this if those packages really are available
somewhere.
http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli-manual-install/
I'm confused.
Any help will be appreciated.
Thanks
On 10/06/2018 17:07, Max Cuttins wrote:
Hi everybody,
I'm following the docum
‘ceph-volume lvm zap --destroy’ on the target machine would have
removed the LVM mapping.
On Jun 10, 2018, 14:41 +0300, Max Cuttins wrote:
I solved it by myself.
I'm writing my findings here to save others some working hours.
It sounds strange that nobody knew this.
The issue is that the data is purged but the LV
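Following the hint above, a minimal sketch of wiping the leftover LVM state from a former OSD disk; the device name is a placeholder, and this irreversibly destroys its contents:

```shell
# Hypothetical former OSD device still holding ceph-volume LVM metadata.
# --destroy removes the LV/VG/PV so a new deployment can reuse the disk.
ceph-volume lvm zap --destroy /dev/sdb
```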
Hi everybody,
I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an error:
yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config
ceph-iscsi-cli
Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest
version
Do it for all disks.
Now you can run *ceph-deploy osd create* correctly without being
told that the disk is in use.
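As a sketch, with hypothetical host and device names, the sequence could look like this (repeating the zap for every disk first):

```shell
# Wipe the leftover state on each disk of the target host...
ceph-deploy disk zap ceph01 /dev/sdb
# ...then OSD creation should proceed without the "disk in use" complaint.
ceph-deploy osd create --data /dev/sdb ceph01
```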
On 06/06/2018 19:41, Max Cuttins wrote:
Hi everybody,
I would like to start from zero.
However, the last time I ran the command to purge everything, I got an issue.
I'm running a new installation of MIMIC:
#ceph-deploy disk list ceph01
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ]
Hi everybody,
I would like to start from zero.
However, the last time I ran the command to purge everything, I got an issue.
I had a completely cleaned-up system as expected, but the disks were still OSDs
and the new installation refused to overwrite disks in use.
The only way to make it work was to manually form
Hi Jason,
I really don't want to stress this more than I already have.
But I need to have a clear answer.
On 28/03/2018 13:36, Jason Dillaman wrote:
But I don't think that CentOS 7.5 will use kernel 4.16... so are you
telling me that the new feature will be backported to the 3.* kernel?
On 27/03/2018 13:46, Brad Hubbard wrote:
On Tue, Mar 27, 2018 at 9:12 PM, Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
Hi Brad,
that post was mine; I know it quite well.
That post was about confirming the fact that the minimum requirements
written in the docu
hould be accepted by those who choose to use them). If
you want the bleeding edge, then RHEL/CentOS should not be your
platform of choice.
On Tue, Mar 27, 2018 at 7:04 PM, Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
Thanks Jason,
this is exactly what I read around, and
been released yet, but it should be released very
soon. After it's released, it usually takes the CentOS team a little
time to put together their matching release. I also suspect that Linux
kernel 4.16 is going to be released in the next week or so as well.
On Sat, Mar 24, 2018 at 7:36 AM, M
ce=hp&q=where+can+i+download+centos+7.5&oq=where+can+i+download+centos+7.5
-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it]
Sent: Saturday, 24 March 2018 12:36
To: ceph-users@lists.ceph.com
Subject: [ceph-users] where is it possible download CentOS 7.5
As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?
Thanks
iSCSI Targets
Traditionally, block-level access to a Ceph storage cluster has been
limited to QEMU and |librbd|, which is a key enabler for adoption within
OpenStack environmen
Hi everybody,
has anybody used PetaSAN?
On its website it claims to provide Ceph with ready-to-use iSCSI.
Has somebody already tried it?
Experience?
Thoughts?
Reviews?
Doubts?
Pros?
Cons?
Thanks for any thoughts.
Max
On 06/03/2018 16:23, David Turner wrote:
That said, I do like the idea of being able to disable buckets, rbds,
pools, etc so that no client could access them. That is useful for
much more than just data deletion and won't prevent people from
deleting data prematurely.
To me, if nobody ca
On 06/03/2018 16:15, David Turner wrote:
I've never deleted a bucket, pool, etc at the request of a user that
they then wanted back because I force them to go through a process to
have their data deleted. They have to prove to me, and I have to
agree, that they don't need it before I'll d
On 06/03/2018 11:13, Ronny Aasen wrote:
On 6 March 2018 10:26, Max Cuttins wrote:
On 05/03/2018 20:17, Gregory Farnum wrote:
You're not wrong, and indeed that's why I pushed back on the latest
attempt to make deleting pools even more cumbersome.
But having a "
On 05/03/2018 20:17, Gregory Farnum wrote:
You're not wrong, and indeed that's why I pushed back on the latest
attempt to make deleting pools even more cumbersome.
But having a "trash" concept is also pretty weird. If admins can
override it to just immediately delete the data (if they n
What about using the at command:
echo "ceph osd pool rm --yes-i-really-really-mean-it" | at now + 30 days
Regards,
Alex
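A sketch of how such a deferred deletion could be inspected or cancelled with the standard at tooling; the pool name is a placeholder and this assumes atd is running:

```shell
# Queue the deletion 30 days out; `at` reads the command from stdin.
echo "ceph osd pool rm mypool mypool --yes-i-really-really-mean-it" | at now + 30 days
# List the pending jobs (the first column is the job number).
atq
# Cancel a queued job by its number, e.g. job 5.
atrm 5
```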
How do you know that this command is scheduled?
How do you delete the scheduled command once it has been cast?
This is weird. We need something within Ceph that lets you see the
"status
perfect
On 02/03/2018 19:18, Igor Fedotov wrote:
Yes, by default BlueStore reports 1Gb per OSD as used by BlueFS.
On 3/2/2018 8:10 PM, Max Cuttins wrote:
Umh,
taking a look at your computation, I think the overhead per OSD is
really about 1.1 GB.
Because I have 9
, mon4
osd: 289 osds: 289 up, 289 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 323 GB used, 2324 TB / 2324 TB avail
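As a sanity check on the per-OSD overhead discussed in this thread, the quoted status line is consistent with roughly 1.1 GB per OSD: 323 GB across 289 OSDs is about 1144 MB each. In shell arithmetic:

```shell
# 323 GB used, 289 OSDs: MB per OSD (integer division).
echo $(( 323 * 1024 / 289 ))
```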
On Fri, Mar 2, 2018, 6:25 AM Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
How can I analyze this?
On 02/03/2018 12:18, Gonzalo Aguila
On 02/03/2018 13:27, Federico Lucifredi wrote:
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
Hi Federico,
Hi Max,
On Feb 28, 2018, at 10:06 AM, Max Cuttins
mailto:m...@phoenixweb.it>> wrote:
How can I analyze this?
On 02/03/2018 12:18, Gonzalo Aguilar Delgado wrote:
Hi Max,
No, that's not normal: 9 GB for an empty cluster. Maybe you reserved
some space, or you have another service that's taking the space. But it
seems way too much to me.
On 02/03/18 at 12:09, M
I don't care about getting that space back.
I just want to know whether it's expected or not.
Because I ran several rados bench tests with the --no-cleanup flag,
and maybe I left something behind.
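If leftover rados bench objects are the cause, a hedged sketch of removing them; the pool name is a placeholder:

```shell
# Delete the benchmark objects that `rados bench ... --no-cleanup`
# left behind in the given pool.
rados -p mypool cleanup
```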
On 02/03/2018 11:35, Janne Johansson wrote:
2018-03-02 11:21 GMT+01:00 Max Cuttins
Hi everybody,
I deleted everything from the cluster after some tests with RBD.
Now I see that there is something still in use:
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: *9510 MB used*, 8038 GB / 8048 GB avail
pgs:
Is this the overhead of
Hi Federico,
Hi Max,
On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote:
This is true, but having something that just works in order to provide minimum
compatibility and start to retire old disks is something you should think about.
You'll have ages in order to improve and get b
I think this is a good question for everybody: how hard should it be to delete
a pool?
We ask the user to type the pool name twice.
We ask them to add "--yes-i-really-really-mean-it".
We ask to grant the mons the ability to delete the pool (and remove this
ability ASAP afterwards).
... and then somebody of course asks us to restor
require restart' you
can check to see if it took effect or not by asking the daemon what
its setting is: `ceph daemon mon.ceph_node1 config get
mon_allow_pool_delete`.
On Thu, Mar 1, 2018 at 10:41 AM Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
I get:
#ceph daemon
the pool. This works for me without restarting
any services or changing config files.
Regards
Quoting Ronny Aasen:
On 1 March 2018 13:04, Max Cuttins wrote:
I was testing IO and I created a bench pool.
But when I tried to delete it I got:
Error EPERM: pool deletion is disabled; you must fir
wn, or you have to pay someone who can do the
work for you.
It's a long journey, but it seems like it is finally coming to an end.
On 03/01/2018 01:26 PM, Max Cuttins wrote:
It's obvious that Citrix is not believable anymore.
However, at least Ceph should have added iSCSI to its pl
On 28/02/2018 18:16, David Turner wrote:
My thought is that in 4 years you could have migrated to a hypervisor
that will have better performance with Ceph than an added iSCSI layer.
I won't deploy VMs for Ceph on anything that won't allow librbd to
work. Anything else is added complexity a
Xen by Citrix used to be a very good hypervisor.
However they used a very old kernel until 7.1.
The distribution doesn't allow you to add packages from yum, so you need
to hack it.
I have helped develop the installer of the unofficial plugin:
https://github.com/rposudnevskiy/RBDSR
However
On 1 March 2018 at 00:37, Max Cuttins <mailto:m...@phoenixweb.it>> wrote:
I didn't check at the time.
I deployed everything on a standalone VM.
The VM was just built up with a fresh ne
I was testing IO and I created a bench pool.
But when I tried to delete it I got:
Error EPERM: pool deletion is disabled; you must first set the
mon_allow_pool_delete config option to true before you can destroy a
pool
So I ran:
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
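For completeness, a sketch of the whole sequence, assuming a throwaway pool named "bench" (a placeholder) and switching the flag back off afterwards:

```shell
# Allow pool deletion on all mons (runtime change only; lost on restart).
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# Delete the pool; the name must be typed twice.
ceph osd pool rm bench bench --yes-i-really-really-mean-it
# Turn the safety back on.
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
```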
I didn't check at the time.
I deployed everything on a standalone VM.
The VM was just built up with a fresh new CentOS 7.4 using the minimal
installation ISO 1708.
It's a completely new/fresh/empty system.
Then I ran:
yum update -y
yum install wget zip unzip vim pciutils -y
yum install epel-release -y
yum up
On 28/02/2018 15:19, Jason Dillaman wrote:
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini
wrote:
I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:
CentOS 7.5
(which is not available yet; it's still at 7.4)
https://wiki.centos.org/Download
K
Sorry for being rude, Ross,
I have followed Ceph since 2014, waiting for iSCSI support in order to use it
with Xen.
When it finally seemed to be implemented, the OS requirements were
unrealistic.
Seems like a bad prank: 4 years waiting for this... and still no true support
yet.
On 28/02/2018 14:11, Marc