I see that 3.7 has settings for tiering; from the wording I presume
hot/cold SSD tiering.
Is this in beta yet? Testable? Are there any usage docs yet?
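(For the record, 3.7 exposes this through the attach-tier/detach-tier CLI; a minimal sketch, with the volume name and SSD brick paths made up, not taken from any reply in this thread:)
gluster volume attach-tier vol1 replica 2 ssd1:/bricks/hot ssd2:/bricks/hot  # replicated hot tier on SSD bricks
gluster volume detach-tier vol1 start   # later: demote files back to the cold tier
gluster volume detach-tier vol1 commit  # then remove the hot tier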
Thanks,
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-09/gluster_community_weekly_meeting.2015-12-09-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-12-09/gluster_community_weekly_meeting.2015-12-09-12.00.txt
Log:
Hi Guys, sorry for the late reply, my attention tends to be somewhat
sporadic due to work and the large number of rescue dogs/cats I care for :)
On 3/12/2015 8:34 PM, Krutika Dhananjay wrote:
We would love to hear from you on what you think of the feature and
where it could be improved.
Hi,
I have a newly set up Gluster file service used as shared storage; the
content management system uses it as its document root. I have run into a
performance issue with the gluster/fuse client.
Looking for your thoughts and experience in resolving Gluster performance
issues:
Gluster
Am 09.12.2015 um 14:39 schrieb Lindsay Mathieson:
Udo, it occurs to me that if your VMs were running on #2 & #3 and you
live migrated them to #1 prior to rebooting #2/3, then you would
indeed rapidly get progressive VM corruption.
However, it wouldn't be due to the heal process, but rather
Directory is present:
# ls -la /var/lib/glusterd/
total 60
drwxr-xr-x. 13 root root 4096 Dec  3 15:34 .
drwxr-xr-x. 25 root root 4096 Dec  9 12:40 ..
drwxr-xr-x.  3 root root 4096 Oct 24 10:06 bitd
-rw-------.  1 root root   66 Dec  3 15:34 glusterd.info
drwxr-xr-x.  3 root root 4096 Dec  2
Am 08.12.2015 um 07:57 schrieb Krutika Dhananjay:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
quorum-type=auto
server-quorum-type=server
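(For anyone wanting to apply these: the shorthand above corresponds to the following volume set options; a sketch assuming a volume named "datastore" — the volume name and the full option prefixes are my additions, not from the mail:)
gluster volume set datastore performance.quick-read off
gluster volume set datastore performance.read-ahead off
gluster volume set datastore performance.io-cache off
gluster volume set datastore performance.stat-prefetch off
gluster volume set datastore cluster.eager-lock enable
gluster volume set datastore network.remote-dio enable
gluster volume set datastore cluster.quorum-type auto
gluster volume set datastore cluster.server-quorum-type server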
Perfectly put. I am one of the devs who work on the replicate module. You
On 7/12/2015 9:03 PM, Udo Giacomozzi wrote:
All VMs were running on machine #1 - the two other machines (#2 and
#3) were *idle*.
Gluster was fully operating (no healing) when I rebooted machine #2.
For other reasons I had to reboot machines #2 and #3 a few times, but
since all VMs were
Hi,
I have a production Gluster file service used as shared storage; the
content management system uses it as its document root. I have run into a
performance issue with the gluster/fuse client.
Looking for your thoughts and experience in resolving Gluster performance
issues:
Gluster
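(A useful first step for this kind of investigation is the built-in io-stats profiler; a minimal sketch, assuming the volume is named "webroot" — a made-up name, not from the mail:)
gluster volume profile webroot start
# reproduce the slow workload, then inspect per-brick latency and FOP counts
gluster volume profile webroot info
gluster volume profile webroot stop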
Hi,
# /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py --volname=VOL_ZIMBRA
Traceback (most recent call last):
  File "/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py", line 60, in <module>
    main()
  File
Am 08.12.2015 um 02:59 schrieb Lindsay Mathieson:
Hi Udo, thanks for posting your volume info settings. Please note for
the following, I am not one of the devs, just a user, so unfortunately
I have no authoritative answers :(
I am running a very similar setup - Proxmox 4.0, three nodes, but
On 10/12/2015 3:15 AM, Udo Giacomozzi wrote:
These were the commands executed on node #2 during step 6:
gluster volume add-brick "systems" replica 3
metal1:/data/gluster/systems
gluster volume heal "systems" full # to trigger sync
Then I waited for replication to finish before
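(One way to verify that the heal has actually completed before moving on; these are standard Gluster CLI commands, not quoted from Udo's mail:)
gluster volume heal systems info              # entries still pending heal, per brick
gluster volume heal systems info split-brain  # should list no entries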
Hi Kaleb,
thank you very much for the quick reply.
I tried what you suggested, but I got the same error.
I tried both
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01.carcano.local="192.168.65.250"
VIP_glstr02.carcano.local="192.168.65.251"
as well as
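(For comparison, not from this thread: the ganesha-ha.conf sample in the 3.7 docs writes the dots of each FQDN as underscores in the VIP variable names, i.e.:)
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01_carcano_local="192.168.65.250"
VIP_glstr02_carcano_local="192.168.65.251"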
Hi,
I upgraded my setup to Gluster 3.7.3 and tested writes both through fuse
and through libgfapi. Attached are the profiles generated from fuse and
from libgfapi. The test program essentially writes blocks of 128K each.
[root@santest2 Base]# time ./GlusterFuseTest
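(The attached test program is not reproduced in the digest; as a rough stand-in, a fuse-side write test of the same shape can be sketched with dd — mount point and block count are made up:)
dd if=/dev/zero of=/mnt/gluster/testfile bs=128k count=8192 oflag=direct  # 1 GiB in 128K blocks, bypassing the page cache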
On 10/12/2015 8:56 AM, Amye Scavarda wrote:
In the interest of making our documentation usable again, we've gone
through MediaWiki (the old community pages and documentation) and
found out what was left behind and what needed to be moved over to our
Github-based wiki pages. We'll be turning
In the interest of making our documentation usable again, we've gone
through MediaWiki (the old community pages and documentation) and found out
what was left behind and what needed to be moved over to our Github-based
wiki pages. We'll be turning those live on Github this week, before
December
Hi Lindsay,
> BTW, I still get a lot of dead links from the search on readthedocs
Can you please share a couple of examples?
If the dead links are due to the recent rearrangement of the docs, they
will vanish soon. That said, when we moved the documentation to
GitHub, we kept
Hi Lindsay,
All the above-mentioned links are broken for the same reason mentioned in
this thread: the "Features" directory was moved out of this repo and is
now part of "gluster-specs". I am not sure how we can clear the search
cache on readthedocs; I will look into it, though.
--Humble
On Thu,
On 10/12/15 13:04, Humble Devassy Chirammal wrote:
>> BTW, I still get a lot of dead links from the search on readthedocs
> Can you please share a couple of examples?
Starting from: http://gluster.readthedocs.org/en/latest, search on
"tier". Gives the following:
On 12/10/2015 02:51 AM, Marco Antonio Carcano wrote:
Hi Kaleb,
thank you very much for the quick reply
I tried what you suggested, but I got the same error
I tried both
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01.carcano.local="192.168.65.250"
----- Original Message -----
> From: "Lindsay Mathieson"
> To: "Krutika Dhananjay" , "Gluster Devel"
> , "gluster-users"
> Sent: Wednesday, December 9, 2015 6:48:40 PM
> Subject: Re: Sharding
Am 09.12.2015 um 17:17 schrieb Joe Julian:
A-1) shut down node #1 (the first that is about to be upgraded)
A-2) remove node #1 from the Proxmox cluster (pvevm delnode "metal1")
A-3) remove node #1 from the Gluster volume/cluster (gluster volume
remove-brick ... && gluster peer detach
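(Spelling out step A-3 with the volume/brick names used earlier in the thread; this exact command line is my reconstruction, not Udo's:)
gluster volume remove-brick systems replica 2 metal1:/data/gluster/systems force  # drop replica count from 3 to 2
gluster peer detach metal1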