Hi,
Is there a way to assign weights to bricks on different servers in a
gluster volume? AFAIK, weights are automatically assigned based on brick
size (found in mailing lists for version 3.6+). However, we have one
server whose bricks are 3x the size of the majority (purely distributed
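For reference, newer releases expose the size-based weighting as a volume tunable. A minimal sketch, assuming a volume named `myvol` (a placeholder) and a 3.6+ build; per-brick manual weights are not exposed, the weighting follows brick size:

```shell
# Enable size-weighted rebalance (default on in 3.6+); "myvol" is a
# placeholder volume name. Run a rebalance afterwards so the layout
# is recomputed with the weights applied.
gluster volume set myvol cluster.weighted-rebalance on
gluster volume rebalance myvol start
```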
Hi all,
Yesterday we had our weekly meeting at the usual time, the minutes can
be found below or on these URLs:
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-10/gluster-meeting.2015-06-10-12.01.html
Minutes (text):
Hi,
I'm running 3.5.3 and noticed that 3.5.4 is out. So I plan to
upgrade and am therefore planning a downtime for some VMs. As there will be
a downtime, I'd like to ask you a question:
to which version is it safe to upgrade?
3.5.4 is safe, OK.
What about 3.6 or 3.7? Is it safe to
Hi,
We have a 3.6.3 cluster and all the clients were running the same
version of glusterfs until I accidentally upgraded one of the client
machines (which uses the fuse mount) to 3.7.1 when doing a yum update.
I'd prefer to not mix the versions and don't want to upgrade the lot to
3.7.x yet, so
Sent from Samsung Galaxy S4
On 11 Jun 2015 20:09, Kingsley glus...@gluster.dogwind.com wrote:
Hi,
We have a 3.6.3 cluster and all the clients were running the same
version of glusterfs until I accidentally upgraded one of the client
machines (which uses the fuse mount) to 3.7.1 when doing a
Hi,
Debian 8.1 is released, but I still have problems installing it as a
qemu-kvm guest on a GlusterFS replicated storage volume on all of my Proxmox
servers. If I choose the raw disk image format for the virtual HDD, the installation
just takes ages. If I choose the qcow2 format, the installation stops at
Ah, so much information to share, and I forgot one more thing:
if I create the VM on the Distributed volume and then convert it to a template
and clone it to the HA volume, things also seem to work fine.
So something goes wrong only during the installation process.
p.s.
I did iperf tests with all of
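Given that installs on the Distributed volume work, one workaround sketch is to install there and move the disk image to the replicated (HA) volume afterwards; all paths and image names below are hypothetical examples:

```shell
# Install the guest on the Distributed volume first, then copy/convert
# the finished disk image onto the replicated (HA) volume.
# Paths are illustrative only; -p shows conversion progress.
qemu-img convert -p -f qcow2 -O qcow2 \
    /mnt/dist-vol/images/debian81.qcow2 \
    /mnt/ha-vol/images/debian81.qcow2
```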
Usually works:
yum downgrade packagename
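For the accidental 3.7.1 client mentioned above, the rollback is usually a downgrade of the whole glusterfs package set, pinned so the next update does not re-upgrade it. Package names here are illustrative; they should match whatever is actually installed on the client:

```shell
# Roll the client back to the repo's previous glusterfs packages;
# downgrade them together so the versions stay consistent.
yum downgrade glusterfs glusterfs-fuse glusterfs-libs glusterfs-api

# Pin the result so a later "yum update" does not pull 3.7.x again.
yum install yum-plugin-versionlock
yum versionlock 'glusterfs*'
```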
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kingsley
Sent:
Thanks Atin, that's good news.
So I just have to wait for the new version.
In fact I don't want to upgrade to 3.7.1, but I need to repair my faulty servers,
and only the new version 3.7.1 is in the repos. Why can't multiple versions be
kept in the repos? My original version is 3.6.2.
-Original Message-
From: Atin Mukherjee
On Wed, Jun 10, 2015 at 3:32 PM, Vijay Bellur vbel...@redhat.com wrote:
On 06/10/2015 03:24 PM, Venky Shankar wrote:
I would like to propose Aravinda (avish...@redhat.com) as a maintainer
for Geo-replication. I can act as a backup maintainer in his absence.
Hope it's not too late for this.
My apologies if this has already been answered, but my Google-fu produced
only one promising thread:
http://www.gluster.org/pipermail/gluster-users/2014-October/019115.html
that did not satisfy me completely.
Right now we have two distribute volumes consisting of two bricks each.
Both are quite
Hi all,
My GlusterFS pool was updated from 3.6.2 to 3.7.1; the node server OS is CentOS
7.1.1503.
Some servers work well, but one server hit a glusterd startup problem. Can
anyone help me?
some message below:
[root@gwgfs02 bricks]# systemctl status glusterd
glusterd.service - GlusterFS, a clustered
This is an issue with 3.7.1; the rebalance code path in glusterd is broken.
The fix will be released in 3.7.2.
~Atin
On 06/11/2015 12:21 PM, 何亦军 wrote:
Hi all,
My GlusterFS pool was updated from 3.6.2 to 3.7.1; the node server OS is CentOS
7.1.1503.
Some servers work well, but one server met
Hi,
I get a core dump when running gluster volume quota foo list --xml. How can I fix it?
env:
glusterfs 3.4.5
centos 6.5
log:
# gluster volume quota foo list --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>115</opErrno>
  <opErrstr/>
  <volQuota>
    <quota>
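For scripting against the CLI's --xml output (e.g. to check the status fields before trusting the listing), the fields can be pulled out with standard tools. This sketch parses a captured sample reply rather than calling gluster; note that opErrno 115 corresponds to EINPROGRESS on Linux:

```shell
# Extract opRet/opErrno from a captured "gluster ... --xml" reply.
# The sample below mirrors the output shown above.
cat > /tmp/quota-sample.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><opRet>0</opRet><opErrno>115</opErrno><opErrstr/></cliOutput>
EOF

# sed with ':' as delimiter so '</opRet>' needs no escaping.
opRet=$(sed -n 's:.*<opRet>\([0-9-]*\)</opRet>.*:\1:p' /tmp/quota-sample.xml)
opErrno=$(sed -n 's:.*<opErrno>\([0-9-]*\)</opErrno>.*:\1:p' /tmp/quota-sample.xml)
echo "opRet=$opRet opErrno=$opErrno"
```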
I found another way; maybe I can change the repo path in
ovirt-3.5-dependencies.repo:
[ovirt-3.5-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several
petabytes.
It must be coming from another repo. yum list glusterfs should show
you which one.
I always use priorities
(http://wiki.centos.org/PackageManagement/Yum/Priorities) so if I were
doing it, I would set the glusterfs repos with a lower priority number
than everything else to ensure it overrides
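A minimal sketch of that priorities setup, with a placeholder baseurl (keep your real repo URL; the repo id and URL here are illustrative only):

```ini
# /etc/yum.repos.d/glusterfs-epel.repo -- illustrative fragment.
# Requires: yum install yum-plugin-priorities
[glusterfs-epel]
name=GlusterFS
baseurl=http://example.org/glusterfs/3.6/epel-$releasever/$basearch/
enabled=1
gpgcheck=1
# Repos without an explicit priority default to 99, so priority=1
# makes this repo win whenever it carries the same package.
priority=1
```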
Soumya, do you have any other idea of what to check on my side?
Many thanks,
Alessandro
Il giorno 10/giu/2015, alle ore 21:07, Alessandro De Salvo
alessandro.desa...@roma1.infn.it ha scritto:
Hi,
by looking at the connections I also see a strange problem:
# netstat -ltaupn |
CCing ganesha-devel to get more inputs.
In case of ipv6 enabled, only v6 interfaces are used by NFS-Ganesha.
commit - git show 'd7e8f255' , which got added in v2.2 has more details.
# netstat -ltaupn | grep 2049
tcp6       4      0 :::2049      :::*      LISTEN      32080/ganesha.nfsd
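To confirm whether that tcp6 listener also accepts IPv4 clients (dual-stack) or is genuinely v6-only, one can check the kernel's bind behaviour; a small sketch:

```shell
# With bindv6only=0 (the usual Linux default), a listener on :::2049
# also accepts IPv4 connections via v4-mapped addresses; with
# bindv6only=1 it is v6-only and IPv4 mounts would fail.
sysctl net.ipv6.bindv6only
ss -ltn '( sport = :2049 )'
```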
Hi,
this was an extract from the old logs, before Soumya's suggestion of
changing the rquota port in the conf file. The new logs are attached
(ganesha-20150611.log.gz) as well as the gstack of the ganesha process
while I was executing the hanging showmount
(ganesha-20150611.gstack.gz).
Thanks
Soumya Koduri [skod...@redhat.com] wrote:
CCing ganesha-devel to get more inputs.
In case of ipv6 enabled, only v6 interfaces are used by NFS-Ganesha.
I am not a network expert, but I have seen IPv4 traffic over an IPv6 interface
while fixing a few things before. This may be normal.
IPv6 can
Soumya Koduri [skod...@redhat.com] wrote:
CCing ganesha-devel to get more inputs.
In case of ipv6 enabled, only v6 interfaces are used by NFS-Ganesha.
I am not a network expert, but I have seen IPv4 traffic over an IPv6
interface while fixing a few things before. This may be normal.
commit - git