Just a comment, not meaning to hijack the thread.
Sometimes when things are working so well, why go any higher? I mean, I
suffered with 3.x, 4.x and 5.x with memory leaks in the gluster FUSE
mount, as well as other bugs that broke writing, and after having to reboot
servers every 2-4 weeks
Been using Gluster since the 3.3.x days, been burned a few times, and if it
was not for the help of the community (one specific time saved big by Joe
Julian), I would not have continued using it.
My main use was originally as a self-hosted engine via NFS and a file
server for Windows clients with
I can echo the sentiment: I lived with a horrible memory leak on FUSE-mounted
glusterfs all the way from some 3.x version (I think it went bad on 3.7,
but cannot remember) through 4.x, over a period of 2+ years, where the only
solution was to reboot the servers every 15 to 30 days to clear the memory
In the past there have been problems with using vfs = glusterfs in samba,
specifically failed writes when Windows clients try to write to samba
shares that are accessing gluster directly using that vfs plugin. Have you
tried exporting the samba share from a FUSE mount? If that has no write
the following:
>
> gluster v get <volname> readdir-ahead
>
> If this is enabled, please disable it and see if it helps. There was a
> leak in the opendir codepath that was fixed in later releases.
>
> Regards,
> Nithya
>
>
> On Tue, 30 Jul 2019 at 09:04, Diego Remolina
Will this kill the actual process or simply trigger the dump? Which process
should I kill? The brick process in the system or the fuse mount?
Diego
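For what it's worth, per the statedump docs linked below, sending SIGUSR1
only triggers the dump and does not kill the process. A minimal sketch,
assuming the FUSE mount client is the process of interest (the pid is
hypothetical):

  pgrep -af glusterfs      # the FUSE client is glusterfs, bricks are glusterfsd
  kill -USR1 <pid>         # dump lands under /var/run/gluster by default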
On Mon, Jul 29, 2019, 23:27 Nithya Balachandran wrote:
>
>
> On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote:
>
>> Unfor
ssue. Also I see that the cache size is 10G; is that something
> you arrived at after doing some tests? It's relatively higher than normal.
>
> [1]
> https://docs.gluster.org/en/v3/Troubleshooting/statedump/#generate-a-statedump
>
> On Mon, Mar 4, 2019 at 12:23 AM Diego Rem
> the following information to debug further:
> - Gluster volume info output
> - Statedump of the Gluster fuse mount process consuming 44G ram.
>
> Regards,
> Poornima
>
>
> On Sat, Mar 2, 2019, 3:40 AM Diego Remolina wrote:
>
>> I am using glusterfs with two servers
I am using glusterfs with two servers as a file server, sharing files via
samba and ctdb. I cannot use the samba vfs gluster plugin due to a bug in
the current CentOS version of samba, so I am mounting via FUSE and exporting
the volume to samba from the mount point.
Upon initial boot, the server where
berto.nune...@gmail.com> wrote:
>>
>>> Yep!
>>> But as I mentioned in a previous e-mail, even with 3 or 4 servers these
>>> issues occur.
>>> I don't know what's happening.
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>
Glusterfs needs quorum, so if you have two servers and one goes down, there
is no quorum, so all writes stop until the server comes back up. You can
add a third server as an arbiter which does not store data in the bricks,
but still uses some minimal space (to keep metadata for the files).
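As a rough sketch (volume, host and brick path are hypothetical), converting
an existing replica 2 volume to use an arbiter looks like:

  gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/bricks/arbiter/myvol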
HTH,
The OP (me) has a two node setup. I am not sure how many nodes in Artem's
configuration (he is running 4.0.2).
It can make sense that the more bricks you have, the higher the performance
hit in certain conditions, given that supposedly one of the issues of
gluster with many small files is that
cache-invalidation on
Any other volume files were the same.
HTH,
Diego
On Tue, Jan 15, 2019 at 2:04 PM Davide Obbi wrote:
> I think you can find the volume options by doing a grep -R option
> /var/lib/glusterd/vols/; the .vol files show the options
>
> On Tue, Jan 15, 2019 at 2:2
order.
Diego
On Tue, Jan 15, 2019 at 5:03 AM Davide Obbi wrote:
>
>
> On Tue, Jan 15, 2019 at 2:18 AM Diego Remolina wrote:
>
>> Dear all,
>>
>> I was running gluster 3.10.12 on a pair of servers and recently upgraded
>> to 4.1.6. There is a cron job tha
Dear all,
I was running gluster 3.10.12 on a pair of servers and recently upgraded to
4.1.6. There is a cron job that runs nightly on one machine, which rsyncs
the data on the servers over to another machine for backup purposes. The
rsync operation runs on one of the gluster servers, which mounts
howing 4.8.3.
>
>
>
> Thank you!
>
>
>
>
>
> From: gluster-users-boun...@gluster.org
> On Behalf Of Matt Waymack
> Sent: Sunday, December 16, 2018 1:55 PM
> To: Diego Remolina
> Cc: gluster-users@gluster.org List
> Subject: Re: [Gluster-users] Unable to
You have encountered a (if not "the") major flaw in glusterfs: it is
not very good at dealing with lots of small files.
There are some tunables in gluster that may help just a bit, but you
will *not* get the same speeds as raw direct-attached storage without
clustering, or even be close to it.
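For reference, the tunables usually suggested for small-file and metadata
heavy workloads are along these lines; a sketch with a hypothetical volume
name, not a guaranteed fix:

  gluster volume set myvol features.cache-invalidation on
  gluster volume set myvol features.cache-invalidation-timeout 600
  gluster volume set myvol performance.stat-prefetch on
  gluster volume set myvol performance.md-cache-timeout 600
  gluster volume set myvol network.inode-lru-limit 90000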
Matt,
Can you test the updated samba packages that the CentOS team has built for
FasTrack?
A NOTE has been added to this issue.
--
(0033351) pgreco (developer) - 2018-12-15 13:43
Anoop C S wrote:
>
> On Wed, 2018-11-14 at 22:19 -0500, Diego Remolina wrote:
> > Hi,
> >
> > Please download the logs from:
> >
> > https://www.dropbox.com/s/4k0zvmn4izhjtg7/samba-logs.tar.bz2?dl=0
>
> [2018/11/14 22:01:31.974084, 10, pid=7577, effective(
at 07:50 -0500, Diego Remolina wrote:
> > >
> > > Thanks for explaining the issue.
> > >
> > > I understand that you are experiencing hang while doing some operations
> > > on files/directories in
> > > a
> > > GlusterFS volume share f
>
> Thanks for explaining the issue.
>
> I understand that you are experiencing hang while doing some operations on
> files/directories in a
> GlusterFS volume share from a Windows client. For simplicity can you attach
> the output of following
> command:
>
> # gluster volume info
> # testparm
rage bricks over
> >: Infiniband RDMA or TCP/IP interconnect into one large parallel
> >: network file system. GlusterFS is one of the most sophisticated
> >: file systems in terms of features and extensibility. It borrows
> >: a power
n 11/9/18 10:19 AM, Diego Remolina wrote:
> > Hmmm yes and no...
>
> Yes and no what?
>
> That samba comes from the CentOS-Base/updates repo.
>
> If you don't use the Storage SIG you will get glusterfs 3.8.4,
> client-side only; also from the CentOS-Base/updates repo.
Hmmm yes and no...
I have had problems with samba 4.7.x from the CentOS repos and Gluster
3.10.x when using the vfs gluster plugin to access shares, so I had to use
FUSE mounts and point samba at the FUSE mounts.
I tried an upgrade to glusterfs 4.1.5 a couple of days ago and samba
worked with it, but still using
Hi,
I have just updated gluster and I am now taking a look at the logs and
I am seeing a lot of entries which are similar. Are these something to
worry about?
The volume seems to be OK:
[root@ysmha02 export]# gluster v heal export info
Brick 10.0.1.7:/bricks/hdds/brick
Status: Connected
Number
You may have a typo: in "_netdev" you are missing the "_".
Give that a try.
Diego
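For reference, a minimal /etc/fstab sketch with the corrected option
(server, volume and mount point are hypothetical):

  server1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0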
On Mon, Oct 15, 2018, 15:33 Alfredo De Luca
wrote:
> Hi all.
> I have 3 nodes glusterfs servers and multiple client and as I am a bit
> newbie on this not sure how to setup correctly the clients.
> 1. The clients
Create a non-gluster share on a local drive. Make sure you can access that
share, to confirm whether the problem is gluster and the vfs plugin,
or actually a samba authentication problem.
Is the machine tied to a domain, or is it a standalone samba server?
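A minimal sketch of such a test share (name and path are hypothetical):

  [localtest]
      path = /srv/localtest
      browseable = yes
      read only = no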
Diego
On Wed, Oct 10, 2018, 06:57
Per: https://www.samba.org/samba/docs/current/man-html/vfs_glusterfs.8.html
Does adding: kernel share modes = no
to smb.conf and restarting samba help?
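Per that man page, a sketch of a share using the vfs plugin with that option
(share name and volume are hypothetical):

  [share]
      path = /
      vfs objects = glusterfs
      glusterfs:volume = gv0
      kernel share modes = no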
FWIW, I have had many recent problems using the samba vfs plugins on
CentOS 7.5 (latest) against a 3.10.x glusterfs server. When exporting
via
read, should I start a new one for that issue?
Diego
On Fri, Oct 5, 2018 at 10:20 AM Diego Remolina wrote:
>
> Hi,
>
> Thanks for the reply!
>
> This was set up a few years ago and was working OK, even when falling back to
> this server. We had not failed over to this server r
/3.3/html-single/administration_guide/#sect-SMB_CTDB
Thanks
Diego
On Thu, Oct 4, 2018, 09:16 Poornima Gurusiddaiah
wrote:
>
>
> On Tue, Oct 2, 2018 at 5:26 PM Diego Remolina wrote:
>
>> Dear all,
>>
>> I have a two node setup running on Centos and gluster version
>
Dear all,
I have a two node setup running on Centos and gluster version
glusterfs-3.10.12-1.el7.x86_64
One of my nodes died (motherboard issue). Since I had to keep things
running, I modified the quorum to below 50% to make sure I could
still run on one server.
The server runs ovirt and 2 VMs on
e tasks
>>>>
>>>> [root@glusterp1 gv0]# ls -l glusterp1/images/
>>>> total 2877064
>>>> -rw-------. 2 root root 107390828544 May 10 12:18
>>>> centos-server-001.qcow2
>>>> -rw-r--r--. 2 root root            0 May
Show us output from: gluster v status
It should be easy to fix. Stop gluster daemon on that node, mount the
brick, start gluster daemon again.
Check: gluster v status
Does it show the brick up?
HTH,
Diego
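A minimal sketch of that sequence, assuming systemd and a brick filesystem
already listed in fstab (the mount point is hypothetical):

  systemctl stop glusterd
  mount /bricks/brick1         # bring the brick filesystem back
  systemctl start glusterd
  gluster v status             # the brick should now show Online: Y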
On Wed, May 9, 2018, 20:01 Thing wrote:
> Hi,
>
> I have 3
I went 3.6.7 to 3.10.3 with no problem. I have not tested 3.12.
You should be able to do it in one jump, but will need downtime.
Check the previous threads for similar upgrade situations which
contain some detailed procedures.
HTH,
Diego
On Mon, Oct 2, 2017 at 7:52 AM, Roman
I've noticed this as well on the official 3.8.4 gluster packages from Red Hat
# gluster v status
Status of volume: aevmstorage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick
Hi Martin,
> Do you mean latest package from Ubuntu repository or latest package from
> Gluster PPA (3.7.20-ubuntu1~xenial1).
> Currently I am using Ubuntu repository package, but want to use PPA for
> upgrade because Ubuntu has old packages of Gluster in repo.
When you switch to PPA, make sure
Procedure looks good.
Remember to back up Gluster config files before update:
/etc/glusterfs
/var/lib/glusterd
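A minimal sketch of such a backup (archive path is hypothetical):

  tar czf /root/gluster-config-$(date +%F).tar.gz /etc/glusterfs /var/lib/glusterd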
If you are *not* on the latest 3.7.x, you are unlikely to be able to go
back to it because PPA only keeps the latest version of each major branch,
so keep that in mind. With Ubuntu,
Not so fast, 3.8.4 is the latest if you are using official RHEL rpms
from Red Hat Gluster Storage, so support for that should go through
your Red Hat subscription. If you are using the community packages,
then yes, you want to update to a more current version.
Seems like the latest is:
wntime,
> which means that there are no clients writing to this gluster volume, as
> the volume is stopped.
>
> But post upgrade we will still have the data on the gluster volume
> that we had before the upgrade (but with downtime).
>
> - Hemant
>
>
> On 9/13/17 2:33
Nope, not gonna work... I could never go even from 3.6 to 3.7 without
downtime because of the settings change, see:
http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html
Even when changing options in the older 3.6.x I had installed, my new
3.7.x server would not connect,
I currently only have a Windows 2012 R2 server VM in testing on top of
the gluster storage, so I will have to take some time to provision a
couple Linux VMs with both ext4 and XFS to see what happens on those.
The Windows server VM is OK with killall glusterfsd, but when the 42
second timeout
I would prefer the behavior were different from what it is, i.e. I/O stopping.
The argument I heard for the long 42-second timeout was that the MTBF on a
server was high, and that the client reconnection operation was *costly*.
Those were arguments to *not* change the ping timeout value down from 42
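For reference, that timeout is the network.ping-timeout volume option; a
sketch with a hypothetical volume name:

  gluster volume get myvol network.ping-timeout      # defaults to 42
  gluster volume set myvol network.ping-timeout 10   # lower it at your own risk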
From my understanding of gluster, that is to be expected since instead
of having to stat a single file without sharding, now you have to stat
multiple files when you shard. Remember that gluster is not so great
at dealing with "lots" of files, so if you have a single 100GB
file/image stored in
; (3.6.x => 3.10.x)?
> Thanks.
>
> On Fri, Aug 25, 2017 at 8:42 PM, Diego Remolina <dijur...@gmail.com> wrote:
>>
>> I was never able to go from 3.6.x to 3.7.x without downtime. Then
>> 3.7.x did not work well for me, so I stuck with 3.6.x until recently.
>> I went
You cannot do a rolling upgrade from 3.6.x to 3.10.x. You will need downtime.
Even 3.6 to 3.7 was not possible... see some references to it below:
https://marc.info/?l=gluster-users&m=145136214452772&w=2
https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/
# gluster volume set
glusterfs-fuse-3.10.3-1.el7.x86_64
samba-vfs-glusterfs-4.4.4-14.el7_3.x86_64
glusterfs-libs-3.10.3-1.el7.x86_64
glusterfs-cli-3.10.3-1.el7.x86_64
Diego
On Mon, Jun 26, 2017 at 10:14 AM, Diego Remolina <dijur...@gmail.com> wrote:
> Hi,
>
> I was wondering if you have had a chance to r
rsion earlier than 3.10 that we should be looking at as an
> interim step?
>
> Brett
>
>
> From: Diego Remolina <dijur...@gmail.com>
> Sent: Tuesday, August 8, 2017 10:39:27 PM
> To: Brett Randall
> Cc: gluster-users@gluster.org List
> Subjec
I had a mixed experience going from 3.6.6 to 3.10.2 on a two server
setup. I have since upgraded to 3.10.3 but I still have a bad problem
with specific files (see CONS below).
PROS
- Back on a "supported" version.
- Windows roaming profiles (small file performance) improved
significantly via
>
> You should first upgrade servers and then clients. New servers can
> understand old clients, but it is not easy for old servers to understand new
> clients in case they start doing something new.
But isn't that the reason op-version exists? So that regardless of
client/server mix, nobody
Hi,
I was wondering if you have had a chance to review the logs and if you
can tell me know if more information is needed.
Diego
On Mon, Jun 12, 2017 at 8:20 AM, Diego Remolina <dijur...@gmail.com> wrote:
> Did the logs provide any hints as to what the issue may be?
>
> Diego
&g
Did the logs provide any hints as to what the issue may be?
Diego
On Sat, Jun 3, 2017 at 12:16 PM, Diego Remolina <dijur...@gmail.com> wrote:
> Thanks for taking the time to look into this. Since we needed downtime
> due to the gluster update, we also updated the OS, including samb
min,@Managers
writeable = yes
guest ok = no
create mask = 660
directory mask = 770
This test will determine if the issue is the samba vfs gluster plugin
or if it is the fact that the file is stored in the gluster volume.
Any other thoughts?
Diego
On Wed, May 31, 2017 at 12:59
ame: /lib64/libglusterfs.so.0
On Wed, May 31, 2017 at 12:39 PM, Diego Remolina <dijur...@gmail.com> wrote:
> Samba is running in the same machine as glusterd. The machines were
> rebooted after the upgrades and samba has been restarted a few times.
>
> # rpm -qa | grep glust
if you forgot to update Gluster client packages on Samba
>> node?
>>
>> On Wed, May 31, 2017 at 9:04 PM, Diego Remolina <dijur...@gmail.com> wrote:
>>> Please download the log file from this link:
>>>
>>> https://drive.google.com/open?id=0B8EAPWIe4
iles from /var/log/samba.
>
> This might be because of cluster.lookup-optimize. Adding Poornima and
> Raghavendra Gowdappa to help with this.
>
>
> On Wed, May 31, 2017 at 1:03 AM, Diego Remolina <dijur...@gmail.com> wrote:
>> This is a bit puzzling, not sure what diff
Hi,
While we were running 3.6.6 we used to do a nightly rsync to keep a
backup copy of our gluster volume on a separate server. Jobs would
usually start at 9PM and end around 2:00 to 2:30AM consistently every
time.
After upgrading to 3.10.2, the same rsync job (nightly cron) is now
taking an
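For context, the nightly job is essentially a plain rsync of the FUSE mount;
a sketch with hypothetical paths and host:

  rsync -aHAX --delete /mnt/gluster/ backupserver:/backup/gluster/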
at 10:57 AM, Diego Remolina <dijur...@gmail.com> wrote:
> This is what I see in the log file set in smb.conf via the line ->
> glusterfs:logfile = /var/log/samba/glusterfs-projects.log
>
> [2017-05-30 14:52:31.051524] E [MSGID: 123001]
> [io-cache.c:564:ioc_open_cbk] 0-export
RCH/CrownAcura-SD02-ArchModel.rvt (a97bc9bb-68cf-4a69-
aef7-39766b323c14). Key: glusterfs.get_real_filename:desktop.ini [Not
a directory]
On Tue, May 30, 2017 at 10:37 AM, Diego Remolina <dijur...@gmail.com> wrote:
> Hi,
>
> Over the weekend we updated a two server glusterfs 3.6.6 inst
Hi,
Over the weekend we updated a two server glusterfs 3.6.6 install to
3.10.2. We also updated samba and samba-vfs to the latest in CentOS. I
enabled several of the newer caching features from gluster 3.9 for
small file performance and samba, and we now seem to have some issues
with accessing
Servers now also come with copper 10Gbit network adapters built into the
motherboard (Dell R730, Supermicro, etc.). But for those that do not, I have
used the Intel X540-T2 adapters with CentOS 7 and RHEL 7.
As for switches, our infrastructure uses expensive Cisco 9XXX series and
FEX expanders,
is there, or if it is how do I get those specific ones?
Say I need to go back to 3.6.7, I can locate:
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+build/8350060
But no packages there.
Diego
On Fri, Aug 19, 2016 at 5:57 AM, Diego Remolina <dijur...@gmail.com> wrote:
> The issue i
ieson" <lindsay.mathie...@gmail.com>
wrote:
> On 19/08/2016 3:45 AM, Diego Remolina wrote:
>
>> The one thing that still remains a mystery to me is how to downgrade
>> glusterfs packages in Ubuntu. I have never been able to do that. There
>> was also a post from someon
The one thing that still remains a mystery to me is how to downgrade
glusterfs packages in Ubuntu. I have never been able to do that. There
was also a post from someone about it recently on the list and I do
not think it got any replies.
Diego
On Thu, Aug 18, 2016 at 1:39 PM, Joe Julian
Probably not really a specific gluster issue, but yes, you can do that
with extended ACLs.
You will declare a specific execute-only permission for that user on all
directories above documentation, and on documentation itself it should
be read/execute if you only want him to read, or rwx if you want him
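A sketch with a hypothetical user and paths: execute-only to traverse the
parent directories, read plus execute on the target:

  setfacl -m u:jdoe:x /data /data/projects
  setfacl -m u:jdoe:rx /data/projects/documentation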
Going from 3.6 to 3.7 is very likely not going to happen without downtime.
I am speaking from experience. Regardless of all the
recommendations given about the changes for rpc-auth-allow-insecure
that came in the 3.7.2 to 3.7.3 versions, I was never able to get
things upgraded to >= 3.7.3
I run a few two node glusterfs instances, but always have a third
machine acting as an arbiter. I am with Jeff on this one, better safe
than sorry.
Setting up a 3rd system without bricks to achieve quorum is very easy.
Diego
On Fri, Mar 4, 2016 at 10:40 AM, Jeff Darcy
I actually had this problem with CentOS 7 and glusterfs 3.7.x
I downgraded to 3.6.x and the crashes stopped.
See https://bugzilla.redhat.com/show_bug.cgi?id=1234877
It may be the same issue.
I am still in the old samba-vfs-glusterfs-4.1.12-23.el7_1.x86_64 and
glusterfs-3.6.6-1.el7.x86_64 on
It is also worth noting that if you are using replica=2 you will
almost always want a third node which has no bricks (unless you can
afford a third node and replica=3) to provide quorum. You should then
set your quorum ratio to 51%. This is to avoid split brain situations.
Some RH doc reference
Install and configure gluster, make sure the firewall openings are
proper on all nodes, and then run:
gluster peer probe (new node IP)
Then you should see it in gluster volume status [volname]
# gluster v status export
Status of volume: export
Gluster process
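A minimal sketch (the new node IP is hypothetical; on CentOS 7 the glusterfs
firewalld service ships with the gluster packages, otherwise open 24007-24008
plus the brick ports):

  firewall-cmd --permanent --add-service=glusterfs && firewall-cmd --reload
  gluster peer probe 10.0.1.9
  gluster peer status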
Yes, you need to avoid split brain on a two node replica=2 setup. You
can just add a third node with no bricks which serves as the arbiter
and set quorum to 51%.
If you set quorum to 51% and do not have more than 2 nodes, then when
one goes down all your gluster mounts become unavailable (or is
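For reference, the ratio is a cluster-wide option; this sketch applies it to
all volumes:

  gluster volume set all cluster.server-quorum-ratio 51%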
wrote:
> Now I understand it, thanks
>
> /Peter
>
> -----Original message-----
> From: Diego Remolina [mailto:dijur...@gmail.com]
> Sent: 30 October 2015 14:11
> To: Peter Michael Calum
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] How-to st
I am running Ovirt and self-hosted engine with additional vms on a
replica two gluster volume. I have an "arbiter" node and set quorum
ratio to 51%. The arbiter node is just another machine with the
glusterfs bits installed that is part of the gluster peers but has no
bricks to it.
You will have
I have not quite had such a bad experience with upgrades (I was able to
do all upgrades of the 3.5 and 3.6 series without downtime, except for
the 3.6 to 3.7 upgrade, where regardless of adding the
rpc-allow-insecure option it would not let the 3.7 node talk to the 3.6).
However, I had to recently
On all your shares where you use vfs objects = glusterfs also add the option:
kernel share modes = No
Then restart samba.
Here is one of my example shares:
[Projects]
path = /projects
browseable = yes
write list = @Staff,root,@Admin,@Managers
writeable = yes
guest ok = no
Bump...
Does anybody have any clues as to how I can try to identify the cause of
the slowness?
Diego
On Wed, Sep 9, 2015 at 7:42 PM, Diego Remolina <dijur...@gmail.com> wrote:
> Hi,
>
> I am running two glusterfs servers as replicas. I have a 3rd server
> which provides quo
ilesystems that use a
> separate in-memory metadata server. I've tried LizardFS and MooseFS and they
> are both much faster than GlusterFS for small files, although large-file
> sequential performance is not as good (but still plenty for a Samba server).
>
> Alex
>
>
> On 14/0
See below
On Mon, Sep 14, 2015 at 11:06 AM, Ben Turner <btur...@redhat.com> wrote:
> - Original Message -
>> From: "Diego Remolina" <dijur...@gmail.com>
>> To: "Alex Crow" <ac...@integrafin.co.uk>
>> Cc: gluster-users
Hi,
I am running two glusterfs servers as replicas. I have a 3rd server
which provides quorum. Since gluster was introduced, we have had an
issue where Windows roaming profiles are extremely slow. The initial
setup was done on 3.6.x and since 3.7.x has small file performance
improvements, I