No, there is no firewall.
Tomasz Chmielewski
On 2019-03-24 04:54, Strahil wrote:
Hi Tomasz,
Do you have a firewall in between the nodes?
Can you test with the local firewall (on each node) down?
Best Regards,
Strahil Nikolov
On Mar 23, 2019 05:39, Tomasz Chmielewski wrote:
There are three
N/A       N/A        Y       2520
Task Status of Volume storage
------------------------------------------------------------------------------
There are no active volume tasks
Tomasz Chmielewski
https://lxadm.com
were also running:
chown user:group /gluster/mount/dir/$i /gluster/mount/dir/$i/_logs /gluster/mount/dir/$i/pub
Every 5 mins on all servers; probably that contributed to the problem as
well.
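Something like the sketch below could reduce that churn: a hypothetical variant of the cron job that only calls chown when ownership actually differs, so already-correct directories generate no metadata writes (and no self-heal work). The mount point and user:group are placeholders standing in for the real values in the command above.

```shell
#!/bin/sh
# Hypothetical, gentler variant of the 5-minute cron job quoted above.
# Only chown when ownership differs, so a directory that is already
# correct causes no metadata write on the gluster volume.
fix_ownership() {
    mount=$1 owner=$2
    for d in "$mount"/*/; do
        for sub in "$d" "${d}_logs" "${d}pub"; do
            [ -d "$sub" ] || continue
            [ "$(stat -c '%U:%G' "$sub")" = "$owner" ] || chown "$owner" "$sub"
        done
    done
}

# placeholders for the real mount point and owner from the original job:
fix_ownership /gluster/mount/dir user:group
```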
--
Tomasz Chmielewski
http://wpkg.org
3.2.6 on Debian Squeeze, and seeing the very same problem
on different servers.
It only seems to affect directories.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman
On 01/08/2013 06:33 PM, Tomasz Chmielewski wrote:
[root@ca2.sg1 /]# attr -l /data/gluster/lfd/techstudiolfc/pub
Attribute gfid has a 16 byte value for /data/gluster/lfd/techstudiolfc/pub
Attribute afr.shared-client-0 has a 12 byte value for /data/gluster/lfd/techstudiolfc/pub
) self heal kicking in on
one of the servers.
I didn't test it very intensively - but on the other hand, I *only* see these
failing self-heals on directories created automatically, as described above.
Does it make any sense?
(basically,
removing that attribute)?
setfattr -x trusted.afr.shared-client-0 /data/gluster/lfd/techstudiolfc/pub
setfattr -x trusted.afr.shared-client-1 /data/gluster/lfd/techstudiolfc/pub
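As an aside for anyone nervous about setfattr -x: the same remove-xattr pattern can be tried safely on a throwaway file first, using the user.* namespace (the trusted.* namespace gluster uses requires root on the brick). The attribute name below is invented for the demo, and the block skips cleanly if the attr package is not installed:

```shell
# Safe, self-contained demo of the setfattr -x pattern. user.afr.demo is
# an invented name; on a real brick the attribute would be
# trusted.afr.<volume>-client-N, removed as root on the brick path.
command -v setfattr >/dev/null 2>&1 || { echo "attr tools not installed"; exit 0; }
f=$(mktemp ./xattr-demo.XXXXXX)

setfattr -n user.afr.demo -v 0x000000000000000000000000 "$f"
getfattr -d -m user.afr "$f"     # attribute is listed with its value

setfattr -x user.afr.demo "$f"   # same shape as the trusted.afr removal
getfattr -d -m user.afr "$f"     # prints nothing: attribute is gone

rm -f "$f"
```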
blocks to free some memory for glusterfsd...
http://gluster.org/pipermail/gluster-users/2011-January/006477.html
https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-2320
--
Tomasz Chmielewski
http://www.ptraveler.com
on localhost,
- around 300 Mbit/s with NFS mounts to glusterd on localhost,
- around 500 Mbit/s with NFS mounts to glusterd on localhost, and fsc/cachefilesd.
Killing the option to use NFS mounts on localhost is certainly quite the
opposite to my performance needs!
On 07/13/2012 05:46 PM, David Coulson wrote:
On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:
Killing the option to use NFS mounts on localhost is certainly quite
the opposite to my performance needs!
He was saying you can't run the kernel NFS server and the gluster NFS
server at the same time
, mount (on servers using gluster mount).
Is it expected behaviour with gluster and NFS mounts on localhost? Can
it be caused by some kind of deadlock? Any workarounds?
On 07/09/2012 05:20 PM, Tomasz Chmielewski wrote:
I've once disabled NFS in gluster with this:
gluster volume set sites nfs.disable off
Now, I wanted to enable it again, so I did:
gluster volume set sites nfs.disable on
Of course it was made in the other order (nfs.disable
.
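For the record - and as the follow-up above already notes, the two commands are quoted with the values swapped - nfs.disable is a negated option. A sketch with the volume name sites from the thread (these need a live cluster, so treat it as a command fragment rather than something runnable here):

```shell
# nfs.disable is negated: "on" turns gluster's built-in NFS server OFF,
# "off" turns it back on. "sites" is the volume name from the thread.
gluster volume set sites nfs.disable on    # disable the NFS server
gluster volume set sites nfs.disable off   # re-enable it

# check the currently configured value:
gluster volume info sites | grep nfs.disable
```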
Both are ~30 MB downloads, around 1 GB after uncompressing.
Note that /var/log/glusterfs/var-ftp-sites.log was truncated, as it was
overfilling the partition.
The cluster has 8 more machines with a similar size of the log (like
ca2.tar.bz2).
sites fix-layout start
After fix-layout was complete, I've run:
gluster volume rebalance sites start
Which also completed successfully. The flood of messages is still there.
Right now, I've started fix-layout once again, but I still see the
same flood of messages.
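The sequence described above can be sketched as follows (sites is the volume from the thread; these commands need a live cluster, so this is only an illustration of the ordering):

```shell
# 1) recompute the directory layout so all bricks receive new files:
gluster volume rebalance sites fix-layout start
gluster volume rebalance sites status     # poll until it reports completed

# 2) then migrate the existing data to match the new layout:
gluster volume rebalance sites start
gluster volume rebalance sites status
```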
rebalance.
the existing data, and
hope it will mirror the data, not remove it (I'm concerned that xattrs
will somehow confuse glusterfs)
nginx after that,
sometimes nginx wouldn't start, as there is some glusterfs local process with
an open connection to port 80 (locally)
1) delete the distribute volume
2) create a distribute-replicate volume
3) run the self-heal, which hopefully results in the data moved to the
other brick, *not* removed?
ESTABLISHED
13843/glusterfs
Why does glusterfs do it?
Killing glusterfs and mounting again usually fixes the problem.
:/data/ca1 abort
replace-brick abort failed
root@ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 start
replace-brick failed to start
packages? Will they be provided anytime soon?
-4d6c-8622-c7586d539ead
State: Peer in Cluster (Connected)
Hostname: gl5
Uuid: 8c8d980f-15f2-4345-90f2-f75365bf9812
State: Peer in Cluster (Connected)
On 09.07.2011 09:41, Anand Avati wrote:
On Wed, Jul 6, 2011 at 6:58 AM, Tomasz Chmielewski man...@wpkg.org
mailto:man...@wpkg.org wrote:
One of the peers is behaving very slow - load is constantly around
6-15, and generally all connected clients work very slow.
As soon
, please someone correct me if this is wrong - as pointed out by Papp, the
documentation seems to suggest it is not needed:
You can set volume options, as needed, while the cluster is
online and available.
of the peers opening so many files?
; 3.1.4 works fine.
configure it.
?
.
Is it expected?
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: web3-int:/home/gluster-data
Brick2: web4-int:/home/gluster-data
Options Reconfigured:
auth.allow: 192.168.1.*
On 28.05.2011 01:36, Mohit Anchlia wrote:
What about ping web3-int?
It's the local server, and it works, too.
repeatable for you, please file
a bug report and describe the way to reproduce it.
Since the problems happen so often for you, I'm sure it shouldn't be so
hard to produce a good test case.
Initiating flame discussions is not really a good development model.
recommended to set
reasonable timeout options there as well, depending on your network
infrastructure and usage/maintenance patterns. Incidentally, iSCSI is
also kernelspace.
) and it works fine for me.
3.0.x was also crashing for me when SSHFS-like mount was used to the
server with gluster mount (and reads/writes were made from the gluster
mount through it).
about config problems with
3.1.4 - 3.2.0 updates, so I figured it would be the easiest for me to
totally remove 3.2.0 and its config and start from scratch with 3.1.4.
be it, as the mount point is accessed pretty heavily
(interesting why 3.1.4 doesn't show this behaviour though).
.
Is it a known issue? I'm seeing it on all gluster clients, so it doesn't
seem to be any isolated issue.
It is a new 3.2.0 installation, not an upgrade.
.
On 09.05.2011 13:08, Tomasz Chmielewski wrote:
On 09.05.2011 12:48, Mohammed Junaid Ahmed wrote:
Hi Tomasz,
Can you attach the logfiles? That will help. When does it happen - does this
happen just after the mount? What is the state of the server processes (are
they busy or idle
, but not
/mnt/gluster/some/file/2.gif
Can you try reading the files you see hanging under strace, and post the
output here?
It might help developers to take a look.
I also suggest opening a bug since it looks like a critical issue.
I'll try to do some more debugging next time I see it.
/lucene/all.index/_i82.cfs, O_RDONLY) = 3
I don't think I will be able to debug it any further, as I can't allow any more
failures on this system.
I'll downgrade to 3.1.4 to see if it behaves any better.
On 06.05.2011 18:17, Whit Blauvelt wrote:
On Fri, May 06, 2011 at 06:06:02PM +0200, Tomasz Chmielewski wrote:
Read access should be fine though (with noatime mount), and
shouldn't break things?
Nice question.
Even if it is safe to only read (seems it should be), does mounting through
fuse
, the client will not be
able to access the data.
Could anyone shed some light on achieving high availability with glusterfs?
On 06.05.2011 22:57, Tomasz Chmielewski wrote:
Assuming we have a distributed replica of two gluster servers:
server1-server2
And several clients (client1, client2, ..., clientN).
If we use the following command line:
mount -t glusterfs server1:/test-volume /mnt/glusterfs
Our mount
On 06.05.2011 23:02, Vikas Gorur wrote:
On May 6, 2011, at 1:57 PM, Tomasz Chmielewski wrote:
If we use the following command line:
mount -t glusterfs server1:/test-volume /mnt/glusterfs
Our mount will die if server1 is offline.
The mount will not die as long as you have at least one
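One hedge against that single point of failure at mount time (once mounted, the client talks to all bricks directly) is a backup volfile server. A sketch using the server1/server2 names from the example; note the option spelling changed across gluster releases (backupvolfile-server in older versions, backup-volfile-servers later), and the commands need a real cluster:

```shell
# If server1 is unreachable at mount time, the client fetches the volume
# file from server2 instead. server1/server2 are the example hosts above.
mount -t glusterfs -o backupvolfile-server=server2 \
    server1:/test-volume /mnt/glusterfs

# the equivalent /etc/fstab line:
# server1:/test-volume  /mnt/glusterfs  glusterfs  backupvolfile-server=server2  0 0
```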
Is it possible to use FS-Cache with glusterfs?
Since it is problematic to work with lots of small files and glusterfs,
I was wondering if FS-Cache could be of any help here (provided that
glusterfs, or perhaps fuse, could use it)?
gluster volume set shared performance.read-ahead on
gluster volume set shared performance.io-cache on
I couldn't find it documented for 3.1.x (where for 3.0.x, it was
sufficiently documented).
) io-cache - readahead - writebehind - quick-read - replicate1
(or any other, similar combination)?
volume quick-read
type performance/quick-read
option cache-timeout 4
option max-file-size 1024000
subvolumes io-cache
end-volume
volume stat-prefetch
type performance/stat-prefetch
subvolumes quick-read
end-volume
On 15.09.2010 00:44, Vikas Gorur wrote:
On Sep 14, 2010, at 1:59 PM, Tomasz Chmielewski wrote:
On 14.09.2010 22:56, Douglas Stanley wrote:
Odd, on my 10.04 box, the version is only 3.0.2. Where'd you get
a 3.0.5 version? I've seen that in debian testing, but not in
ubuntu 10.04 yet.
http
initially did chown www-data:www-data images on one of the servers.
I'm using glusterfs 3.0.5 on Ubuntu 10.04.
, 3.0.5 on Ubuntu)
- fuse (2.7.4 on Debian, 2.8.1 on Ubuntu)
- loads of other userspace...
On 14.09.2010 22:50, Tomasz Chmielewski wrote:
On 14.09.2010 22:43, Douglas Stanley wrote:
Did you do the chown operation to the mounted gluster filesystem, or
to the exported by gluster filesystem on one of your storage bricks?
What I mean is, is /shared/www what is exported in your
On 14.09.2010 22:56, Douglas Stanley wrote:
Odd, on my 10.04 box, the version is only 3.0.2. Where'd you get a
3.0.5 version? I've seen that in debian testing, but not in ubuntu
10.04 yet.
http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/Ubuntu/
subvolumes writebehind
end-volume
also use -
so I'll try to disable it and see if it changes anything.
Am 07.06.2010 14:10, Tomasz Chmielewski wrote:
Am 07.06.2010 13:55, Daniel Maher wrote:
Any issue what can be wrong here? Neither the client nor the servers
produce anything in logs when it happens (I didn't wait for more than 10
minutes though).
What distro? What kernel version? Hardware
Am 07.06.2010 14:27, Daniel Maher wrote:
On 06/07/2010 02:24 PM, Tomasz Chmielewski wrote:
What distro? What kernel version? Hardware specs?
Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.
I see there is a bug entry describing a similar issue:
http
images of running virtual machines?
How resilient can we expect gluster to be when:
1) the host writing to glusterfs crashes,
2) one of glusterfs mirrors crashes.
Anyone with some experience here?
at least one of
glusterfs servers)?
time dd if=/dev/zero of=/bigfile bs=1M count=5000
time dd if=/bigfile of=/dev/null bs=64k
And drop caches between each run with:
echo 3 > /proc/sys/vm/drop_caches
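A scaled-down, self-contained version of that benchmark (tiny sizes so it runs anywhere; for a real measurement, point TARGET at a file on the gluster mount, restore the original bs=1M count=5000, and wrap each dd in time):

```shell
# Sequential write then read, as in the dd commands above, on a small
# scratch file. TARGET is a placeholder path, not from the thread.
TARGET=${TARGET:-/tmp/gluster-bench.bin}

# write 16 MiB; conv=fsync makes dd wait until the data is flushed
dd if=/dev/zero of="$TARGET" bs=1M count=16 conv=fsync

# between write and read, drop caches so the read is not served from
# local RAM (needs root, so commented out here):
# echo 3 > /proc/sys/vm/drop_caches

dd if="$TARGET" of=/dev/null bs=64k
rm -f "$TARGET"
```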
?
Any other ideas?
also consider enabling
compression in the VPN tunnel; this could technically increase your
throughput.
Mark Mielke wrote:
On 10/21/2009 06:42 PM, Tomasz Chmielewski wrote:
Tomasz Chmielewski wrote:
How can I make the glusterfs client re-read its configuration file?
I added some more glusterfs servers and would like the clients to use
them.
How can I do it, without unmounting and mounting
Tomasz Chmielewski wrote:
How can I make the glusterfs client re-read its configuration file?
I added some more glusterfs servers and would like the clients to use them.
How can I do it, without unmounting and mounting everything on clients
from scratch?
Nobody knows?
Should I assume it's
How can I make the glusterfs client re-read its configuration file?
I added some more glusterfs servers and would like the clients to use them.
How can I do it, without unmounting and mounting everything on clients
from scratch?
sac wrote:
Hi Matt,
On the mount point, just an ls -lR will trigger self-heal and the data will
be synced.
Which way will it be synced, assuming you have two AFR servers?