hi guys.
Has anybody managed to run Monero on a Gluster volume?
On my system, the _monero_ daemon starts but errors out
soon afterwards.
many thanks, L.
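For context: monerod keeps its blockchain in an mmap-heavy LMDB
database, and client-side caching on a FUSE mount can upset that.
A sketch of what one might try - the volume name MONERO and the
cache/mmap diagnosis are both assumptions, not confirmed:
$ mount -t glusterfs -o direct-io-mode=enable 10.1.1.100:/MONERO /mnt/monero
$ gluster volume set MONERO performance.write-behind off
$ gluster volume set MONERO performance.read-ahead off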
hi guys.
So, I've managed to make my volume go haywire; here:
-> $ gluster volume heal VMAIL info
Brick 10.1.1.100:/devs/00.GLUSTERs/VMAIL
Status: Connected
Number of entries: 0
Brick 10.1.1.101:/devs/00.GLUSTERs/VMAIL
/dovecot-uidlist
Status: Connected
Number of entries: 1
Brick
Hi guys.
I have a mount with _systemd_ like so:
[Unit]
Description=00-VMsy
After=network.target
After=glusterd.service
Requires=network-online.target
[Mount]
What=10.1.1.100,10.1.1.99,10.1.1.101:/VMsy
Where=/00-VMsy
Type=glusterfs
Options=acl,log-file=/var/log/glusterfs/mount.00-VMsy.log
and
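If the unit carries no [Install] section, it cannot be enabled at
boot on its own. A minimal sketch of the missing pieces, assuming
the usual multi-user target (and noting that systemd requires the
unit file to be named after the mount point):
[Install]
WantedBy=multi-user.target
$ systemd-escape --path /00-VMsy   # prints the name the unit file must carry
00\x2dVMsy
$ systemctl enable '00\x2dVMsy.mount'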
Hi guys.
I have a mount on three nodes, which themselves are bricks.
If the mount unit is set to mount like this:
[Mount]
What=arbiter.wire,brickA.wire,brickB.wire:/VMs
then the mount fails at boot on the 'arbiter' node - it
mounts okay on the other nodes.
It suffices to put 'arbiter' anywhere but first in
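One way to sidestep the ordering altogether is to name a single
volfile server in What= and pass the rest via the
backup-volfile-servers mount option. A sketch only, reusing the
hosts above (Where= is an assumption):
[Mount]
What=brickB.wire:/VMs
Where=/VMs
Type=glusterfs
Options=backup-volfile-servers=arbiter.wire:brickA.wire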
Hi guys.
I sometimes - too often - see my Gluster "fail" to start.
Systemd thinks and says 'glusterd' started, but in reality...
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/etc/systemd/system/glusterd.service;
enabled; preset: enabled)
Drop-In:
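A sketch of a stopgap, assuming glusterd forks successfully and
only dies shortly afterwards: a systemd drop-in that restarts it
on failure (the drop-in path and file name are just an example):
# /etc/systemd/system/glusterd.service.d/restart.conf
[Service]
Restart=on-failure
RestartSec=5
$ systemctl daemon-reload
$ systemctl restart glusterd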
Hi guys.
I'm confused - I cannot remember: was it always the case
that volumes were not available to clients outside of a
volume's subnet, or is this new, or...
is something just not working here for me? I wonder.
eg.
...
Bricks:
Brick1: 10.1.0.100:/devs/00.GLUSTERs/VMs
Brick2:
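Gluster does not restrict clients by subnet out of the box; the
usual place such a restriction lives is the auth.allow/auth.reject
volume options. A quick check, sketched with the volume name from
above (the option names are real, the example addresses are not):
$ gluster volume get VMs auth.allow
$ gluster volume get VMs auth.reject
$ gluster volume set VMs auth.allow '10.1.0.*,192.168.5.20'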
Hi guys.
'libvirtd' wants to start a VM off a GlusterFS vol and it
fails with:
...
2021-07-05 17:57:56.115+: 1049549: warning :
virSecurityDACSetOwnership:828 : Unable to restore label on
'win10Pro.qcow2'. XATTRs might have been left in
inconsistent state.
internal error: child reported
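That warning comes from libvirt's DAC security driver trying to
(re)set ownership of the image. A sketch of knobs one might look
at - the qemu.conf keys exist, but whether they are the cause
here is a guess, and VOL is a placeholder:
# /etc/libvirt/qemu.conf
dynamic_ownership = 1
user = "qemu"
group = "qemu"
# and on the gluster side, as is often done for qemu images:
$ gluster volume set VOL storage.owner-uid 107
$ gluster volume set VOL storage.owner-gid 107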
Hi guys.
I'm trying TLS on my gluster - well, I'd like to think that
I have it done, but...
If I set the volume to 'client.ssl on' then stuff breaks -
autofs cannot mount, libvirtd cannot get to the volume via
libgfapi.
Volume is as:
-> $ gluster volume info VMs | sort
auth.ssl-allow:
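With client.ssl on, every client path - the FUSE/autofs mounts and
libgfapi consumers like libvirt alike - needs the TLS material in
place before it can connect. A checklist sketch using the standard
file locations from the Gluster TLS docs (the CN is an example):
$ openssl genrsa -out /etc/ssl/glusterfs.key 2048
$ openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj '/CN=gluster-node' -out /etc/ssl/glusterfs.pem
# /etc/ssl/glusterfs.ca must hold everyone's certs concatenated,
# identical on all nodes and clients; and for TLS on the mgmt path:
$ touch /var/lib/glusterd/secure-access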
Hi guys.
I'm trying to set encryption for my cluster and I hit these:
[2021-03-31 20:06:23.193399] I
[socket.c:4252:ssl_setup_connection_params]
0-tcp.BOBs-server: SSL support for MGMT is NOT enabled IO
path is ENABLED certificate depth is 1 for peer 10.1.1.101:49098
[2021-03-31
On 10/03/2021 04:10, Ravishankar N wrote:
On 09/03/21 11:43 pm, lejeczek wrote:
Hi guys,
I have a simple volume which seems to suffer from
some problems (maybe all volumes in the cluster do, too).
...
[2021-03-09 17:59:08.195634] E [MSGID: 114058]
[client-handshake.c:1455
Hi guys,
I have a simple volume which seems to suffer from some
problems (maybe all volumes in the cluster do, too).
...
[2021-03-09 17:59:08.195634] E [MSGID: 114058]
[client-handshake.c:1455:client_query_portmap_cbk]
0-USER-HOME-ta-2: failed to get the port number for remote
subvolume.
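client_query_portmap_cbk failing usually means glusterd had no
port to hand out because the brick process is down, or a firewall
eats the brick port. A couple of checks one might run on the brick
host - a sketch, with the volume name guessed from the log prefix:
$ gluster volume status USER-HOME-ta     # is the brick Online, on which port?
$ ss -tlnp | grep glusterfsd             # is a brick process actually listening?
$ gluster volume start USER-HOME-ta force    # respawns dead brick processes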
Hi guys
I do:
-> $ gluster peer status
Number of Peers: 5
...
Uuid: 8a5b91ed-f0ab-4344-ba6f-d37408e5f815
State: Peer in Cluster (Disconnected)
Other names:
10.0.0.8
...
The above is from the peer which sees itself as
disconnected while seeing the other peers as connected okay.
Also, the remaining nodes see
Hi guys
Would anybody know what to make of this here?
-> $ gluster volume heal WORK info
Brick 10.0.0.8:/00.STORAGE/9/0-GLUSTER-WORK
Status: Connected
Number of entries: 0
Brick 10.0.0.7:/00.STORAGE/9/0-GLUSTER-WORK
Status: Connected
Number of entries: 0
Brick
Hi guys,
This, I imagine, cannot be the first time it has been
thought of - why is what is locked in/via libgfapi not
carried over to FUSE?
In case my choice of words is poor here, a basic example:
- have your libvirt access a gluster vol and run your VMs
off it then fuse-mount that same vol and
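A minimal way to probe whether a lock taken through one access
path is observed through another - just a sketch, with
hypothetical mount paths:
# hold a POSIX lock on a file via the first mount for a minute:
$ flock /fuse-mnt/VOL/lockprobe -c 'sleep 60' &
# then, through the other path, try a non-blocking lock on the same file:
$ flock -n /other-mnt/VOL/lockprobe -c 'echo got it' || echo 'held elsewhere'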
hi guys,
where should those of us who run gluster from (via) the
EPEL repo go to report bugs?
many thanks, L.
hi guys
Would you know if it's possible to format gluster cmd output?
What frustrates me personally is the "forced" line wrapping,
for example in:
$ gluster volume status
many thanks, L.
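There is no column-width knob that I know of, but the CLI can emit
XML, which can then be reflowed however one likes - for example
(xmllint is from libxml2; the XPath is sketched from memory of the
schema, so treat it as an assumption):
$ gluster volume status --xml | xmllint --format -
$ gluster volume status --xml | xmllint --xpath '//node/hostname/text()' -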
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge:
uide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
>
> It's recommended to use either replica 3 or an arbiter.
>
> Regards,
>
> On Tue, Jun 30, 2020 at 1:14 PM lejeczek
> mailto:pelj...@yahoo.co.uk>> wrote:
>
> Hi everybody.
>
> I have tw
Hi everybody.
I have two peers in the cluster and a 2-replica volume which
would seem okay if it were not for one weird bit - when a
peer reboots, then on that peer afterwards I see:
$ gluster volume status USERs
Status of volume:
hi Guys
I'm seeing "Gfid mismatch detected" in the logs but no split
brain indicated (4-way replica)
Brick
swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.USER-HOME
Status: Connected
Total Number of entries: 22
Number of entries in heal pending: 22
Number of entries in split-brain: 0
Number
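For gfid mismatches that never surface as split-brain, one
existing knob is cluster.favorite-child-policy, which lets the
self-heal daemon pick a winner on its own. A sketch only - it
discards the losing copies, so it is not to be applied blindly,
and the volume name is guessed from the brick path:
$ gluster volume set USER-HOME cluster.favorite-child-policy mtime
$ gluster volume heal USER-HOME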
On 15/12/2018 13:46, Diego Remolina wrote:
> Matt,
>
> Can you test the updated samba packages that the CentOS
> team has built for FasTrack?
>
> A NOTE has been added to this issue.
>
> --
> (0033351) pgreco (developer) -
On 08/01/2020 11:28, Ravishankar N wrote:
>
> On 08/01/20 3:55 pm, lejeczek wrote:
>> On 08/01/2020 02:08, Ravishankar N wrote:
>>> On 07/01/20 8:07 pm, lejeczek wrote:
>>>> Which process should I be gdbing, selfheal's?
>>>>
>>> No the b
On 08/01/2020 02:08, Ravishankar N wrote:
>
> On 07/01/20 8:07 pm, lejeczek wrote:
>> Which process should I be gdbing, selfheal's?
>>
> No the brick process on one of the nodes where file is missing.
>
Okay, would you mind showing the exec/cmd for the debug? I want to be sure
On 07/01/2020 13:11, Ravishankar N wrote:
>
> On 07/01/20 4:38 pm, lejeczek wrote:
>>
>> 3. These files which the brick/replica shows appear to exist on only
>> that very brick/replica:
> Right, so the mknods are failing on the other 2 bricks (as seen from
> th
On 07/01/2020 07:08, Ravishankar N wrote:
>
>
> On 06/01/20 8:12 pm, lejeczek wrote:
>> And when I start this volume, in log on the brick which shows gfids:
> I assume these messages are from the self-heal daemon's log
> (glustershd.log). Correct me if I am mistaken.
>>
hi everyone.
I see this:
$ gluster volume heal QEMU_VMs info
Brick swir-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
Status: Connected
Number of entries: 0
Brick rider-ring8:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER.QEMU_VMs
/default.sock
/private/sbus-dp_disk.812
/pam
/glusterd.info file and figure out
if you have a file with this uuid at /var/lib/glusterd/peers/*.
If you find any such file, please delete it and restart glusterd
on that node.
On Fri, Oct 11, 2019 at 3:15
PM lejeczek <pelj...@yahoo.co.uk>
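The advice above, rendered as shell - run on the affected node
only; the UUID and file name are of course placeholders:
$ cat /var/lib/glusterd/glusterd.info              # this node's own UUID
$ grep -l <that-uuid> /var/lib/glusterd/peers/*    # a peers file carrying it is stale
$ rm /var/lib/glusterd/peers/<stale-file>
$ systemctl restart glusterd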
hi guys,
as per the subject.
The only thing I'd like to mention first is that Samba runs
on that peer/node. The other two peers do not show this in
their logs. In the gluster log for the volume I get plenty of:
...
t)
[2019-10-11 09:40:40.768647] E [socket.c:3498:socket_connect]
0-glusterfs:
On 30/01/2019 20:26, Artem Russakovskii wrote:
> I found a similar issue
> here: https://bugzilla.redhat.com/show_bug.cgi?id=1313567. There's a
> comment from 3 days ago from someone else with 5.3 who started seeing
> the spam.
>
> Here's the command that repeats over and over:
> [2019-01-30
hi everyone,
are those options required to be 'on' if the cluster does not use geo-replication?
> geo-replication.indexing
on
> geo-replication.indexing
on
> geo-replication.ignore-pid-check
on
hi everyone,
I do not suppose it is a matter of tweaking Gluster (or
Samba), as these problems, I think, started appearing after
an upgrade of Samba (samba-4.9.1-6.el7.x86_64) and/or
Gluster (glusterfs-6.5-1.el7.x86_64).
It manifests in that Samba operations a user performs are
incredibly slow.
hi everyone
I've been running glusterfs 6 for a while, and either I did
not notice it before or it just started to pop up:
[2019-10-07 09:17:37.071409] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8faa)
[0x7fd6204d3faa]
hi guys,
after a reboot the Ganesha exports are not there. It suffices to do:
$ systemctl restart nfs-ganesha - and all is good again.
Would you have any ideas why?
I'm on Centos 7.6 with nfs-ganesha-gluster-2.7.6-1.el7.x86_64;
glusterfs-server-6.4-1.el7.x86_64.
many thanks, L.
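Since a plain restart fixes it, this smells like boot ordering -
nfs-ganesha starting before the Gluster volume is reachable. A
sketch of a systemd drop-in along those lines (an idea to test,
not a verified fix):
# /etc/systemd/system/nfs-ganesha.service.d/order.conf
[Unit]
Wants=network-online.target
After=network-online.target glusterd.service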
hi guys,
is it possible to set an interface, either globally or per
volume, through which gluster would be available? And if
yes, then how?
many thanks, L.
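For the management daemon there is the
transport.socket.bind-address option in glusterd's own volfile - a
sketch with an example address; I am not aware of a per-volume
interface setting:
# /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option transport.socket.bind-address 10.1.1.100
    # keep the existing options that are already in the file
end-volume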
hi everyone
I'm hoping devel might be reading this, but if not - anybody tried
glusterfs off PyPy?
If yes and it works then what was/is the experience?
many thanks, L.
On 09/11/2018 15:08, Kaleb S. KEITHLEY wrote:
On 11/9/18 8:12 AM, lejeczek wrote:
hi guys
I presume that because 4.1.x has been in the EPEL repo it is
confirmed and validated to work 100% with a default samba installation.
GlusterFS — any version — is _not_ in EPEL.
However it is in the CentOS Storage
hi guys
I presume that because 4.1.x has been in the EPEL repo it is
confirmed and validated to work 100% with a default samba
installation.
But I'd prefer to hear you guys say you ACTUALLY have your
samba working 100% with 4.1.x. Anybody?
many thanks, L.
hi guys
can we mix both versions? My cluster is on 3.12.x but I'd
like to add another peer on 4.1.x. Will that work?
And if yes, would it be a good path to migrate everything to
4.1.x by adding/replacing nodes/peers with 4.1.x ones?
many thanks, L.
hi guys,
I have a Samba (CentOS 7.5) which does not pick up gluster's
quota. More specifically, it shows 0 bytes free even if I
increase the quotas.
Where in gluster I could start troubleshooting, if possible?
many thanks, L.
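One option worth checking is features.quota-deem-statfs, which
makes the quota limits show up in statfs/df - and thus in what
Samba reports as free space. The option is real; that it is the
culprit here is only a guess (VOL is a placeholder):
$ gluster volume get VOL features.quota-deem-statfs
$ gluster volume set VOL features.quota-deem-statfs on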
hi guys
something is wrong with my gluster: it says there are files
healing but it does not seem like it actually heals anything.
Here is - apologies for the biggish snippet - a bit of log
from one volume. I cannot decode it, but I have a feeling
that an expert/devel can spot something not completely okay.
it's such a shame devel have not improved these bits yet.
Would be nice to have hooks managed via the CLI.
You can disable it by removing the hook script that exists at
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
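A gentler variant than deleting it is clearing the execute bit,
which should stop glusterd from running the hook yet keeps it
around to re-enable later (assuming hooks are only run when
executable):
$ chmod -x /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh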
- Original Message -
From: lejeczek
hi guys
is that configurable somewhere as a global setting?
many thanks, L.
hi guys
I had a two-replica volume, added a third brick, and now I
see hundreds of thousands of files to heal - interestingly,
though, only on the two bricks that already constituted the
volume.
Prior to the expansion the volume was, according to gluster,
okay, and when I added the third brick it immediately started
any mechanics for notifying clients
of changes since most of the logic is in the client, as I understand it.
On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:
hi guys will we have gluster with inotify? some
point / never? thanks, L
hi guys
will we have gluster with inotify? some point / never?
thanks, L.
On 01/05/18 23:59, Vijay Bellur wrote:
On Tue, May 1, 2018 at 5:46 AM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder outside of autofs mounted gluster vol,
hi guys
I have a simple case of:
$ setfacl -b
not working!
If I copy a folder out of the autofs-mounted gluster vol to
a regular fs, removing the ACL works as expected.
Inside the mounted gluster vol I seem to be able to
modify/remove ACLs for users, groups and masks, but that one
simple, important
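One thing to rule out: ACLs only behave end-to-end when the client
mount carries the acl option, and with autofs that has to go into
the map entry. A sketch of what such an entry might look like
(paths and names hypothetical):
# /etc/auto.misc
VOL  -fstype=glusterfs,acl  server1:/VOL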
hi guys
is it possible to configure things such as the log dir, or
is it fixed at compile time?
I see the daemon fails to start:
...
[2018-03-09 12:32:57.341142] I [MSGID: 100030]
[glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started
running /usr/sbin/glusterd version 3.13.1 (args:
hi everyone
do you guys know why incrond does not catch/see what happens
inside autofs-mounted gluster volumes?
I rsync into such a mount point and incron is oblivious -
from its perspective nothing happened.
many thanks, L.
hi everyone
I understand this should be trivial.
I'm amazed by the lack of a tool in the gluster toolkit
which would swap a peer's IP when that is the only change in
the cluster. The peer naturally serves as a brick in volumes.
Or am I oblivious to the fact that the gluster command line
actually does this?
What do
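Lacking a one-shot "swap IP" command, the sequence that gets used
is probe the new address, then replace-brick per volume - sketched
with placeholders; note that replace-brick treats the target as a
fresh brick and heals data into it:
$ gluster peer probe NEW_IP
$ gluster volume replace-brick VOL OLD_IP:/path/brick \
    NEW_IP:/path/brick commit force
$ gluster peer detach OLD_IP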
sorry guys to spam a bit - I hope someone from redhat could
check whether - freeipa-us...@redhat.com - is up & ok?
I've been a subscriber for a couple of years but now,
suddenly(?) I cannot mail there, I get:
"
Sorry, we were unable to deliver your message to the
following address.
hi everyone
I think geo-repl needs ssh and keys in order to work, but
does anything else need them? Self-heal perhaps?
The reason I ask is that I had some old keys gluster put in
when I had geo-repl, which I removed, and self-heal has now
gone rogue - I cannot get statistics:
..
Gathering crawl statistics on volume
hi people
I wonder if anybody experiences any problems with vols in
replica mode that run across IPoIB links while libvirt
stores qcow images on such a volume?
I wonder if maybe devel could confirm it should just work,
and then I should blame the hardware/InfiniBand.
I have a direct IPoIB link
hi everyone
Assume a situation where the network segment changes - in
the simplest case, one gives a box (a brick) a new, faster
net interface. So afterwards the boxes have two NICs, and
then the bricks get introduced to them via gluster probe
$_newIPs.
Ideally @ a developer - how does gluster handle
On 28/09/17 17:05, lejeczek wrote:
On 13/09/17 20:47, Ben Werthmann wrote:
These symptoms appear to be the same as I've recorded in
this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
<atin.mukhe
a...@redhat.com <mailto:gya...@redhat.com>> wrote:
Please send me the logs as well i.e glusterd.logs
and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>>
wrote:
I emailed the logs earlier to just you.
On 13/09/17 11:58, Gaurav Yadav wrote:
Please send me the logs as well i.e glusterd.logs and
cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
On 13/09/17 06:21,
On 13/09/17 06:21, Gaurav Yadav wrote:
Please provide the output of gluster volume info, gluster
volume status and gluster peer status.
Apart from above info, please provide glusterd logs,
cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
<pelj...@yahoo.co
On 12/09/17 12:59, Niels de Vos wrote:
On Tue, Sep 12, 2017 at 10:01:14AM +0100, lejeczek wrote:
@devel
hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make sure it's
the package maintainer(s) who are responsible for incomplete man
@devel
hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make
sure it's the package maintainer(s) who are responsible for
incomplete man pages.
Man pages are neglected by authors, too often, and man is,
should always be "the
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is one brick failing in one vol;
when it happens it's always the same vol on the same brick.
The command: gluster vol status $vol - would show the brick
not online. Restarting glusterd with systemctl does
hi guys/gals
I realize that this question must have been asked before; I
googled and found some posts on the web on how to
tweak/tune gluster, however...
What I hope is that some experts and/or devel could write a
bit more, maybe compose a doc on - how to investigate and
troubleshoot gluster's
.. or just a way to make samba, when it shares via the
glusterfs API, show a submount (like an fs bind) within a
share/vol - how, if possible at all?
I guess it would have to be some sort of crossing from one
vol into another vol's dir, or something, hmm...
many thanks, L.
healed on volume QEMU-VMs
has been unsuccessful on bricks that are down. Please check
if all brick processes are running.
On 04/09/17 11:47, Atin Mukherjee wrote:
Please provide the output of gluster volume info, gluster
volume status and gluster peer status.
On Mon, Sep 4, 2017 at
hi all
this:
$ gluster vol heal $_vol info
outputs OK and the exit code is 0.
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect -
r/lib/glusterd/vols//info" and remove
"tier-enabled=0".
3.Restart glusterd services
4.Peer probe again.
Thanks
Gaurav
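The same workaround as shell, applying the info-file edit to every
volume on the node - a sketch of the quoted steps only; keep the
.bak backups it leaves behind:
$ sed -i.bak '/^tier-enabled=0$/d' /var/lib/glusterd/vols/*/info
$ systemctl restart glusterd
$ gluster peer probe <the-other-node>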
On Thu, Aug 31, 2017 at 3:37 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
attached the lot as per your request.
W
t
cksum, causing state in "State: Peer Rejected (Connected)".
This inconsistency arises due to the upgrade you did.
Workaround:
1.Go to node 10.5.6.17
2.Open info file from
"/var/lib/glusterd/vols//info" and remove
"tier-enabled=0".
3.Restart glusterd services
4.Pee
On 30/08/17 07:18, Gaurav Yadav wrote:
Could you please send me "info" file which is placed in
"/var/lib/glusterd/vols/" directory from all the
nodes along with
glusterd.logs and command-history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek
<pelj...@y
I wonder - was it the upgrade from 3.8 to 3.10 that caused
this problem?
Can these files be deleted? And if yes, would this be enough?
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi fellas,
same old
hi fellas,
same old same
in log of the probing peer I see:
...
2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
hi
I see:
..
[2017-08-29 12:53:41.708756] W [MSGID: 101095]
[xlator.c:162:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/3.10.5/xlator/features/ganesha.so:
cannot open shared object file: No such file or directory
..
and I wonder... because nothing provides that lib (in terms of
rpm
hi there
I run off 3.10.5, have 3 peers with vols in replication.
Each time I copy some data onto a client (which is a peer
too) I see something like this:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs
has been successful
Brick
hi fellas
I wonder if gluster with a peer connected via a VPN tunnel
is something you would use for production?
@devel - is such a scenario even a valid (approved) one?
many thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
what I've just noticed - the brick in question does show up as:
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-GROUP-WORK
N/A   N/A   N   N/A
for one particular vol. Status for the other vols (so far)
shows it OK.
Would this be a volume problem or a brick problem,
also, now after the upgrade gluster shows, on some vols, a
long list in heal info, and these amongst them:
Brick
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-USER-HOME
Status: Connected
what are these entries?
On 02/08/17 02:19, Atin Mukherjee wrote:
This means shd client is not
On 02/08/17 02:22, Atin Mukherjee wrote:
Are you referring to the other names in the peer status
output? If so, a peerinfo entry having other names populated
means it might have multiple n/w interfaces, or the reverse
address resolution is picking this name. But why are you
worried about the
But I had not killed anything - unless the system did, for
some reason and silently, but I'd not think so.
It seems that one brick is particularly ill about it all.
I'd have to restart it, but mostly this would not do, and
I'd actually reboot the system; then for a short while it
would be OK, only soon
.. is this the default/desired behaviour?
And is this behaviour configurable/controllable?
I'm thinking - it would be nice not to have the whole vol go
read-only (three peers in the cluster) but at the same time
have gluster alert/highlight the problem to the user/admin.
ver. 3.10.3
thanks.
L.
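The read-only flip is client quorum doing its job; it is steerable
via the cluster.quorum-* volume options. The option names are
real, the values below are only an illustration, and loosening
quorum trades availability for split-brain risk:
$ gluster volume get VOL cluster.quorum-type
$ gluster volume set VOL cluster.quorum-type fixed
$ gluster volume set VOL cluster.quorum-count 1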
On 27/07/17 14:13, lejeczek wrote:
... or in other words - can samba break (on CentOS 7.3) if
one goes with a gluster version too high?
hi fellas.
I wonder because I see:
smbd[4088153]: Unknown gluster ACL version: -847736808
smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
../source3
... or in other words - can samba break (on CentOS 7.3) if
one goes with a gluster version too high?
hi fellas.
I wonder because I see:
smbd[4088153]: Unknown gluster ACL version: -847736808
smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
on number of bricks there might
be too many brick ops involved. This is the reason we
introduced the --timeout option in the CLI, which can be
used to set a larger timeout value. However, this fix is
available from release-3.9 onwards.
On Mon, Jul 24, 2017 at 3:54 PM, lejeczek
<pelj...@yahoo.co.uk <
hi fellas
would you know what the problem could be when 'vol status
detail' always times out?
After I did the above I had to restart glusterd on the peer
on which the command was issued.
I run 3.8.14. Everything else seems to work OK.
many thanks
L.
dear fellas
I've a pool; it's all one subnet. Now I'd like to add a
peer which is on a subnet which will not be
available/accessible to all the peers:
a VolV
peer X 10.0.0.1 <-> 10.0.0.2 peer Y 192.168.0.2 <-> peer Z
192.168.0.3 # so here 192.168.0.3 and 10.0.0.1 do not see
each other.
hi everyone,
this, I'd guess, is tunable somewhere? Yet I cannot get there.
I see:
[2017-03-16 16:55:32.264981] W
[fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse: 426:
ACCESS() /me => -1 (Permission denied)
[2017-03-16 16:55:32.265192] W
[fuse-bridge.c:1291:fuse_err_cbk] 0-glusterfs-fuse:
hi
there is a vol with:
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
storage.owner-uid: 107
storage.owner-gid: 107
As you see it is for libvirt/qemu, which is all working
fine; I cannot think at this
or socket exhaustion.
It was something to do with the kernel version: I run CentOS
off kernel-ml, and v4.9.5 was where this message persisted;
now with 4.9.6 it's gone.
I wonder if the gluster dev guys test the CentOS release
against ml kernels as well.
thanks,
L.
On February 17, 2017 7:47:23 AM PST, lejeczek
hi everyone,
I've been browsing the list's messages and it seems to me
that users struggle - I do.
I do what I thought was simple; I follow the official docs.
As root, I always do:
]$ gluster system:: execute gsec_create
]$ gluster volume geo-replication WORK
10.5.6.32::WORK-Replica create push-pem
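For reference, the steps that normally follow the create in those
same docs, reusing the names from above:
$ gluster volume geo-replication WORK 10.5.6.32::WORK-Replica start
$ gluster volume geo-replication WORK 10.5.6.32::WORK-Replica status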
hi everyone
I have a volume (there are more that should be working, but
they are scheduled for tonight, so I will know later) that
rdiff-backup fails to back up.
The error rdiff-backup spits out I am seeing for the first
time. There is only one thing I can think of which I could
link with this problem -
hi everyone
I see that something like this in the mount:
... 127.0.0.1,10.5.6.100 makes mounts available as long as
one server is up & running.
Is that a rule of thumb, and does it mean that when both vol
servers are available the mount will be talking ONLY to the
first one, unless it goes down for
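As I understand it, the extra addresses only matter while fetching
the volfile at mount time; once mounted, the client talks to all
bricks directly. The fstab spelling of the same idea, as a sketch
with placeholder names:
# /etc/fstab
127.0.0.1:/VOL /mnt/VOL glusterfs defaults,_netdev,backup-volfile-servers=10.5.6.100 0 0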
On 08/02/17 10:06, Kotresh Hiremath Ravishankar wrote:
Hi lejeczek,
Try stop force.
gluster vol geo-rep :: stop force
I think something broke:
[2017-02-12 18:30:09.970683] E
[resource(/__.aLocalStorages/3/0-GLUSTERs/3GLUSTER--DATA):234:errlog]
Popen: command &quo
hi everyone
my gluster insists that:
~]$ gluster vol quota DATA list
Path    Hard-limit  Soft-limit  Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
On 09/02/17 06:07, Nag Pavan Chilakam wrote:
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: "Nag Pavan Chilakam" <nchil...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Wednesday, 8 February, 2017 7:15:29 PM
Subject: Re: [Gluster-u
many thx.
L.
thanks,
nagpavan
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: "Nag Pavan Chilakam" <nchil...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, 7 February, 2017 10:53:07 PM
Subject: Re: [Gluster-users] Input/output e
iles directly from bricks?
many thanks,
L.
regards,
nag pavan
- Original Message -
From: "lejeczek"<pelj...@yahoo.co.uk>
To:gluster-users@gluster.org
Sent: Tuesday, 7 February, 2017 2:00:51 AM
Subject: [Gluster-users] Input/output error - would not heal
hi all
I'm hittin
-file setup on the slave nodes.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create
push-pem force
4. Start geo-rep
Thanks and Regards,
Kotresh H R
- Original Message -
From: "lejeczek" <pelj...@yahoo.co.uk>
To: gluster-users@gluster.org
Sent: Th
dear all
should gluster update geo-repl when a volume changes?
e.g. bricks are added or taken away.
The reason I'm asking is because it does not seem like
gluster is doing it on my systems.
Well, I see gluster removed a node from geo-repl - the brick
that I removed.
But I added a brick to a vol and it's
On 01/02/17 19:30, lejeczek wrote:
On 01/02/17 14:44, Atin Mukherjee wrote:
I think you have hit
https://bugzilla.redhat.com/show_bug.cgi?id=1406411 which
has been fixed in mainline and will be available in
release-3.10 which is slated for next month.
To prove you have hit the same
don't
see those errors any more.
Should I now be looking at something particular more closely?
b.w.
L.
On Wed, Feb 1, 2017 at 7:49 PM, lejeczek
<pelj...@yahoo.co.uk <mailto:pelj...@yahoo.co.uk>> wrote:
hi,
I have a four peers gluster and one is failing, we
hi,
I have a four-peer gluster and one is failing - well, kind of...
If on a working peer I do:
$ gluster volume add-brick QEMU-VMs replica 3
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
force
volume add-brick: failed: Commit failed on whale.priv Please
check log file for