-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable
Network: 1 Gbit/s
Filesystem: XFS
Best Regards,
Strahil Nikolov
oVirt and Gluster dev teams.
Best Regards,
Strahil Nikolov
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
t2 are on /dev/gluster_vg_ssd/gluster_lv_engine
, while arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine.
Is that the issue? Should I rename my brick's VG? If so, why is there no
mention of it in the documentation?
Best Regards,
Strahil Nikolov
0.16 1.58
As you can see - all bricks are thin LV and space is not the issue.
Can someone hint me how to enable debug logging, so the gluster logs can show
the reason for that pre-check failure?
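Gluster exposes per-volume log-level options for this; a minimal sketch (the volume name "myvol" is an assumption) would be:

```shell
# Raise brick- and client-side log levels to DEBUG for one volume
gluster volume set myvol diagnostics.brick-log-level DEBUG
gluster volume set myvol diagnostics.client-log-level DEBUG
# DEBUG is very chatty - remember to set the levels back to INFO afterwards
```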
Best Regards,
Strahil Nikolov
On Wednesday, April 10, 2019 at 9:05:15 GMT-4, Rafi Kavungal Chun
I hope this is the last update on the issue -> opened a bug
https://bugzilla.redhat.com/show_bug.cgi?id=1699309
Best regards,
Strahil Nikolov
On Friday, April 12, 2019 at 7:32:41 GMT-4, Strahil Nikolov
wrote:
Hi All,
I have tested gluster snapshot without systemd.automo
reads: off
cluster.enable-shared-storage: enable
Are any issues expected when downgrading the version?
Best Regards,
Strahil Nikolov
On Monday, April 22, 2019 at 0:26:51 GMT-4, Strahil
wrote:
Hello Community,
I have been left with the impression that FUSE mounts will read from both
floating IPs and taking care of the
NFS locks, so no disruption will be felt by the clients.
Still, this will be a lot of work to achieve.
Best Regards,
Strahil Nikolov
On Apr 30, 2019 15:19, Jim Kinney wrote:
>
> +1!
> I'm using nfs-ganesha in my next upgrade so my client s
It seems that I got confused. So you see the files on the bricks (servers),
but not when you mount glusterfs on the clients?
If so, this is not the sharding feature, as it works the opposite way.
Best Regards,
Strahil Nikolov
On Thursday, May 16, 2019 at 0:35:04 GMT+3, Paul van der
you can keep the
number of servers the same... still the server bandwidth will be a limit at
some point.
I'm not sure how other SDS deal with such elasticity. I guess many users on
the list will hate me for saying this, but have you checked CEPH for your
needs?
Best Regards,
Strahil Nikolov
Gluster Docs
Project documentation for Gluster Filesystem
Thanks in advance for your response.
Best Regards,
Strahil Nikolov
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314
NA/EMEA Schedule -
Every
It seems that whenever I reboot a gluster node, I get this problem - so it's
not an arbiter issue. Obviously there is something wrong with v6.6, as I never
had such issues with v6.5.
Any ideas where I should start?
Best Regards,
Strahil Nikolov
On Wednesday, November 13, 2019 at 22:
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
[Install]
RequiredBy=shutdown.target
Of course systemd has to be reloaded :)
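For context, a complete unit of that shape might look like the following sketch (the description is mine, and the unit name in the enable command is a hypothetical choice; the script path and [Install] section come from the snippet above):

```ini
[Unit]
Description=Stop all Gluster processes cleanly at shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
RequiredBy=shutdown.target
```

After saving it (e.g. as /etc/systemd/system/gluster-stop-all.service), reload and enable it: `systemctl daemon-reload && systemctl enable gluster-stop-all.service`.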
Best Regards,
Strahil Nikolov
On Wednesday, November 27, 2019 at 8:07:52 GMT-5, Sankarshan Mukhopadhyay
wrote:
On Wed, Nov 27, 2019 at 6:10 PM Rav
mance.
Best Regards,
Strahil Nikolov
On Thursday, December 19, 2019 at 02:28:55 GMT+2, David Cunningham
wrote:
Hi Raghavendra and Strahil,
We are using GFS version 5.6-1.el7 from the CentOS repository. Unfortunately we
can't modify the application and it expects to read
a sysctl with following
>> parameters.
>> >
>> > net.ipv6.conf.all.disable_ipv6 = 1
>> > net.ipv6.conf.default.disable_ipv6 = 1
>> >
>> > That did not help.
>> >
>> > Volumes are configured with inet.
>> >
>> > sudo glu
ompatibility mode 4.2 and there were 2 older VM's which had snapshots
>from prior versions, while the leaf was in compatibility level 4.2. Note:
>the backup was taken on the engine running 4.3.
>
>Thanks, Olaf
>
>
>
>On Tue, Jan 28, 2020 at 17:31, Strahil Nikolov wrote:
Hi Ravi,
This is the third time an oVirt user (one of them being me, and I think my
email is in the list) has reported such an issue.
We need a thorough investigation, as this keeps recurring.
Best Regards,
Strahil Nikolov
Hello Community,
I am experiencing the issue with the ACL again, and none of the previously
stated fixes are helping out.
Bug report -> https://bugzilla.redhat.com/show_bug.cgi?id=1797099
Any ideas would be helpful.
Best Regards,
Strahil Nikolov
e bug report, some other things should be different.
>>
>>
>> Greetings,
>>
>> Paolo
>>
>> On 06/02/20 23:30, Christian Reiss wrote:
>>> Hey,
>>>
>>> I hit this bug, too. With disastrous results.
>>> I second this po
What version of gluster are you using?
In my case only a downgrade restored the operation of the cluster, so you
should consider that as an option (last, but still an option).
You can try to run a find against the FUSE mount: 'find /path/to/fuse -exec
setfacl -m u:root:rw {} \;'
Ma
On February 10, 2020 5:32:29 PM GMT+02:00, Matthias Schniedermeyer
wrote:
>On 10.02.20 16:21, Strahil Nikolov wrote:
>> On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer
> wrote:
>>> Hi
>>>
>>>
>>> I would describe our basic use
an 2 weeks where 2k+ machines were read-only,
before the vendor provided a new patch), so the issues in Gluster are nothing
new and we should not forget that Gluster is free (and doesn't cost millions
like some arrays).
The only mitigation is to thoroughly test each patch on a cluster th
ng%20Workload/
This information will allow more experienced administrators and the developers
to identify any pattern that could cause the symptoms.
Tuning Gluster is one of the hardest topics, so you should prepare yourself
for a lot of tests until you reach the optimal settings for your
>> Met vriendelijke groet, With kind regards,
>>
>>
didn't have the
necessary data.
Another way to migrate the data is to:
1. Add the new disks on the old srv1,2,3
2. Add the new disks to the VG
3. pvmove all LVs to the new disks (I prefer to use the '--atomic' option)
4. vgreduce with the old disks
5. pvremove the old disks
6. The
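As a sketch, steps 1-5 map onto the following LVM commands (the VG and device names are illustrative assumptions; run per server):

```shell
# Assumed names: VG "gluster_vg", old disk /dev/sdb, new disk /dev/sdc
pvcreate /dev/sdc                  # 1. prepare the newly added disk
vgextend gluster_vg /dev/sdc       # 2. add the new disk to the VG
pvmove --atomic /dev/sdb /dev/sdc  # 3. move all LV extents off the old disk
vgreduce gluster_vg /dev/sdb       # 4. remove the old disk from the VG
pvremove /dev/sdb                  # 5. wipe the PV label from the old disk
```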
>>> node’s authorized_keys file, so that anyone gaining access using this key
>>> can access only the gsyncd command.
>>>
>>> ```
>>> command=gsyncd ssh-key….
>>> ```
>>>
>>>
>>>
>>> Thanks for your help.
>
Hi Felix,
can you test (on a non-prod system) the latest minor version of gluster v6?
Best Regards,
Strahil Nikolov
On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow
wrote:
Dear Community,
this message appears for me too, on GlusterFS 6.0.
Before that, we had GlusterFS
>>>> Security: A command prefix is added while adding the public key to the remote
>>>> node’s authorized_keys file, so that anyone gaining access using this key
>>>> can access only the gsyncd command.
>>>>
>>>> ```
>>>> command=gsyncd ssh-key….
>>>> ```
>>>>
>>>>
>>>>
>>>> Thanks for your help.
>>>>
>>>> --
>>>> David Cunningham, Voisonics Limited
>>>> http://voisonics.com/
>>>> USA: +1 213 221 1092
>>>> New Zealand: +64 (0)28 2558 3782
>>>>
>>>>
>>>>
>>>>
>>>> Community Meeting Calendar:
>>>>
>>>> Schedule -
>>>> Every Tuesday at 14:30 IST / 09:00 UTC
>>>> Bridge: https://bluejeans.com/441850968
>>>>
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>>
>>>>
>>>> —
>>>> regards
>>>> Aravinda Vishwanathapura
>>>> https://kadalu.io
>>>>
>>>>
>>>
>>> --
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>>
>>
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>>
>>
>>
Hey David,
Why don't you set the B cluster's hostnames in /etc/hosts of all A cluster
nodes ?
Maybe you won't need to rebuild the whole B cluster.
I guess the A cluster nodes nees to be able to reach all nodes from B cluster,
so you might need to change the firewall settings.
Best Regards,
Strahil Nikolov
emains in v6.0. Actually, we do not have a non-prod gluster system, so
>it will take some time
>
>to do this.
>
>Regards,
>
>Felix
>
>
>On 02/03/2020 23:25, Strahil Nikolov wrote:
>> Hi Felix,
>>
>> can you test /on non-prod system/ the latest minor
mber of Bricks: 1
>> >>>> Transport-type: tcp
>> >>>> Bricks:
>> >>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0
>> >>>> Options Reconfigured:
>> >>>> transport.address-family: inet
>> >
>> Pid : 4650
>> File System : xfs
>> Device : /dev/sda
>> Mount Options : rw
>> Inode Size : 512
>> Disk Space Free : 325.3GB
>> Total Disk Space : 91.0TB
>> Inode Count : 6920019
M, Pat Haley wrote:
>>
>> Hi,
>>
>> I get the following
>>
>> [root@mseas-data2 bricks]# gluster volume get data-volume all | grep cluster.min-free
>> cluster.min-free-disk 10%
>> cluster.min-free-inodes 5%
>>
>>
>> On 3
c. As before,
>the performance is lower than the individual brick performance. Is this
>a normal behavior, or what can be done to improve the single-client
>performance as pointed out in this case?
>
>
>Regards,
>
>Felix
>
>
>
>
>On 20/02/2020 22:26,
-mediaslave-node Active
>Hybrid Crawl N/A
>
>Any idea? please. Thank you.
Hi Etem,
Have you checked the logs on both source and destination? Maybe they can hint
you at what the issue is.
Best Regards,
Strahil Nikolov
_all_smalldom.m pe_PB.in
>PE_Data_Comparison_glider_sp011_smalldom.m pe_PB.log
>PE_Data_Comparison_glider_sp064_smalldom.m pe_PB_short.in
>PeManJob.log PlotJob
>
>mseas(DSMccfzR75deg_001b)% ls PeManJob
>PeManJob
>
>mseas(DSMccfzR75deg_001b)% ls
2020-03-11 20:08:55.286410] I [master(worker
>/srv/media-storage):1441:process] _GMaster: Batch Completed
>changelog_end=1583917610 entry_stime=None changelog_start=1583917610
>stime=None duration=153.5185 num_changelogs=1 mode=xsync
>[2020-03-11 20:08:55.315442] I [master(worker
>/srv/media
[2020-03-08 09:49:51.705982] I [MSGID: 114046]
>> [client-handshake.c:1105:client_setvolume_cbk]
>0-media-storage-client-0:
>> Connected to media-storage-client-0, attached to remote volume
>> '/srv/media-storage'.
>> [2020-03-08 09:49:51.707627] I [fu
=-=-=-=-=-=-=-
>Pat Haley Email: pha...@mit.edu
>Center for Ocean Engineering Phone: (617) 253-6824
>Dept. of Mechanical Engineering    Fax:   (617) 253-8125
>MIT, Room 5-213                    http://web.mit.edu/phaley/www/
>77 Massachusetts Avenue
>> Could you try disabling syncing xattrs and check?
>>
>> gluster vol geo-rep :: config sync-xattrs false
>>
>> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov
>
>> wrote:
>>
>>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu"
On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici
wrote:
>Dear All,
>
>some users that regularly use our gluster file system are experiencing a
>strange error when attempting to remove an empty directory.
>All bricks are up and running, no particular error has been detected,
>but they are not
tal 8
drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
Any other ideas related to this issue?
Many thanks,
Mauro
> On 25 Mar 2020, at 18:32, Strahil Nikolov wrote:
>
> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici
> wrote:
>
Take a look at Stefan Solbrig's e-mail
Best Regards,
Strahil Nikolov
On Wednesday, March 25, 2020 at 22:55:23 GMT+2, Mauro Tridici
wrote:
Hi Strahil,
unfortunately, no process is holding the file or directory.
Do you know if some other community user could help me?
Thank you,
>Erik Jacobson
>Software Engineer
>
>erik.jacob...@hpe.com
>+1 612 851 0550 Office
>
>Eagan, MN
>
nd
thus a 'replica 3' volume or a 'replica 3 arbiter 1' volume should be used and
a different set of options are needed (compared to other workloads).
Best Regards,
Strahil Nikolov
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09
Everything was moved
n SSD (for example
mounted on /gluster) from which you can create 6 directories. The arbiter
stores only metadata, and the SSD's random-access performance will be the
optimal approach.
Something like:
arbiter:/gluster/data1
arbiter:/gluster/data2
arbiter:/gluster/data3
arbiter:/gluster/data4
arb
ready known?
>>
>>
>> Regards,
>> Hubert
>
>Best Regards,
>Hubert
>
>On Sat., Apr. 11, 2020 at 11:12, Strahil Nikolov wrote:
>>
>> On April 11, 2020 8:40:47 AM GMT+03:00, Hu Bert
> wrote:
>> >Hi,
>> >
>> >so no one has seen the problem of disabled systemd units be
d for your user, or to use ACLs
(maybe with a find -exec).
Still, you've got the option of '0777', but then security will be just a word.
I think the first one is easier to implement.
Best Regards,
Strahil Nikolov
moved the data to
fresh volumes and everything is working.
Best Regards,
Strahil Nikolov
On Monday, April 20, 2020 at 17:02:46 GMT+3, Rinku Kothiya
wrote:
Hi,
The Gluster community is pleased to announce the release of Gluster7.5
(packages available at [1]).
Release not
d the opposite on
the brick2 - then only metadata at the Arbiter level can show us which data is
good and which has to be fixed.
>
>On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov
>wrote:
>
>> On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham <
>> dcunning...@voisonic
't have a hunch on which patch would have caused an
>increase
>in logs!
>
>-Amar
>
>
>>
>> On Sat, May 2, 2020, 12:47 AM Strahil Nikolov
>> wrote:
>>
>>> On May 1, 2020 8:03:50 PM GMT+03:00, Artem Russakovskii <
>>> arch
Inode size 1024 is the recommended one for Gluster used with OpenStack (Swift),
so it shouldn't have any issues.
Best Regards,
Strahil Nikolov
On May 25, 2020 at 5:49:00 GMT+03:00, Olivier
wrote:
>Strahil Nikolov writes:
>
>> On May 23, 2020 7:29:23 AM GMT+03:00, Olivier
> wrote:
>>>Hi,
>>>
>>>I have been struggling with NFS Ganesha: one gluster node with
>ganesha
>>>serving
I forgot to mention that you need to verify/set the VMware machines for a
high-performance/low-latency workload.
On May 25, 2020 at 17:13:52 GMT+03:00, Strahil Nikolov
wrote:
>
>
>On May 25, 2020 at 5:49:00 GMT+03:00, Olivier
> wrote:
>>Strahil Nikolov writes:
>>
>
Also,
can you provide a ping between the nodes, so we get an idea of the latency
between them?
I'm also interested in how much time a 'du' takes on the bricks.
Best Regards,
Strahil Nikolov
On May 27, 2020 at 10:27:34 GMT+03:00, Karthik Subrahmanya
wrote:
>Hi,
Hi Rafi,
I have a test oVirt 4.3.9 cluster with Gluster v7.5 on CentOS7.
Can you provide the rpms and I will try to test.
Also, please share the switch that disables this behaviour (in case something
goes wrong).
Best Regards,
Strahil Nikolov
On May 27, 2020 at 14:54:34 GMT+03:00, RAFI KC
Hey Rafi,
what do you mean by volume configuration and tree structure?
Best Regards,
Strahil Nikolov
On May 27, 2020 at 16:18:36 GMT+03:00, RAFI KC wrote:
>Sure, I have back-ported the patch to release-7. Now I will see how I
>can build the rpms.
>
>On the other hand, if possibl
move the file away from the slave, does it fix the
issue?
Best Regards,
Strahil Nikolov
On May 30, 2020 at 1:10:56 GMT+03:00, David Cunningham
wrote:
>Hello,
>
>We're having an issue with a geo-replication process with unusually
>high
>CPU use and giving "En
article (it's for small files tuning, but describes
the options above):
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/small_file_performance_enhancements
Best Regards,
Strahil Nikolov
On May 29, 2020 at 22:25:29 GMT+03:00, Qing Wang wrote:
Yesterday another oVirt user hit the issue (or a similar one) after a gluster
v6.6 to v6.8 upgrade.
I guess Adrian can provide the logs, so we can check whether it is the same issue or not.
Best Regards,
Strahil Nikolov
On May 28, 2020 at 14:15:55 GMT+03:00, Alan Orth wrote:
>We upgraded from 5.10 or 5
Hello Naranderan,
what OS are you using? Do you have SELinux in enforcing mode (verify via
'sestatus')?
Best Regards,
Strahil Nikolov
On Saturday, May 30, 2020 at 13:33:05 GMT+3, Naranderan Ramakrishnan
wrote:
Dear Developers/Users,
A geo-rep session of a sub-vo
ncy level 3 (8 +3)
12 bricks with redundancy level 4 (8 + 4)
In your case if 2 bricks fail, the volume will be available without any
disruption. Sadly there is no way to convert a replicated volume to a dispersed
one, and based on your workload a dispersed volume might not be suitable.
Best Regards,
Stra
antly looping over some data, causing the CPU hog.
Sadly, I can't find an instruction for increasing the log level of the geo-rep
log.
Best Regards,
Strahil Nikolov
On June 2, 2020 at 6:14:46 GMT+03:00, David Cunningham
wrote:
>Hi Strahil and Sunny,
>
>Thank you for the replies.
-slave logs.
Does the issue still occur?
Best Regards,
Strahil Nikolov
On June 6, 2020 at 1:21:55 GMT+03:00, David Cunningham
wrote:
>Hi Sunny and Strahil,
>
>Thanks again for your responses. We don't have a lot of renaming
>activity -
>maybe some, but not a lot. We do have
on't forget the
'ionice' to reduce the pressure).
Once you have the list of files, stat them via the FUSE client and check if
they got healed.
I fully agree that you need to first heal the volumes before proceeding
further, or you might get into a nasty situation.
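A minimal sketch of that check (the mount point, volume name and file list are assumptions):

```shell
# A lookup through the FUSE mount triggers/verifies the heal of each entry
while read -r f; do
    stat "/mnt/myvol/$f" > /dev/null
done < files_to_check.txt

# The list of entries pending heal should then shrink
gluster volume heal myvol info
```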
Best Regards,
S
I'm using OS repos.
Have you checked for a system/package-manager proxy?
Is there any difference between the nodes (for example, some package)?
Best Regards,
Strahil Nikolov
On June 8, 2020 at 15:26:08 GMT+03:00, Hu Bert wrote:
>Hi @ll,
>
>on 2 of 3 identical servers (hosts, resol
Hm... That's something I didn't expect.
By the way, have you checked if all clients are connected to all bricks (if
using FUSE)?
Maybe you have some clients that cannot reach a brick.
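One quick way to verify that (volume name "myvol" is an assumption) is the clients view of volume status:

```shell
# Lists, per brick, the clients connected to it; a FUSE client missing
# from one brick's list cannot reach that brick
gluster volume status myvol clients
```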
Best Regards,
Strahil Nikolov
On June 8, 2020 at 12:48:22 GMT+03:00, Hu Bert wrote:
>Hi Strahi
one) .
As this script is python, I guess you can put some debug print statements
in it.
Best Regards,
Strahil Nikolov
On June 9, 2020 at 5:07:11 GMT+03:00, David Cunningham
wrote:
>Hi Sankarshan,
>
>Thanks for that. So what should we look for to figure out what this
>process
' can be used?)
- Find the file (on all replica sets) and check the gfid
- Check for heals pending for that gfid
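The two checks above can be sketched as follows (brick path, file path and volume name are assumptions):

```shell
# On each replica set, read the file's gfid xattr directly from the brick
getfattr -n trusted.gfid -e hex /data/brick1/path/to/file

# Then look for pending heals and see whether that gfid is listed
gluster volume heal myvol info
```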
Best Regards,
Strahil Nikolov
On June 10, 2020 at 6:37:35 GMT+03:00, David Cunningham
wrote:
>Hi Strahil,
>
>Thank you for that. Do you know if these "Stale file
problem ?
Best Regards,
Strahil Nikolov
On June 11, 2020 at 3:15:36 GMT+03:00, David Cunningham
wrote:
>Hi Strahil,
>
>Thanks for that. I did search for a file with the gfid in the name, on
>both
>the master nodes and geo-replication slave, but none of them had such a
>file
topics is discussing open bugs and issues
reported on the mailing list. It would be nice to join the meeting and discuss
that in audio, as there could be other devs willing to join the 'fight'.
@Sankarshan,
any idea how to enable debug on the python script ?
Best Regards,
Strahil Nikolov
Hi Ahemad,
You can simplify it by creating a systemd service that will call the script.
It was already mentioned in a previous thread (with an example), so you can
just use it.
Best Regards,
Strahil Nikolov
On June 16, 2020 at 16:02:07 GMT+03:00, Hu Bert wrote:
>Hi,
>
>if y
illall -HUP glusterfsd || /bin/true"
[Install]
WantedBy=multi-user.target
Best Regards,
Strahil Nikolov
On June 16, 2020 at 18:41:59 GMT+03:00, ahemad shaik
wrote:
> Hi,
>I see there is a script file in the below-mentioned path on all nodes, using
>which gluster volume
>c
s will also be killed both by the script and the
glusterfsd service.
Best Regards,
Strahil Nikolov
On June 16, 2020 at 19:48:32 GMT+03:00, ahemad shaik
wrote:
> Hi Strahil,
>I have the gluster setup on a CentOS 7 cluster. I see the glusterfsd service
>and it is in inactive
Hey Mahdi,
For me it looks like Red Hat is focusing more on Ceph than on Gluster.
I hope the project remains active, because it's very difficult to find a
software-defined storage as easy and as scalable as Gluster.
Best Regards,
Strahil Nikolov
On June 17, 2020 at 0:06:33 GMT+03:00,
d of comparison) - the issue rate
is not so big, but the price for Gluster is not millions :)
Best Regards,
Strahil Nikolov
On June 17, 2020 at 19:15:00 GMT+03:00, Erik Jacobson
wrote:
>> It is very hard to compare them because they are structurally very
>different. For exam
Best Regards,
Strahil Nikolov
On June 18, 2020 at 19:22:46 GMT+03:00, Computerisms Corporation
wrote:
>Hi Gluster Gurus,
>
>Due to some hasty decisions and inadequate planning/testing, I find
>myself with a single-brick Distributed gluster volume. I had initially
>
>intended to ex
eplace-brick' .
>Note that I am not implying that Ceph is faster; rather, that a small
>Gluster setup with few bricks can be slower than expected.
>
>I would love to hear other opinions and on-the-field experiences.
>Thanks.
>
>[1]
>https://lists.gluster.org/pipermail/glust
On June 21, 2020 at 10:53:10 GMT+03:00, Gionatan Danti
wrote:
>On 2020-06-21 01:26, Strahil Nikolov wrote:
>> The efforts are far less than reconstructing the disk of a VM from
>> CEPH. In gluster , just run a find on the brick searching for the
>> name of the VM d
D
>
>
>Hubert
>
>On Mon., June 8, 2020 at 15:36, Strahil Nikolov wrote:
>>
>> Hm... That's something I didn't expect.
>>
>>
>> By the way, have you checked if all clients are connected to all
>bricks (if using FUSE)?
>>
>> Ma
Hi Hubert,
keep in mind RH recommends disks of 2-3 TB in size, not 10. I guess that has
changed the situation.
For NVMe/SSD a RAID controller is pointless, so JBOD makes the most sense.
Best Regards,
Strahil Nikolov
On June 22, 2020 at 7:58:56 GMT+03:00, Hu Bert wrote:
>On Sun., June 21, 2020
much more sense for any kind of software defined storage (no matter
Gluster, CEPH or Lustre).
Of course, I could be wrong and I would be glad to read benchmark results on
this topic.
Best Regards,
Strahil Nikolov
On June 22, 2020 at 18:48:43 GMT+03:00, Erik Jacobson
wrote:
>> For
What is the OS and its version?
I have seen similar behaviour (with a different workload) on RHEL 7.6 (and below).
Have you checked what processes are in 'R' or 'D' state on st2a?
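A portable way to list such processes is to read the state field from /proc directly (a sketch; the field after the parenthesised command name in /proc/PID/stat is the state flag):

```shell
# Print PID and state for every process in R (running) or
# D (uninterruptible sleep) state
for f in /proc/[0-9]*/stat; do
  line=$(cat "$f" 2>/dev/null) || continue
  pid=${line%% *}        # first field: PID
  rest=${line##*) }      # skip "(comm)", which may itself contain spaces
  state=${rest%% *}      # first field after comm: the state flag
  if [ "$state" = R ] || [ "$state" = D ]; then
    echo "$pid $state"
  fi
done
```

A long-lived set of D-state processes usually points at storage or network I/O the kernel is stuck waiting on.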
Best Regards,
Strahil Nikolov
On June 23, 2020 at 19:31:12 GMT+03:00, Pavel Znamensky
Hi Rinku,
can you tell me how the packages for CentOS 7 are built, as I had issues
yesterday building both the latest and v7 branches?
Best Regards,
Strahil Nikolov
On June 24, 2020 at 14:00:47 GMT+03:00, Rinku Kothiya
wrote:
>Hi All,
>
>Release-8 RC0 packages are built. As this i
Hey Bob,
Are you going to build the rpms?
If so, could you share your results?
For me, building the rpms from the gluster source was easy on CentOS 8, but on
CentOS 7 I got errors.
Best Regards,
Strahil Nikolov
On June 25, 2020 at 4:22:29 GMT+03:00, Computerisms Corporation
wrote:
can test on another volume setting a bigger
shard size.
Best Regards,
Strahil Nikolov
On June 29, 2020 at 5:00:22 GMT+03:00, "wkm...@bneit.com"
wrote:
>For many years, we have maintained a number of standalone,
>hyperconverged Gluster/Libvirt clusters Replica 2 + Arbiter usi
t the virt group options and try again.
Does the issue occur on another VM?
Best Regards,
Strahil Nikolov
On June 30, 2020 at 1:59:36 GMT+03:00, Marco Fais wrote:
>Hi,
>
>I am having a problem recently with Gluster disperse volumes and live
>merge
>on qemu-kvm.
>
>I am usi
On June 30, 2020 at 3:02:32 GMT+03:00, WK wrote:
>
>On 6/28/2020 8:52 PM, Strahil Nikolov wrote:
>> Last time I did storhaug+NFS-Ganesha I used
>https://github.com/gluster/storhaug/wiki .
>
>Well, that certainly helps but since i have no experience with Samba, I
>
>
Also, not only Ganesha uses libgfapi - qemu can use it directly, but it has some
limitations.
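For illustration, qemu's tooling accepts gluster:// URLs directly (server, volume and image path below are assumptions):

```shell
# Inspect a disk image over libgfapi, without any FUSE mount involved
qemu-img info gluster://server1/myvol/images/vm1.qcow2
```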
Best Regards,
Strahil Nikolov
On June 30, 2020 at 11:29:49 GMT+03:00, "Felix Kölzow"
wrote:
>Dear Users,
>
>
>> On this list I keep on seeing comments that VM performance
=network-online.target
I have created systemd mount units due to VDO, but most probably the
local-fs.target will generate the mount units for you from the fstab.
Best Regards,
Strahil Nikolov
На 1 юли 2020 г. 20:57:22 GMT+03:00, "Felix Kölzow"
написа:
>Hey,
>
>
>what a
so use it for analysis of the logs.
Most probably the brick logs can provide useful information.
>
>> Check ovirt engine logs (on the HostedEngine VM or your standalone
>> engine) , vdsm logs on the host that was running the VM and next -
>check
>> the brick logs.
put - it won't be soon.
Best Regards,
Strahil Nikolov
On July 2, 2020 at 17:39:25 GMT+03:00, Shreyansh Shah
wrote:
>Hi All,
>
>*We are facing "Mismatching layouts for ,gfid = "
>errors.*
>
>We have a distributed glusterfs 5.10, no replication, 2 bricks (4TB
Hi Felix,
It seems I missed your reply with the change log that Shwetha requested.
Best Regards,
Strahil Nikolov
On July 3, 2020 at 11:16:30 GMT+03:00, "Felix Kölzow"
wrote:
>Dear Users,
>the geo-replication is still broken. This is not really a comfortable
>situation.
>
and then
create the xfs filesystem with the necessary options to align it properly.
I have never used xfsdump to recover a brick. Just ensure the gluster brick
process is not running on the node during the restore.
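As a sketch, the alignment happens at mkfs time (the su/sw values and device name are purely illustrative assumptions; derive them from your actual RAID geometry):

```shell
# Example: RAID stripe unit of 256 KiB across 10 data disks, with the
# 512-byte inode size commonly used for Gluster bricks
mkfs.xfs -f -i size=512 -d su=256k,sw=10 /dev/gluster_vg/brick1
```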
Best Regards,
Strahil Nikolov
On July 6, 2020 at 23:32:28 GMT+03:00, Shanon Swafford
wrote:
>Hi
ts.c:1548:default_lookup_cbk] 0-stack-trace: stack-address:
>0x7f0dc007c428, SSD_Storage-disperse-5 returned -1 error: Stale file
>handle
>[Stale file handle]
>[2020-07-07 21:23:06.839835] D [MSGID: 0]
>[dht-common.c:998:dht_discover_cbk] 0-SSD_Storage-dht: lookup of (null)
>on
At least for EL7, there are two sosreport modules:
gluster and gluster_block
Best Regards,
Strahil Nikolov
On July 8, 2020 at 9:02:10 GMT+03:00, Artem Russakovskii
wrote:
>I think it'd be extremely helpful if gluster had a feature to grab all
>the
>necessary logs/debug
same mechanism
mdadm is using -> it should be possible.
Best Regards,
Strahil Nikolov
On July 29, 2020 at 0:10:44 GMT+03:00, Darrell Budic
wrote:
>ZFS isn’t that resource intensive, although it does like RAM.
>
>But why not just add additional bricks? Gluster is kind of built to use
>
LVM allows creating/converting striped/mirrored LVs without any downtime, and
it's using the md module.
Best Regards,
Strahil Nikolov
On July 28, 2020 at 22:43:39 GMT+03:00, Gilberto Nunes
wrote:
>Hi there
>
>'till now, I am using glusterfs over XFS and so far so g
I guess there is no automatic hot-spare replacement in LVM, but mdadm has
that functionality.
Best Regards,
Strahil Nikolov
On July 30, 2020 at 15:39:18 GMT+03:00, Gilberto Nunes
wrote:
>Doing some research, I found the chvg command, which is responsible for
>creating
>hotspare dis