…of the data brick and convert that into a replicated image, and then enable replication from the time of the snapshot?
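For what it's worth, a hedged sketch of the stock way to turn a single-brick volume into a replicated one without going through a snapshot: raise the replica count with add-brick and let self-heal populate the new copy. The volume name 'myvol' and the brick paths are placeholders, not taken from the thread:

    # Convert a one-brick volume to replica 2; self-heal then copies
    # the existing data onto the newly added brick
    gluster volume add-brick myvol replica 2 server2:/bricks/brick1

    # Force a full heal so the new replica is populated immediately
    gluster volume heal myvol full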
I would just like to confirm something.
When I write a block of data to a replicated Gluster volume, the client sends the data to all of the replicating servers, but when I read a block of data from that cluster of servers, the client only reads from a single replicating server.
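That description matches AFR's default behavior: writes fan out to every replica, while each read is served from one replica chosen by the volume's read policy. A hedged sketch of the relevant knobs, with 'myvol' as an assumed volume name:

    # Show the current read-source policy for the replica set
    gluster volume get myvol cluster.read-hash-mode

    # Prefer a local brick for reads when the client is also a server
    gluster volume set myvol cluster.choose-local on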
…with spaces in their names
On 10/15/20 9:20 AM, Yaniv Kaul wrote:
> On Thu, Oct 15, 2020 at 4:04 PM Alvin Starr <al...@netvel.net> wrote:
>> We are running glusterfs-server-3.8.9-1.el7.x86_64
> This was released >3.5 years ago. Any plans to upgrade?
> Y.
Yes.
The sad thing is that these systems we…
…and fix the issue.
Note: Please hide/mask hostnames, IPs, or any other confidential information in the above output.
---
Ashish
From: "Alvin Starr"
To: "gluster-user"
Sent: Wednesday, October 14, 2020
…heal.
Is there any way to manually correct the problem?
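A hedged sketch of the usual manual-heal workflow; 'myvol' and the file path are placeholders rather than anything from this thread:

    # See which entries still need healing, and which are split-brain
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain

    # Inspect the AFR changelog xattrs on each brick's copy of the file
    getfattr -d -m . -e hex /bricks/brick1/path/to/file

    # One CLI policy for resolving a split-brain file: keep the newest copy
    gluster volume heal myvol split-brain latest-mtime /path/to/file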
…and improve the code.
…confident in the consistency of the filesystem.
I will try running a bit-rot scan against the system to see if there are any errors.
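For reference, a hedged sketch of the bit-rot tooling; 'myvol' is an assumed volume name:

    # Enable checksumming and the scrubber for the volume
    gluster volume bitrot myvol enable

    # Trigger a scrub pass (supported on newer releases) and watch
    # its progress and error count
    gluster volume bitrot myvol scrub ondemand
    gluster volume bitrot myvol scrub status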
On 3/26/19 11:45 AM, Sankarshan Mukhopadhyay wrote:
On Tue, Mar 26, 2019 at 6:10 PM Alvin Starr wrote:
After almost a week of doing nothing, the brick failed and we…
…take some time.
But 'days/weeks/months' is not an expected/observed behavior. Are there any logs in the log file? If not, can you run 'strace -f' on the pid that is consuming major CPU? (A one-minute strace sample is good enough.)
-Amar
On Wed, Mar 20, 2019 at 2:05 AM Alvin Starr <al...@netvel.net> wrote:
…but it's hard to say, and it looks like it may run for hours/days/months.
Will Gluster take a long time to resync lots of little files?
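A hedged sketch of the kind of sample Amar is asking for; the volume name and output path are placeholders:

    # Find the PID of the busy brick process
    gluster volume status myvol

    # Follow all of its threads for about a minute and save the syscalls
    timeout 60 strace -f -tt -p <PID> -o /tmp/brick-strace.out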
…during those years, did your NICs break?
Over the years (30), I have had problems with bad ports on switches, with some manufacturers being worse than others.
…ting.
[2018-07-30 11:10:12.192742] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-07-30 11:10:12.193228] I [syncdutils(agent):220:finalize] : exiting.
[2018-07-30 11:10:13.142468] I [monitor(monitor):344:monitor] Monitor: worker(/bricks/cc_us/d…
>> > …are complying with the iDRAC enterprise theory. Anyone else had this issue?
…outside of the VPN.
On 02/07/2018 11:36 PM, Kotresh Hiremath Ravishankar wrote:
Ccing glusterd team for information.
On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <al...@netvel.net> wrote:
That makes for an interesting problem.
I cannot ope…
…management communication happens via RPC.
Thanks,
Kotresh HR
On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <al...@netvel.net> wrote:
I am running gluster 3.8.9 and trying to set up a geo-replicated volume over ssh.
It looks like the volume cr…
I am running gluster 3.8.9 and trying to set up a geo-replicated volume over ssh.
It looks like the volume create command is trying to directly access the server over port 24007.
The docs imply that all communications are over ssh.
What am I missing?
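A hedged sketch of the usual geo-replication setup, consistent with Kotresh's explanation: the create step is glusterd-to-glusterd RPC on TCP 24007, and only the ongoing data sync rides over ssh. The volume and host names here are placeholders:

    # One-time: generate and distribute the geo-rep ssh keys
    gluster system:: execute gsec_create

    # Create the session; this contacts the slave's glusterd over
    # TCP 24007, so that port must be reachable from the master
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem

    # Start syncing; from here on the data travels over ssh
    gluster volume geo-replication mastervol slavehost::slavevol start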
I have a volume with a few million files that I would like to move from striped to something else.
Is there any way to convert the volume short of using rsync to a new volume?
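If it does come down to rsync, a hedged sketch of a metadata-preserving copy between two FUSE mounts; the mount points and volume names are placeholders:

    # Mount both volumes on the same client
    mount -t glusterfs server1:/oldvol /mnt/oldvol
    mount -t glusterfs server1:/newvol /mnt/newvol

    # -a: perms/times/owners, -H: hardlinks, -A: ACLs, -X: xattrs
    rsync -aHAX --numeric-ids --progress /mnt/oldvol/ /mnt/newvol/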
> …gluster has caused some data loss or data corruption.
…sh_resume
wind_to=FIRST_CHILD(this)->fops->flush
unwind_to=fuse_err_cbk

[global.callpool.stack.61.frame.5]
frame=0x7f6c6f796684
ref_count=1
translator=fuse
complete=0

Most probably the issue is with write-behind's flush, so please turn off write-behind and test. If you don't have any hung httpd processes, please let us know.
-Amar
On Wed, Mar 29, 2017 a…
…glusterfsd and the hung pids, along with the offending pids' stacks from /proc/{pid}/stack.
This has been a low-level annoyance for a while, but it has become a much bigger issue because the number of hung processes went from a few a week to a few hundred a day.
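A hedged sketch of the knobs behind Amar's suggestion; 'myvol' is an assumed volume name:

    # Disable write-behind caching while testing whether the hangs stop
    gluster volume set myvol performance.write-behind off

    # Take a fresh statedump for comparison; the output files land
    # under /var/run/gluster/ on the server nodes
    gluster volume statedump myvol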