Hi,
I am trying to make sense of the hash values that get assigned/used by
DHT.
/brick1/vol and /brick2/vol are the directories that are being used as
bricks in a distributed replicated volume.
[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick1/vol
Here's an article explaining how dht works. The hash maps are per-directory.
https://joejulian.name/blog/dht-misses-are-expensive/
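As a rough sketch of what to expect (the hex values below are invented for illustration, not taken from your bricks): the last two 32-bit big-endian words of trusted.glusterfs.dht are the start and end of the hash range that brick's distribute subvolume serves for that directory; the leading words are layout header fields. Assuming /brick1/vol and /brick2/vol belong to different replica sets (bricks within one replica set carry identical ranges), you would see something like:

[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick1/vol
# file: brick1/vol
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick2/vol
# file: brick2/vol
trusted.glusterfs.dht=0x00000001000000007fffffffffffffff

Here the first brick covers 0x00000000-0x7ffffffe and the second covers 0x7fffffff-0xffffffff; a file's name is hashed into that 32-bit space, and the file is created on whichever subvolume's range contains the hash.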
On 11/08/2016 11:04 AM, Ankireddypalle Reddy wrote:
Hi,
I am trying to make sense of the hash values that get
assigned/used by DHT.
/brick1/vol
We haven't decided how the JBODs would be configured. They would likely be SAS-attached without a RAID controller, for improved performance. I run large ZFS arrays this way, but only in single-server NFS setups right now.
Mounting each hard drive as its own brick would probably give the most
Thanks for pointing to the article. I have been following the article all the way. What intrigues me are the DHT values associated with subdirectories.
[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick2/vol
getfattr: Removing leading '/' from absolute path names
# file: brick2/vol
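On the subdirectory point, a hedged illustration (dir1 is a made-up name): each directory gets its own layout when it is created, so the ranges on a subdirectory are assigned independently of its parent's and will generally differ:

[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick1/vol/dir1
[root@glusterhackervm3 glus]# getfattr -n trusted.glusterfs.dht -e hex /brick2/vol/dir1

Comparing these against the parent directory's values shows that DHT splits each directory's hash space across the subvolumes on its own, which is why the per-subdirectory values look unrelated to the root's.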
Hello All,
I'd like to automate a gluster replica cluster in a way that
allows me to later add single servers on the fly. I can't seem to understand
from the docs how I might achieve this. It seems logical that a replica
cluster needs to have at least 2 servers initially, but
Replicas are defined in the order bricks are listed in the volume create
command. So gluster volume create myvol replica 2 server1:/data/brick1
server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will
replicate between server1 and server2 and replicate between server3 and
server4.
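A hedged sketch of how that plays out, including growing the volume later (server names are illustrative; the constraint itself is standard gluster CLI behavior): with replica 2, bricks are consumed in pairs, so the volume can only be extended by a multiple of the replica count, not by one server at a time:

gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1
(replica sets: server1+server2 and server3+server4)

gluster volume add-brick myvol server5:/data/brick1 server6:/data/brick1
gluster volume rebalance myvol start
(adds a third replica pair, then spreads existing data onto it)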
Apologies for the delay in response; it took me a while to switch here.
As someone rightly pointed out in the discussion above, the start and stop
of a VM via libvirt (virsh) will trigger at least 2
glfs_new/glfs_init/glfs_fini call sequences.
In fact there are 3 calls involved: 2 (mostly for stat, read headers
and
Hi Thomas,
that's a huge amount of storage.
What I can say from my use case: don't use Gluster directly if the files
are small. I don't know if the file count matters, but if the files are
small (a few KiB), Gluster takes ages to remove them, for example. Doing the
same in a VM with e.g. an ext4 disk on the very same
You should be able to get the fix in 3.7.18. Rajesh has posted a fix in the
release-3.7 branch: http://review.gluster.org/#/c/15798/
On Tue, Nov 8, 2016 at 7:30 PM, ABHISHEK PALIWAL wrote:
> Hi,
>
> I am getting the below message; the log file is flooded with these entries.
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in
Hi all,
The minutes of today's meeting:
Meeting summary:
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (Saravanakmr, 12:01:19)
- Roll call (Saravanakmr, 12:01:28)
- Next week's meeting host (Saravanakmr, 12:05:25)
- ACTION: skoduri will host bug triage meeting on
On 8/11/2016 9:58 PM, Thomas Wakefield wrote:
Still looking for use cases and opinions for Gluster in an education / HPC
environment. Thanks.
Sorry, what's an HPC environment?
--
Lindsay Mathieson
On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
High Performance Computing; we have a small cluster on campus of about 50 Linux
compute servers.
D'oh! I should have thought of that.
Are you looking at replication (2 or 3)/disperse or pure disperse?
--
Lindsay Mathieson
On 8/11/2016 11:43 PM, Lindsay Mathieson wrote:
Are you looking at replication (2 or 3)/disperse or pure disperse?
I need to go to bed ...
I meant Distributed/Replicated :) or Distributed/Disperse
--
Lindsay Mathieson
Hi,
I am getting the below message; the log file is flooded with these entries.
[2016-09-22 20:25:33.102737] I [dict.c:473:dict_get]
(-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x166)
[0x2ace40d9c816]
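A hedged workaround until a fixed build is in place (volname is a placeholder): these flooded entries are logged at INFO level (the "I"), so raising the client log level suppresses them without changing anything else:

gluster volume set volname diagnostics.client-log-level WARNING

diagnostics.brick-log-level does the same for the brick-side logs.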
Still looking for use cases and opinions for Gluster in an education / HPC
environment. Thanks.
> On Nov 4, 2016, at 2:05 PM, Thomas Wakefield wrote:
>
> Everyone, thanks in advance.
>
> We are looking to add a large filesystem to our compute facility at GMU. We
> are
High Performance Computing; we have a small cluster on campus of about 50 Linux
compute servers.
> On Nov 8, 2016, at 8:37 AM, Lindsay Mathieson
> wrote:
>
> On 8/11/2016 9:58 PM, Thomas Wakefield wrote:
>> Still looking for use cases and opinions for Gluster in
I think we are leaning towards erasure coding with 3 or 4 copies. But open to
suggestions.
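A hedged note on terminology (the numbers are illustrative, not a recommendation): Gluster's erasure coding is the disperse volume type, and it is sized in data-plus-redundancy fragments rather than copies. For example, 4 data + 2 redundancy fragments across 6 servers survives the loss of any 2 bricks, with a raw-to-usable overhead of 1.5x versus 3x for replica 3:

gluster volume create hpcvol disperse 6 redundancy 2 server{1..6}:/data/brick1

(hpcvol and the server names are placeholders.)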
> On Nov 8, 2016, at 8:43 AM, Lindsay Mathieson
> wrote:
>
> On 8/11/2016 11:38 PM, Thomas Wakefield wrote:
>> High Performance Computing, we have a small cluster on