Re: [Gluster-users] Indexing/Importing existing files on disk

2022-04-02 Thread Patrick Nixon
Hi!
Thank you for the response.
The volume I want to import the files into is a distributed volume with 9
bricks (more info at the bottom).
It seems I failed to include the referenced page (
https://rosenha.in/blog/2016/10/migrating-to-gluster-with-existing-data/)
in my last email; apologies for that.

Volume Name: sfs
Type: Distribute
Volume ID: 9f65aaa5-86cf-40ae-a447-506f7751a8ea
Status: Started
Snapshot Count: 0
Number of Bricks: 9
Transport-type: tcp
Bricks:
Brick1: sfs02:/data/brick1/gv0
Brick2: sfs01:/data/brick1/gv0
Brick3: sfs03:/data/brick1/gv1
Brick4: sfs04:/data/brick1/gv1
Brick5: sfs05:/data/brick1/gv1
Brick6: sfs06:/data/brick1/gv1
Brick7: sfs07:/data/brick1/gv1
Brick8: sfs08:/data/brick1/gv1
Brick9: sfs09:/data/brick1/gv1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

On Sat, Apr 2, 2022 at 7:35 AM Strahil Nikolov 
wrote:

> In order to help you, the community needs the volume info.
>
> Best Regards,
> Strahil Nikolov
>
> On Fri, Apr 1, 2022 at 3:38, Patrick Nixon
>  wrote:
> Hey all,
>  I'm running Gluster 7.2 (there doesn't seem to be a newer version for my
> architecture, armhf).
>  I'm trying to add a disk as a brick to a volume and be able to
> import/load the existing files in without having to bulk copy them in a
> roundabout fashion.
>
>  I found this page that indicates just trying to access the file(s) should
> make them show up once the brick is added with the force tag.
>
>  Is this still true? If the article is no longer relevant, any suggestions
> on how to accomplish this?
>
> thank you!
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






[Gluster-users] Indexing/Importing existing files on disk

2022-03-31 Thread Patrick Nixon
Hey all,
 I'm running Gluster 7.2 (there doesn't seem to be a newer version for my
architecture, armhf).
 I'm trying to add a disk as a brick to a volume and be able to import/load
the existing files in without having to bulk copy them in a roundabout
fashion.

 I found this page that indicates just trying to access the file(s) should
make them show up once the brick is added with the force tag.

 Is this still true? If the article is no longer relevant, any suggestions on
how to accomplish this?

thank you!
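For what it's worth, the technique the article describes boils down to: after the brick is force-added, a named lookup on each path is what prompts Gluster's DHT layer to pick up files that already exist on the brick. A minimal sketch of one way to trigger those lookups from a client, assuming a FUSE mount (the mount path is a placeholder, not something from this thread):

```python
import os

def trigger_lookups(mount_root):
    """Walk a directory tree and stat every entry.

    Run against a GlusterFS FUSE mount, each lstat is a named lookup,
    which is what lets DHT notice files pre-existing on a force-added
    brick. `mount_root` is a hypothetical mount point; adjust as needed.
    """
    touched = []
    for dirpath, dirnames, filenames in os.walk(mount_root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            os.lstat(path)  # the lookup itself is the side effect we want
            touched.append(path)
    return touched
```

On a plain local directory this just stats everything; pointed at a Gluster mount, the same stats are the lookups the article relies on.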






[Gluster-users] Brick layout question

2021-11-27 Thread Patrick Nixon
Hello Glusters!

I've been running a multi-node, single-brick-per-node distributed array, with
bricks between 6 and 14TB each, and getting okay performance.

I was reading some documentation, saw distributed-dispersed as an option, and
was considering setting up a test array to see if that improves performance.
I don't need replicas/redundancy at all for this array, just bulk storage.

My question, primarily, is about how to lay out the bricks across six nodes
with the ability to add additional nodes/drives as necessary.
Option 1:
Single Brick Per Node

Option 2:
Multiple Bricks Per Node
- Bricks of a consistent size (1TB each, with leftover disk as its own brick)
- Bricks sized as a fraction of the total disk (1/4 or 1/2)

Thank you for any suggestions/tips (links to additional documentation that
would help educate me are welcome as well).
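One thing worth noting when no redundancy is needed: a disperse set of data+redundancy bricks erasure-codes everything, so only the data fraction of raw capacity is usable, whereas plain distribute uses all of it. A small illustrative helper (the 4+2 layout below is just an example, not a recommendation from this thread):

```python
def usable_fraction(data_bricks, redundancy_bricks):
    """Fraction of raw capacity usable in a disperse subvolume.

    A disperse set of (data + redundancy) bricks stores erasure-coded
    fragments, so only data/(data+redundancy) of the raw space holds
    payload. Plain distribute is the redundancy=0 case: all space usable.
    """
    return data_bricks / (data_bricks + redundancy_bricks)

# Six nodes, one brick each, as a single 4+2 disperse set:
# 4/6 of raw capacity is usable, and any 2 bricks may fail.
```

So for pure bulk storage with no redundancy requirement, dispersed trades away capacity for fault tolerance you may not want.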






[Gluster-users] Ansible Resources

2021-04-03 Thread Patrick Nixon
I'm looking into setting up ansible to build and maintain a couple of my
clusters.

I see several resources online and was wondering which Ansible role /
collection is considered the most featureful and stable.

Ansible's built-in gluster_volume module works, but it doesn't install the
client or anything else.
https://github.com/geerlingguy/ansible-for-devops/tree/master/gluster
seems to do everything, but it hasn't been updated in 8 months.
https://github.com/gluster/gluster-ansible doesn't seem to install the
necessary packages.

My clusters are pretty simple: distribute setups with some basic options,
mounted from a couple of servers.

Thanks for any guidance!
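For a setup this simple, one option is to skip the roles and combine a plain package task with the gluster_volume module (which lives in the gluster.gluster collection on newer Ansible). A hypothetical minimal sketch; the host group name, package names, and brick path are assumptions for illustration, not from this thread:

```yaml
- hosts: gluster_servers
  become: true
  tasks:
    - name: Install server and client packages (Debian/Ubuntu names assumed)
      ansible.builtin.apt:
        name: [glusterfs-server, glusterfs-client]
        state: present

    - name: Create and start a distribute volume across all group members
      gluster.gluster.gluster_volume:
        state: present
        name: gfs
        bricks: /data/brick1/gv0
        cluster: "{{ groups['gluster_servers'] }}"
      run_once: true
```

This covers the gap noted above: the module alone manages volumes but not packages, so the package task fills that in.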






Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Patrick Nixon
Thanks for the follow-up. After reviewing the logs Vijay mentioned, I found
nothing useful.

I removed and wiped the brick tonight. I'm in the process of rebalancing onto
the new brick and will resync the files onto the full gluster volume when
that completes.

On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran 
wrote:

>
>
> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon  wrote:
>
>> The files are being written to via the glusterfs mount (and read on the
>> same client and a different client). I try not to do anything on the nodes
>> directly because I understand that can cause weirdness.   As far as I can
>> tell, there haven't been any network disconnections, but I'll review the
>> client log to see if there's any indication. I don't recall any issues last
>> time I was in there.
>>
>>
> If I understand correctly, the files are written to the volume from the
> client , but when the same client tries to list them again, those entries
> are not listed. Is that right?
>
> Do the files exist on the bricks?
> Would you be willing to provide a tcpdump of the client when doing this?
> If yes, please do the following:
>
> On the client system:
>
>- tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
>- Copy the files to the volume using the client
>- List the contents of the directory in which the files should exist
>- Stop the tcpdump capture and send it to us.
>
>
> Also provide the name of the directory and the missing files.
>
> Regards,
> NIthya
>
>
>
>
>
>> Thanks for the response!
>>
>> On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur  wrote:
>>
>>>
>>>
>>> On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon  wrote:
>>>
>>>> Hello!
>>>>
>>>> I have an 8-node distribute volume setup. I have one node that accepts
>>>> files and stores them on disk, but when doing an ls, none of the files on
>>>> that specific node are being returned.
>>>>
>>>>  Can someone give some guidance on what should be the best place to
>>>> start troubleshooting this?
>>>>
>>>
>>>
>>> Are the files being written from a glusterfs mount? If so, it might be
>>> worth checking if the network connectivity is fine between the client (that
>>> does ls) and the server/brick that contains these files. You could look up
>>> the client log file to check if there are any messages related to
>>> rpc disconnections.
>>>
>>> Regards,
>>> Vijay
>>>
>>>
>>>> # gluster volume info
>>>>
>>>> Volume Name: gfs
>>>> Type: Distribute
>>>> Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 8
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gfs01:/data/brick1/gv0
>>>> Brick2: gfs02:/data/brick1/gv0
>>>> Brick3: gfs03:/data/brick1/gv0
>>>> Brick4: gfs05:/data/brick1/gv0
>>>> Brick5: gfs06:/data/brick1/gv0
>>>> Brick6: gfs07:/data/brick1/gv0
>>>> Brick7: gfs08:/data/brick1/gv0
>>>> Brick8: gfs04:/data/brick1/gv0
>>>> Options Reconfigured:
>>>> cluster.min-free-disk: 10%
>>>> nfs.disable: on
>>>> performance.readdir-ahead: on
>>>>
>>>> # gluster peer status
>>>> Number of Peers: 7
>>>> Hostname: gfs03
>>>> Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs08
>>>> Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs07
>>>> Uuid: dd699f55-1a27-4e51-b864-b4600d630732
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs06
>>>> Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs04
>>>> Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs02
>>>> Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
>>>> State: Peer in Cluster (Connected)
>>>> Hostname: gfs05
>>>> Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> All gluster nodes are running glusterfs 4.0.2
>>>> The clients accessing the files are also running glusterfs 4.0.2
>>>> Both are Ubuntu
>>>>
>>>> Thanks!

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-11 Thread Patrick Nixon
The files are being written to via the glusterfs mount (and read on the
same client and a different client). I try not to do anything on the nodes
directly because I understand that can cause weirdness.   As far as I can
tell, there haven't been any network disconnections, but I'll review the
client log to see if there's any indication. I don't recall any issues last
time I was in there.

Thanks for the response!

On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur  wrote:

>
>
> On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon  wrote:
>
>> Hello!
>>
>> I have an 8-node distribute volume setup. I have one node that accepts
>> files and stores them on disk, but when doing an ls, none of the files on
>> that specific node are being returned.
>>
>>  Can someone give some guidance on what should be the best place to start
>> troubleshooting this?
>>
>
>
> Are the files being written from a glusterfs mount? If so, it might be
> worth checking if the network connectivity is fine between the client (that
> does ls) and the server/brick that contains these files. You could look up
> the client log file to check if there are any messages related to
> rpc disconnections.
>
> Regards,
> Vijay
>
>
>> # gluster volume info
>>
>> Volume Name: gfs
>> Type: Distribute
>> Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfs01:/data/brick1/gv0
>> Brick2: gfs02:/data/brick1/gv0
>> Brick3: gfs03:/data/brick1/gv0
>> Brick4: gfs05:/data/brick1/gv0
>> Brick5: gfs06:/data/brick1/gv0
>> Brick6: gfs07:/data/brick1/gv0
>> Brick7: gfs08:/data/brick1/gv0
>> Brick8: gfs04:/data/brick1/gv0
>> Options Reconfigured:
>> cluster.min-free-disk: 10%
>> nfs.disable: on
>> performance.readdir-ahead: on
>>
>> # gluster peer status
>> Number of Peers: 7
>> Hostname: gfs03
>> Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
>> State: Peer in Cluster (Connected)
>> Hostname: gfs08
>> Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
>> State: Peer in Cluster (Connected)
>> Hostname: gfs07
>> Uuid: dd699f55-1a27-4e51-b864-b4600d630732
>> State: Peer in Cluster (Connected)
>> Hostname: gfs06
>> Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
>> State: Peer in Cluster (Connected)
>> Hostname: gfs04
>> Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
>> State: Peer in Cluster (Connected)
>> Hostname: gfs02
>> Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
>> State: Peer in Cluster (Connected)
>> Hostname: gfs05
>> Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
>> State: Peer in Cluster (Connected)
>>
>> All gluster nodes are running glusterfs 4.0.2
>> The clients accessing the files are also running glusterfs 4.0.2
>> Both are Ubuntu
>>
>> Thanks!

[Gluster-users] Files on Brick not showing up in ls command

2019-02-10 Thread Patrick Nixon
Hello!

I have an 8-node distribute volume setup. I have one node that accepts
files and stores them on disk, but when doing an ls, none of the files on
that specific node are being returned.

 Can someone give some guidance on what should be the best place to start
troubleshooting this?

# gluster volume info

Volume Name: gfs
Type: Distribute
Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: gfs01:/data/brick1/gv0
Brick2: gfs02:/data/brick1/gv0
Brick3: gfs03:/data/brick1/gv0
Brick4: gfs05:/data/brick1/gv0
Brick5: gfs06:/data/brick1/gv0
Brick6: gfs07:/data/brick1/gv0
Brick7: gfs08:/data/brick1/gv0
Brick8: gfs04:/data/brick1/gv0
Options Reconfigured:
cluster.min-free-disk: 10%
nfs.disable: on
performance.readdir-ahead: on

# gluster peer status
Number of Peers: 7
Hostname: gfs03
Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
State: Peer in Cluster (Connected)
Hostname: gfs08
Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
State: Peer in Cluster (Connected)
Hostname: gfs07
Uuid: dd699f55-1a27-4e51-b864-b4600d630732
State: Peer in Cluster (Connected)
Hostname: gfs06
Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
State: Peer in Cluster (Connected)
Hostname: gfs04
Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
State: Peer in Cluster (Connected)
Hostname: gfs02
Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
State: Peer in Cluster (Connected)
Hostname: gfs05
Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
State: Peer in Cluster (Connected)

All gluster nodes are running glusterfs 4.0.2
The clients accessing the files are also running glusterfs 4.0.2
Both are Ubuntu

Thanks!
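One hedged starting point for scoping a problem like this: compare what is on the affected brick's backing directory with what a client sees through the mount, skipping the brick's internal .glusterfs housekeeping directory. A sketch (both paths are placeholders to substitute for your own brick and mount):

```python
import os

def missing_from_mount(brick_root, mount_root):
    """Return relative file paths present on a brick but absent from the mount.

    Intended to be run on the affected server, with `brick_root` pointing at
    the brick directory (e.g. the /data/brick1/gv0 path above) and
    `mount_root` at a local FUSE mount of the volume. The .glusterfs
    directory on the brick is Gluster-internal and is skipped.
    """
    def listing(root, skip_internal=False):
        found = set()
        for dirpath, dirnames, filenames in os.walk(root):
            if skip_internal and ".glusterfs" in dirnames:
                dirnames.remove(".glusterfs")  # do not descend into internals
            for name in filenames:
                found.add(os.path.relpath(os.path.join(dirpath, name), root))
        return found

    return sorted(listing(brick_root, skip_internal=True) - listing(mount_root))
```

A non-empty result lists exactly the files that exist on disk but are invisible to clients, which narrows the question to that brick's connection or layout rather than the data itself.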