disk can be attached locally? This will be much faster.
>
> Regards,
> Poornima
>
> On Tue, Apr 2, 2019, 12:05 AM Tom Fite wrote:
>
>> Hi all,
>>
>> I have a very large (65 TB) brick in a replica 2 volume that needs to be
>> re-copied from scratch. A
Hi all,
I have a very large (65 TB) brick in a replica 2 volume that needs to be
re-copied from scratch. A heal will take a very long time with performance
degradation on the volume, so I investigated using rsync to do the brunt of
the work.
The command:
rsync -av -H -X --numeric-ids --progress
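For reference, a minimal sketch of what a brick-to-brick copy along these lines
could look like; the host and paths below are placeholders, not the trailing
arguments of the original (truncated) command:

# Copy from the healthy replica's brick, preserving hardlinks (-H) and
# extended attributes (-X), which Gluster needs for its trusted.* metadata.
# Hostname and brick paths are illustrative only.
rsync -av -H -X --numeric-ids --progress \
    root@gluster1:/data/brick1/gv0/ /data/brick1/gv0/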
Hi all,
I have seen this issue as well, on Gluster 3.12.1 (3 bricks per box, 2
boxes, distributed-replicate). My testing shows the same thing -- running a
find on a directory dramatically increases lstat performance. To add
another clue, the performance degrades again after issuing a call to
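A minimal sketch of the cache-warming pass described above, assuming a FUSE
mount path (the path below is a placeholder):

# Walk the tree once so the metadata gets cached; lstat-heavy workloads
# on the same directory are much faster afterwards.
find /mnt/gv0/some/directory > /dev/null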
from a later point in time. The issue was hit
> earlier than the period covered by the log that was sent, so I need the
> logs from an earlier time.
> And along with the entire tier logs, can you send the glusterd and
> brick logs too?
>
> Rest of the comments are inline
>
> On Wed, J
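A rough sketch of collecting the requested logs, assuming the default log
locations (run on each server; the tier daemon's log file name varies by
version, so it is not listed explicitly here):

# glusterd and brick logs live under /var/log/glusterfs by default.
tar czf gluster-logs-$(hostname).tar.gz \
    /var/log/glusterfs/glusterd.log \
    /var/log/glusterfs/bricks/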
, this
avoids the stat() calls on spinning disk, and throughput is much faster for
new file creation, especially for small files.
-Tom
On Tue, Dec 5, 2017 at 2:14 PM, Tom Fite <tomf...@gmail.com> wrote:
> Hi all,
>
> I have a distributed / replicated pool consisting of 2 boxes, with 3
to
update the access count on files are blocked?
On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite <tomf...@gmail.com> wrote:
> The sizes of the files are extremely varied: there are millions of small
> (<1 MB) files and thousands of files larger than 1 GB.
>
> Attached is the
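For anyone wanting to reproduce that kind of size breakdown, a quick sketch
run against a brick (the path is illustrative, not from the original post):

# Count the small (<1 MB) and large (>1 GB) files on one brick.
find /data/brick1/gv0 -type f -size -1M | wc -l
find /data/brick1/gv0 -type f -size +1G | wc -l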
I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB
distributed-replicated volume (2 servers, 3 bricks per server).
I'm seeing IO get blocked across all client FUSE threads for 10 to 15
seconds while the promotion daemon runs. I see the 'glustertierpro' thread
jump to 99% CPU usage on
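On the 3.x tiering feature, the usual mitigation was to throttle how often and
how much the promotion daemon moves per cycle; the option names below are the
standard tier tunables, but the values are examples only, not settings from
this thread:

# Promote less often and cap the data and file count moved per cycle.
gluster volume set gv0 cluster.tier-promote-frequency 1800
gluster volume set gv0 cluster.tier-max-mb 4000
gluster volume set gv0 cluster.tier-max-files 10000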
21, 2017 at 11:44 AM, Tom Fite <tomf...@gmail.com> wrote:
> Sure!
>
> > 1 - output of gluster volume heal info
>
> Brick pod-sjc1-gluster1:/data/brick1/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick1/gv0
> Status: Con
I'm seeing a similar issue. It appears that df on the client is reporting
the *brick* size instead of the total pool size. I think that this started
happening after one of my servers crashed due to a hardware issue.
I'm running a distributed / replicated volume, 2 boxes with 3 bricks each,
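A quick way to see the mismatch described here (mount point, brick paths and
volume name are placeholders); the shared-brick-count check is the usual
suspect for this symptom on 3.12-era volumes and is an assumption on my part,
not something confirmed in this excerpt:

# Compare what the FUSE client reports with what the brick filesystems report.
df -h /mnt/gv0
df -h /data/brick1/gv0 /data/brick2/gv0 /data/brick3/gv0

# Check the generated volfiles for a wrong shared-brick-count value.
grep -r shared-brick-count /var/lib/glusterd/vols/gv0/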
Hi all,
I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks
apiece. Each brick is mounted on a RAID 6 array of eleven 6 TB
disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded
with about 15 TB of data. Clients are connected via FUSE. I'm using
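For context, a layout like the one described (2 boxes, 3 bricks apiece,
replica 2) would be created along these lines; hostnames are placeholders and
the brick paths follow the naming seen elsewhere in these threads:

# Distributed-replicate 3 x 2: each consecutive pair of bricks is a replica
# pair, so every pair spans both boxes.
gluster volume create gv0 replica 2 \
    gluster1:/data/brick1/gv0 gluster2:/data/brick1/gv0 \
    gluster1:/data/brick2/gv0 gluster2:/data/brick2/gv0 \
    gluster1:/data/brick3/gv0 gluster2:/data/brick3/gv0
gluster volume start gv0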
Hi all,
I've hit a strange problem with geo-replication.
On Gluster 3.10.1, I have set up geo-replication between my replicated /
distributed instance and a remote replicated / distributed instance. The
master and slave instances are connected via VPN. Initially the
geo-replication setup was
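For reference, the basic command sequence for such a session on 3.10 looks
like this; the volume and host names are placeholders, and prerequisites such
as passwordless SSH between the sites are assumed to already be in place:

# Create, start and check a geo-replication session from the master volume
# to the slave volume over the VPN link (names are illustrative).
gluster volume geo-replication gv0 slavehost::gv0-slave create push-pem
gluster volume geo-replication gv0 slavehost::gv0-slave start
gluster volume geo-replication gv0 slavehost::gv0-slave status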