Is there any update?
On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL wrote:
> Hi,
>
> We are currently experiencing a serious issue with volume space usage by
> glusterfs.
>
> In the outputs below, we can see that the size of the real data in /c
> (glusterfs volume) is nearly 1GB but the “.
On 6 April 2017 at 14:56, Amudhan P wrote:
> Hi,
>
> I was able to add bricks to the volume successfully.
> The client was reading, writing and listing data from the mount point.
> But after adding bricks I had issues with folder listing (not listing all
> folders or returning an empty folder list) and write
Amazing, thanks Amar for creating the issue!
Cheers,
M.
Original Message
Subject: Re: [Gluster-users] Fw: Deletion of old CHANGELOG files in
.glusterfs/changelogs
Local Time: April 6, 2017 4:08 AM
UTC Time: April 6, 2017 2:08 AM
From: atumb...@redhat.com
To: mabi
Mohammed Rafi
We are planning to use GlusterFS for petabyte-scale data storage and
processing.
The client already has a Lustre cluster installed and running for a few years.
We would like to know if any benchmarks are available for a comparison of
GlusterFS vs Lustre.
Some metrics would help us choose the faster option.
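If no published numbers turn up, one way to get comparable metrics is to run
the same fio job against both filesystems. A minimal sketch (the mount point
and job parameters below are made up, not from this thread):

  # run once on the GlusterFS mount, once on the Lustre mount, same parameters
  fio --name=seqwrite --directory=/mnt/testvol \
      --rw=write --bs=1M --size=4G --numjobs=4 \
      --direct=1 --group_reporting
  # drop --direct=1 if the mount rejects O_DIRECT

Repeat with --rw=read and with small block sizes (--bs=4k) to cover both
throughput and IOPS.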
On Thu, Apr 6, 2017 at 7:09 AM, Holger Rojahn wrote:
> Hi,
>
> I asked a question several days ago ...
> In short: is it possible to have 5 bricks with sharding enabled and a
> replica count of 3, to ensure that all files are on 3 of the 5 bricks?
> The manual says I can add any number of bricks to a sharded volume
On 06/04/2017 14:08, Pavel Szalbot wrote:
> Hi, does your stack support MC-LAG (multichassis link aggregation
> group)? If so, configure LAGs with interfaces on both switches and you
> should be fine.
Yes it does, and I thought about using it too, but then all the ports
should be redundant (not
Hi,
If you want replica 3, you must have a multiple of 3 bricks.
So no, you can't use 5 bricks for replica 3; that's one of the
things gluster can't do, unfortunately.
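For example (a minimal sketch; the volume name, hostnames and brick paths are
made up), a replica 3 volume is created and grown three bricks at a time:

  gluster volume create testvol replica 3 \
      server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
  # growing the volume later also takes bricks in multiples of three
  gluster volume add-brick testvol replica 3 \
      server4:/data/brick1 server5:/data/brick1 server6:/data/brick1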
On Thu, Apr 06, 2017 at 01:09:32PM +0200, Holger Rojahn wrote:
> Hi,
>
> I asked a question several days ago ...
> In short: is i
Hi, does your stack support MC-LAG (multichassis link aggregation
group)? If so, configure LAGs with interfaces on both switches and you
should be fine.
-ps
On Thu, Apr 6, 2017 at 12:31 PM, Alessandro Briosi wrote:
> Hi all,
> a feature that would really be nice is network redundancy.
>
> In a c
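To illustrate the MC-LAG suggestion above, a rough host-side sketch (the
interface names, address and the choice of an iproute2-managed bond are mine,
not from the thread); the switch pair must present the two ports as a single
LACP LAG on its side:

  # create an LACP (802.3ad) bond spanning the two stacked switches
  ip link add bond0 type bond mode 802.3ad miimon 100
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip addr add 192.0.2.10/24 dev bond0
  ip link set bond0 up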
Hi,
I asked a question several days ago ...
In short: is it possible to have 5 bricks with sharding enabled and a
replica count of 3, to ensure that all files are on 3 of the 5 bricks?
The manual says I can add any number of bricks to sharded volumes, but when I
test it with a replica count of 2 I can use 4 bricks bu
Hi all,
a feature that would really be nice is network redundancy.
In a current installation I have 3 servers with 2 switches (stacked with
a 10gb uplink). The servers contain VMs.
I have configured the hosts to communicate through the first switch, but
what happens if the switch goes down? Probably
A few questions inline:
On Tue, Apr 4, 2017 at 2:36 PM, Tanmay Mohapatra wrote:
> Hi,
>
> We are planning to use GlusterFS for an application that can potentially
> create about 100,000 volumes (each of quite small capacity).
> At any time, only about 500 of them could be mounted across multiple
Hi,
I was able to add bricks to the volume successfully.
The client was reading, writing and listing data from the mount point.
But after adding bricks I had issues with folder listing (not listing all
folders or returning an empty folder list), and writes were interrupted.
Remounting the volume solved the issue.
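For reference, a hedged sketch of the usual step after add-brick (the volume
name and brick are placeholders); whether it would have avoided the remount
in this case is not confirmed by this thread:

  gluster volume add-brick testvol server4:/data/brick1
  # recalculate directory layouts so existing directories use the new brick
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol status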
Hello Darrell,
I use the GlusterFS mount in oVirt;
during test 1 the oVirt cluster mounts the storage as GlusterFS, which looks like
this on the oVirt node:
10.2.9.135:vol01 3% /rhev/data-center/mnt/glusterSD/10.2.9.135:vol01
And I ran the dd test with direct I/O under the mount point directory, meaning the t
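A minimal sketch of such a direct-I/O dd run under that mount point (the file
name, block size and count are illustrative, not the exact command used in
the test):

  dd if=/dev/zero \
     of=/rhev/data-center/mnt/glusterSD/10.2.9.135:vol01/ddtest.img \
     bs=1M count=1024 oflag=direct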
Hi,
We are currently experiencing a serious issue with volume space usage by
glusterfs.
In the outputs below, we can see that the size of the real data in /c
(glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
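A hedged way to see where that space goes (the brick path is taken from the
mail, the rest is illustrative): entries under .glusterfs are normally hard
links to the brick's real files, so they should not add space on their own;
regular gfid files with a link count of 1 are candidates for orphaned entries.

  cd /opt/lvmdir/c2/brick
  du -sh . .glusterfs
  # regular files under .glusterfs that are not hard-linked to any real file
  find .glusterfs -type f -links 1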