I was in the process of redoing the underlying disk layout for a brick and
triggered a full heal. Then I realized I had skipped the step of applying
zfs set xattr=sa, which is rather important when running ZFS under Linux.
Rather than wait however many hours until my TB of data heals, is there a
command in 3.8 to
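For context, a minimal sketch of the skipped step and of kicking off a full heal; the pool/dataset and volume names are placeholders, not from the original message:

```shell
# Store extended attributes as system attributes in the dnode instead of
# in hidden xattr directories; important for GlusterFS bricks on ZFS-on-Linux.
# Note: xattr=sa only affects xattrs written after the property is set;
# existing files keep their old directory-based xattrs until rewritten.
zfs set xattr=sa tank/brick1        # "tank/brick1" is a placeholder dataset

# Trigger a full self-heal crawl on the volume ("myvol" is a placeholder).
gluster volume heal myvol full

# Check heal progress.
gluster volume heal myvol info
```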
Switch-wise, have a look at the HP FlexFabric 5700-32XGT-8XG-2QSFP+ and the
Cisco SG550XG-24T.
For what it's worth, you can minimise your bandwidth whilst maintaining
quorum if you use arbiters.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
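A sketch of creating an arbiter volume of the kind the linked guide describes; hostnames and brick paths here are placeholder assumptions:

```shell
# replica 3 arbiter 1: the third brick stores only file metadata, so it
# maintains quorum without carrying full data-replication traffic.
gluster volume create myvol replica 3 arbiter 1 \
  server1:/bricks/b1 server2:/bricks/b1 arbiter1:/bricks/b1
gluster volume start myvol
```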
On 26 August
Servers now also come with copper 10Gbit network adapters built into the
motherboard (Dell R730, Supermicro, etc.). But for those that do not, I have
used the Intel X540-T2 adapters with CentOS 7 and RHEL 7.
As for switches, our infrastructure uses expensive Cisco 9XXX series and
FEX expanders,
Prices seem to be dropping online at NewEgg etc., and going from 2 nodes
to 3 nodes for quorum implies a lot more traffic than would be
comfortable on 1G.
Any NIC/Switch recommendations for RH/Cent 7.x and Ubuntu 16?
-wk
___
Gluster-users
If there is interest, I may give a (short) talk
"the life of a consultant listed on gluster.org/support"
about the use cases that we met in the last two years.
2016-08-12 21:48 GMT+02:00 Vijay Bellur :
> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on
Hi,
Could someone please share some advice on tuning / settings for 4 Gluster
servers with 300+ hosts connecting to them?
Listed below is my setup:
4 x Intel 2.4 GHz 12-core servers with 64 GB memory each, running Ubuntu 14.04.
3 bricks in each server created from RAID6 arrays of
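Not from this thread, but a sketch of the kind of volume tunables often adjusted for many-client workloads; the volume name and values are illustrative assumptions, not recommendations:

```shell
# Increase brick-side I/O threads to serve many concurrent clients.
gluster volume set myvol performance.io-thread-count 32

# Enlarge the read cache on a 64 GB host (value is an assumption).
gluster volume set myvol performance.cache-size 1GB

# Cap concurrent background self-heals so heals don't starve clients.
gluster volume set myvol cluster.background-self-heal-count 8
```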
On Fri, Aug 26, 2016 at 3:01 PM, Piotr Rybicki wrote:
>
>
> On 2016-08-25 at 23:22, Joe Julian wrote:
>
>> I don't think "unfortunatelly with no attraction from developers" is
>> fair. Most of the leaks that have been reported against 3.7 have been
>> fixed
Hi,
I have now tried on a test FS to see if I could recreate the issue. First, a
bit more info.
OS: CentOS 6.8
Gluster: 3.7.13 using centos-release-gluster37
Underlying FS (1st report): ZFS 0.6.5.7
Underlying FS (test): EXT4
# gluster vol info test
Volume Name: test
Type: Distribute
Volume ID:
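For anyone wanting to reproduce, a distribute volume like the one above can be set up for testing roughly as follows; the hostname and brick path are placeholder assumptions (the Volume ID is assigned automatically at creation):

```shell
# Minimal single-brick distribute volume for testing.
gluster volume create test server1:/bricks/test/brick1
gluster volume start test
gluster volume info test
```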
On 2016-08-25 at 23:22, Joe Julian wrote:
I don't think "unfortunatelly with no attraction from developers" is
fair. Most of the leaks that have been reported against 3.7 have been
fixed recently. Clearly, with 132 contributors, not all of them can, or
should, work on fixing bugs. New
Hi,
I would like to second this request.
I have Gluster 3.6.6 (which is due for an upgrade real soon now) and 3.6.9 in use
under Debian 7 "Wheezy", which itself is LTS until April 2018.
Whilst I plan to upgrade to 3.7 (or even 3.8) at some point; with the need
to book downtime etc. to
Hi
Both volumes are on entirely different disks, on which I use a ZFS pool...
On Aug 25, 2016 11:57 PM, "Ted Miller" wrote:
> On 08/25/2016 08:11 AM, Gilberto Nunes wrote:
>
> Hello list
>
> I have two volumes, DATA and WORK.
>
> DATA has size 500 GB
> WORK has size 1.2 TB
>
> I can