| From: "Mohammed Rafi K C" <rkavu...@redhat.com>
| To: "Jon Cope" <jc...@redhat.com>, gluster-users@gluster.org
| Sent: Wednesday, November 8, 2017 3:34:07 AM
| Subject: Re: [Gluster-users] Enabling Halo sets volume RO
=131072)
Thanks in advance,
-Jon
Setup info
CentOS Linux release 7.4.1708 (Core)
4 GCE Instances (2 US, 2 Asia)
1 10GB Brick/Instance
replica 4 volume
Packages:
glusterfs-client-xlators-3.12.1-2.el7.x86_64
glusterfs-cli-3.12.1-2.el7.x86_64
python2-gluster-3.12.1-2.el7.x86_64
glusterfs
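For context, the halo knobs involved here look roughly like this (a sketch only; option names are from the 3.x halo feature, and "halo-vol" is an illustrative volume name):

# Halo replicates synchronously only to bricks within the latency cap,
# so quorum settings interact with how many replicas stay writable.
gluster volume set halo-vol cluster.halo-enabled yes
gluster volume set halo-vol cluster.halo-max-latency 10     # milliseconds
gluster volume set halo-vol cluster.halo-min-replicas 2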
We have a v3.6.5 two node cluster with a distributed-replicate volume (2x2
bricks, everything formatted with ext4 on CentOS 6.6) which regularly omits
some files from directory listings on the client side, and also regularly
duplicates the listing of some other files.
Summary of the issue and
Hello all, I have an 8-node, replicated (4 x 2) volume that has a missing node.
It fell out of the cluster a few weeks ago and since then I've not been able to
bring it back online without killing performance to the volume. After my
initial attempts to bring the node back online failed I tried
Hello all, I’m having a problem with one of my Gluster volumes and would
appreciate some help. My setup is an 8-node cluster set up as 4x2 replication,
with 20TB per node for 88TB total. OS is CentOS 7.1; there is one 20 TB brick per
node on its own XFS partition, separate from the OS. A few
reason to use a full CRM for this versus a simple VIP in something like
keepalived?
Good luck, and let us know how you get on!
Regards,
Jon Heese
From: gluster-users-boun...@gluster.org on behalf of Justin Chin-You justin.chin
Okay, I wasn't reading carefully... Try this:
dd if=/dev/zero of=disk3 count=1 bs=1 seek=50G
That will give you a 50GB thin-provisioned file for iSCSI.
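A quick way to sanity-check the result, assuming GNU dd and coreutils (file name as above):

# Write one byte at the 50G offset; everything before it is a hole.
dd if=/dev/zero of=disk3 count=1 bs=1 seek=50G
ls -lh disk3    # apparent size: ~50G
du -h disk3     # actual allocation: a few KB
# Modern coreutils can do the same in one call:
truncate -s 50G disk3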
Regards,
Jon Heese
On Apr 13, 2015, at 7:02 PM, Jon Heese
jonhe...@jonheese.com wrote:
Cong,
Try adding seek
Cong,
Try adding seek=2G (I think) to your dd command and change the bs and count
both to 1. This will essentially thin-provision your iSCSI volume file.
I use this method to make iSCSI volumes that live on gluster.
Regards,
Jon Heese
On Apr 13, 2015, at 5:23 PM, Yue, Cong
cong_
for EL7-based OSes.
Do I have to build the module myself for tgtd on CentOS 6? If so, do
you have instructions to do so? Thanks.
Regards,
Jon Heese
On 4/1/2015 4:21 PM, Dan Lambright wrote:
Incidentally, for all you iSCSI-on-gluster fans: gluster has a plugin for
LIO and the target daemon
go this route, be sure to configure the iSCSI initiator(s)' multipath to
be active/passive (or similar), as my testing with round-robin produced very
poor performance and data corruption.
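A minimal sketch of that initiator-side setting, assuming dm-multipath on Linux (device-specific sections will vary by target):

# /etc/multipath.conf
defaults {
    # active/passive: use one path at a time, fail over on error
    path_grouping_policy failover
    failback             immediate
}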
Regards,
Jon Heese
From: gluster-users-boun...@gluster.org
used
128KB pages in the cache?
Thanks again.
Regards,
Jon Heese
On Mar 12, 2015, at 1:20 PM, Anand Avati
av...@gluster.org wrote:
The cache works by remembering 128KB pages within files. Effectively blocks
in your terminology.
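For anyone tuning this, the relevant io-cache knobs look roughly like the following ("myvol" is a placeholder; defaults vary by release):

# Total memory io-cache may use for its 128KB pages:
gluster volume set myvol performance.cache-size 256MB
# Seconds a cached page is trusted before revalidation:
gluster volume set myvol performance.cache-refresh-timeout 1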
Thanks
On Wed, 11 Mar 2015 at 12:36 Jon
any of this is supposed to work, please feel
free to correct me. Thanks in advance!
Regards,
Jon Heese
Many thanks, I don't understand how it was fixed, but it is working now.
Greetings Atin
2014-12-23 4:55 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
On 12/22/2014 09:38 PM, Jon Colás Gómez wrote:
i see this in host2
[2014-12-22 16:06:19.126216] I [glusterd-handler.c:448:gf_cli3_1_create_volume_cbk] 0-cli: Received resp to create volume
[2014-12-22 12:34:32.28296] I [input.c:46:cli_batch] 0-: Exiting with: 1
thanks!!
--
Jon Colás Gómez
:glusterd_op_unlock_send_resp] 0-glusterd: Responded to unlock, ret: 0
Greetings,
2014-12-22 16:35 GMT+01:00 Atin Mukherjee amukh...@redhat.com:
Could you provide the log snippet of host2 machine?
Did you use '*' in the brick path? If so, that's not correct.
~Atin
On 12/22/2014 06:57 PM, Jon Colás Gómez
and Merry Xmas ;-P
2014-12-17 7:05 GMT+01:00 Kaushal M kshlms...@gmail.com:
Hey Jon,
What version of GlusterFS are you using? The ability to change a
volume's replica count was introduced in version 3.3.
~kaushal
On Wed, Dec 17, 2014 at 10:52 AM, Atin Mukherjee amukh...@redhat.com
wrote
I have a production environment with a volume replicated across two nodes in
replica 2.
I want to update the replica count from 2 to 3 (add another node).
I have tried:
# gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs
wrong brick type: replica, use HOSTNAME:export-dir-abs-path
#
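For later readers: on 3.3 or newer the quoted command is the right shape; a sketch of the whole sequence, using the names from this thread:

glusterfs --version                  # changing replica count needs >= 3.3
gluster peer probe host03            # the new node must be in the trusted pool
gluster volume add-brick gluster_data replica 3 host03:/data/glusterfs
gluster volume info gluster_data     # should now report 1 x 3 = 3 bricks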
Hello, I was wondering if there has been any progress on reproducing this error
or if there is any more info I can provide.
Thanks, Jon
Hello, thank you for your reply.
I tried disabling the stat-prefetch parameter; it had no effect. Something to
note: writes to this volume from other Windows clients are working correctly
and not having case problems.
[prog@xvm-10-66 ~]$ testparm -v | grep case
default case = lower
Hello, I first posted this question to the IRC but it was suggested I also post
here for better visibility. I am currently having a bit of trouble with
Gluster, Samba, and the VFS plug-in between them. I am also posting to the
Samba mailing list.
The behavior I am seeing is that by using the
Any ideas on release date of the RPMs which contain this fix?
Thanks
Jon
On 25/07/14 08:59, Poornima Gurusiddaiah wrote:
Hi Jon,
I believe the bug is fixed as a part of patch
http://review.gluster.org/#/c/8374/.
But this patch(fix) is not in glusterfs-api-3.5.1-1.el6.x86_64, I have posted
I'll wait patiently for release then..
Thanks!
Jon
On 2014-07-25 08:59, Poornima Gurusiddaiah wrote:
Hi Jon,
I believe the bug is fixed as a part of patch
http://review.gluster.org/#/c/8374/.
But this patch(fix) is not in glusterfs-api-3.5.1-1.el6.x86_64, I have
posted the same for 3.5-2
vfs objects = glusterfs
glusterfs:volume = testvol
glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
glusterfs:loglevel = 7
On 22/07/14 14:26, Lalatendu Mohanty wrote:
On 07/21/2014 02:33 PM, Jon Archer wrote:
Hi Lala,
Thanks for your response (here and on your blog), I did try removing
Hi Daniel,
I've tried mounting (via fuse) the gluster volume and it will present
via samba just fine.
Jon
On 2014-07-18 07:03, Daniel Müller wrote:
Then try the following: just mount your gluster vol on centos. Do not use
the vfs!!! Just point your path to the mounted glusterfs.
And try
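A minimal sketch of that workaround, assuming the testvol volume mentioned earlier (mountpoint is illustrative):

# Mount via the FUSE client instead of vfs_glusterfs...
mount -t glusterfs localhost:/testvol /mnt/testvol
# ...then point the smb.conf share at the mountpoint, e.g.
# "path = /mnt/testvol", and drop the "vfs objects = glusterfs" line.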
Hi Niels,
Thanks for your response, I did actually restart the volumes after I
added the volume option.
Although, I tried again after seeing this email but still no joy.
Jon
On 2014-07-18 17:53, Niels de Vos wrote:
On Thu, Jul 17, 2014 at 03:34:34PM +0100, Jon Archer wrote:
Yes I can mount
Hi Lala,
Thanks for your response (here and on your blog); I did try removing the
valid users statement but still no luck. Although I would imagine the
valid users statement should work, otherwise how would we control
access?
Jon
On 2014-07-18 10:59, Lalatendu Mohanty wrote:
On 07/17/2014
:/gluster/bricks/share/brick1
Options Reconfigured:
server.allow-insecure: on
and this is then added as a share in samba:
[share]
comment = Gluster and CTDB based share
path = /
read only = no
guest ok = yes
valid users = jon
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:volume = share
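One way to exercise the share above end-to-end without any fuse mount (user and share names are from the config; the smbd log path is a guess, since no glusterfs:logfile is set here):

# Talk to smbd directly, going through vfs_glusterfs:
smbclient //localhost/share -U jon -c 'ls'
# Watch the logs while testing (exact path depends on your smb.conf):
tail -f /var/log/samba/log.smbd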
Yes I can mount the gluster volume at the shell and read/write from/to
it so there is no issue with Gluster or the volume. It seems to be
between samba and gluster from what I can gather.
Cheers
Jon
On 2014-07-17 15:04, Daniel Müller wrote:
With samba 4.1.7 on centos 6.5, glusterfs
For reference, I am running CentOS 6.5 but have also tried this on
Fedora 20 with the exact same results.
Jon
On 2014-07-17 15:34, Jon Archer wrote:
Yes I can mount the gluster volume at the shell and read/write from/to
it so there is no issue with Gluster or the volume. It seems
I managed to add the brick by using the force flag, i.e.,
gluster volume add-brick gluster s1:/mnt/raid6 force
Hopefully there are no drawbacks involved with this...
/jon
On 19/03/14 12:17, teg...@renget.se wrote:
Hi,
One of my bricks suffered from complete raid failure (3 disks on
raid6
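One caveat for anyone finding this later: after force-adding a brick back into a replica set, it is worth kicking off a full self-heal so the empty brick gets repopulated (volume name as used above):

gluster volume heal gluster full
gluster volume heal gluster info    # watch the pending-heal count drain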
capriotti.car...@gmail.com
To: Jon Cope jc...@redhat.com
Cc: gluster-users@gluster.org
Sent: Tuesday, March 4, 2014 4:29:31 PM
Subject: Re: [Gluster-users] glusterd service fails to start from AWS AMI
I don't want to sound simplistic, but it seems to be name resolution/network
related.
Again, I DO know
Hello all.
I have a working replica 2 cluster (4 nodes) up and running happily over Amazon
EC2. My end goal is to create AMIs of each machine and then quickly reproduce
the same, but new, cluster from those AMIs. Essentially, I'd like a cluster
template.
-Assigned original instances'
), preventing AWS from changing it
during reboot. Querying the public DNS from inside EC2 returns the private IP
addresses, while a query from outside EC2 returns the elastic IP. Gluster
seems happy with this, so I am too.
Regards,
Jon
http://alestic.com/2009/06/ec2-elastic-ip-internal
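In sketch form, the trick is to peer (and define bricks) by public DNS names rather than raw IPs; the hostname below is illustrative:

# Inside EC2 this resolves to the private IP; outside, to the elastic IP.
gluster peer probe ec2-203-0-113-10.compute-1.amazonaws.com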
instead of rdma).
3. Just recently I upgraded to 3.4.2-1, and started a
gluster volume rebalance glusterKumiko fix-layout start (since I have a
lot of 'disk layout missing' and 'mismatching layouts' messages in the logs).
Regards, and thanks!
/jon
Hi all,
I'm attempting to create a 4-node cluster on EC2. I'm fairly new to this
and so may not be seeing something obvious.
- Established passwordless SSH between nodes.
- edited /etc/sysconfig/network HOSTNAME=node#.ec2 to satisfy FQDN
- mounted xfs /dev/xvdh /mnt/brick1
- stopped
Hi All,
I'm trying to configure a Gluster/Hadoop volume in a 4 node EC2 cluster using
the automated configure process in:
rhs-hadoop-install-0_65-2.el6rhs.noarch.rpm
rhs-hadoop-2.1.6-2.noarch.rpm
command ./install /dev/SomeDevice
I begin with 4 nodes, each with an attached and formatted EBS
, how to bring them in under gluster?
Regards,
/jon
be a good inclusion to get a simple Gluster setup
working.
I'll be keeping a close eye on this one as it would also be great for
events.
Cheers
Jon A
On 01/11/13 20:25, John Mark Walker wrote
Greetings,
One of the best things I've seen at conferences this year has been a bookmark
!
/jon
On Jul 15, 2013 18:38 Vijay Bellur vbel...@redhat.com wrote:
Hi All,
3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS
3.4.0
can be downloaded from [1]
and release notes are available at [2]. Upgrade instructions can be
found at [3].
If you would like to propose
they are filled up to about 90%.
My guess is that it could have something to do with the fact that the
two disks in question were populated already when I installed gluster
on them, and that those files are not accounted for?
Regards, and thanks!
/jon
Maybe my question was a bit involved, I'll try again:
while searching the web I have found various issues connected to
cluster.min-free-disk (e.g., one shouldn't use % but rather a size
number). Would it be possible to get an update on the status?
Thanks,
/jon
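For reference, the option accepts either form; an illustrative example with a placeholder volume name:

# Percentage of brick capacity to keep free...
gluster volume set myvol cluster.min-free-disk 10%
# ...or an absolute size instead:
gluster volume set myvol cluster.min-free-disk 100GB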
On Jun 11, 2013
Hi, I have a system using 3.2.6-1, running ext4. I recently expanded the
system from 4 to 5 servers. I have NOT yet done re-balancing of the
system - so at the moment most of the writing of new files goes to the
new server.
My impression is that it would be possible to rebalance while the
it would be advantageous
from a performance perspective - can one say something general regarding
this?
Regards,
/jon
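The commands in question, for anyone following along (placeholder volume name):

# Fix directory layouts so new files can land on the new server...
gluster volume rebalance myvol fix-layout start
# ...then optionally migrate existing data, and watch progress:
gluster volume rebalance myvol start
gluster volume rebalance myvol status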
I answered the same question from a forum user in the original link:
http://forums.overclockers.com.au/showpost.php?p=15234327postcount=14
GlusterFS's main bottleneck is rarely local
it be an issue to have bricks using different
underlying file systems?
In both these cases I intend to use 3.2.6.
Regards,
/jon
this be achieved?
Thanks,
/jon
On 09/23/2012 12:06 PM, Jon Tegner wrote:
Hi (again),
have four gluster (3.2.6) servers on which I want to downgrade the OS
(from CentOS-6 to CentOS-5). Want to keep the file system (with
raid10/ext4 on the servers), and I contemplate the following method:
Keep all
the current installation, and after downgrading the OS (where I don't
touch the raids) and installing gluster, see to it that these configuration
files are identical to how they were before.
Will this bring up the file system as it was before the downgrade? Or am I
missing something here?
Thanks,
/jon
, things appear to
work. Is it safe to ignore this?
Thanks again,
/jon
On Sep 17, 2012 08:05 Vijay Bellur vbel...@redhat.com wrote:
On 09/17/2012 12:39 AM, Jon Tegner wrote:
Have a volume consisting of 4 bricks. It was set up using infiniband
with
Transport-type: tcp,rdma
We have been happily running gluster for a couple of years now; however,
lately we have encountered issues.
The issues are rather vague, but included a lot of messages about page
allocation failure, and spontaneous reboots of ONE of the servers (we
have four).
We are using 3.2.6, on
After restarting the services, the error messages disappeared, problem
solved ;-)
/jon
On Sep 17, 2012 10:22 Jon Tegner teg...@renget.se wrote:
Thanks!
We see a lot of errors of the type
E [rdma.c:4417:tcp_connect_finish] 0-glusterStore2-client-2: tcp connect to failed
, but I haven't been able
to figure out how to achieve this (is it just by removing rdma from
the volume.brick.mount.vol-files on the 4 bricks)?
Thanks,
/jon
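For the archives: later releases grew a supported way to switch transports without hand-editing volfiles. A sketch (it requires stopping the volume):

gluster volume stop glusterStore2
gluster volume set glusterStore2 config.transport tcp
gluster volume start glusterStore2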
Hi,
have a volume, consisting of two bricks:
Volume Name: glusterStore2
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp,rdma
Bricks:
Brick1: toki:/mnt/raid10
Brick2: yoshie:/mnt/raid10
Now, I would like to extend this volume, and I have two suitable
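Extending a distribute volume is a plain add-brick followed by a rebalance; a sketch with illustrative hostnames:

gluster volume add-brick glusterStore2 host3:/mnt/raid10 host4:/mnt/raid10
gluster volume rebalance glusterStore2 start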
Hi, I have seen that there are issues with gluster on ext4. Just to be
clear, is this something which is only related to clients using nfs,
i.e., can I happily use gluster (without downgrading kernel) if all
clients are using gluster native client?
Thanks,
Hi, I'm a bit curious about error messages of the type 'remote operation
failed: Stale NFS file handle'. All clients using the file system use the
Gluster Native Client, so why should a stale NFS file handle be reported?
Regards,
/jon
Hi,
have a gluster file system running on four bricks, it seems to be
running OK (can mount it, and files are visible and can be accessed).
However, when starting glusterd on the bricks I get errors of the type:
E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown
key:
Hi,
I want to mount from two different gluster-filesystems, according to
the following lines in fstab:
server1:glusterStore1___/home1__glusterfs___defaults,_netdev,transport=rdma___0_0
server2:/glusterStore2___/home2__glusterfs___defaults,_netdev,transport=rdma___0_0
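De-mangled (each run of '_' above stands for whitespace, except the one inside the _netdev option), those lines would read roughly:

server1:glusterStore1   /home1  glusterfs  defaults,_netdev,transport=rdma  0 0
server2:/glusterStore2  /home2  glusterfs  defaults,_netdev,transport=rdma  0 0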
However, when
don't think there are any suspect symlinks.
Thanks!
/jon
On Jul 5, 2012 15:57 Harry Mangalam hjmanga...@gmail.com wrote:
Do you have some dangling symlinks?
/home -> /home2 (or vice versa)
ie
ls -ld /home*
what does 'mount' or /etc/mtab say?
(assuming that the '_' are supposed to be spaces)
;-) doing just that (with software raid) on a
backup file system in an HPC environment.
Regards,
/jon
Sorry for a stupid question, but would there be issues using glusterfs
based on several 11 TB ext4-bricks?
/jon
On 09/24/2011 09:26 PM, Anand Babu Periasamy wrote:
On Sat, Sep 24, 2011 at 12:14 PM, Liam Slusser lslus...@gmail.com
wrote:
I have a very large
Nope, measured nothing! Had hoped someone had already done it!
Plan would be to play around with an infiniband switch and a bunch of
nodes, testing hard drives, SSDs and RAM disks. Whenever I get the
time...
Regards,
/jon
On 05/27/2011 08:55 PM, Berend de Boer wrote:
Jon == Jon
On 05/27/2011 04:31 PM, Joe Landman wrote:
On 05/27/2011 07:12 AM, Jon Tegner wrote:
A general question, suppose I have a parallel application, using mpi,
where really fast access to the file system is critical.
Would it be stupid to consider a ram disk based setup? Say a 36 port QDR
Ram
Seeing a glusterfs client die oddly.
--Setup--
Client:
Fedora 12 2.6.32.16-141.fc12.x86_64
# rpm -qa |egrep 'fuse|glust'
fuse-2.8.4-1.fc12.x86_64
glusterfs-client-3.0.5-1.fc11.x86_64
fuse-libs-2.8.4-1.fc12.x86_64
glusterfs-common-3.0.5-1.fc11.x86_64
Servers - 6 nodes with a 3 x distribute:
I have a really simple glusterfs setup.
Used
glusterfs-volgen --name glusterStore --transport tcp host1:/mnt/raid10
host2:/mnt/raid10
to create the necessary files. And mounted the system with
glusterfs --volfile=/etc/glusterfs/glusterfs.vol /mnt/glusterfs/
on the clients.
Hi,
I've only recently started playing with glusterfs. My setup consists of
two servers (noriko and kumiko), each with twelve 1TB disks, raided
together in raid10.
The systems have CentOS-5.5 installed, and I have
installed glusterfs-3.0.4-1 (client, common and server).
I have generated
is the way to go (rather than
striped), although for
Regards,
/jon
On 04/25/2010 02:24 PM, Tomasz Chmielewski wrote:
Am 25.04.2010 23:05, Jon M. Skelton wrote:
I'm currently doing this. Ubuntu 10.04 (beta) using glusterfs to mirror
qcow2 KVM machine images. Works quite well. In both your crashing cases,
things look much like when a VM gets 'virsh destroy'. It's
Hello,
First off, thanks again for providing gluster. Awesome project.
This is a n00bish question. I thought that gluster goes through the VFS
like any other filesystem, which is where most of the filesystem
caching takes place. (Somewhat simplified.)
I'm seeing a major difference in
| 241.220 MB/s | 11.6 % | 543.4 % |
There's no way it's getting 241 MB/s over gigabit with Random Read. I'm
sure there's a reason for this, just curious as to what it is.
On 04/02/2010 04:29 PM, Marcus Bointon wrote:
On 2 Apr 2010, at 09:10, Jon Swanson wrote
if at all possible though.
Is there a syntax for providing an fstab line for a gluster mount that
will allow the gluster client to try multiple hosts in the event one is
down?
Thanks,
jon
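For anyone finding this later: the native client grew a mount option for exactly this (spelling varies by release between backupvolfile-server and backup-volfile-servers; names below are illustrative):

server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0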
to
maintain a volume definition on every single client.
There's a big chance I'm misunderstanding something here, and I
wholeheartedly welcome any corrections.
Thanks,
jon
Note: one solution is to just have multiple lines in the fstab. This
works but is very hacky and generates notifications