to Slave
node.
# mount | grep zdata
zdata on /zdata type zfs (rw,nosuid,noexec,noatime,xattr,noacl)
zdata/cicd on /zdata/cicd type zfs (rw,nosuid,noexec,noatime,xattr,posixacl)
Then I mounted the volume locally via glusterfs, and local changes started going to the Slave node
too.
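A note for anyone hitting the same symptom: geo-replication only records changes that pass through a glusterfs client mount of the master volume, so writes made directly to the brick filesystem (the zfs dataset here) are not propagated. A minimal sketch of the working path, with volume and host names as placeholders:
mount -t glusterfs master1:/mastervol /mnt/mastervol
cp testfile /mnt/mastervol/
gluster volume geo-replication mastervol slave1::slavevol status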
BR, Alexey
From: Kotresh
Could someone help me with debug of this geo-rep setup?
Thank you!
BR, Alexey
I have two clusters with dispersed volumes (2+1) with geo-replication.
It works fine as long as I use glusterfs-fuse, but as soon as even one file is written over
nfs-ganesha, the replication session goes to Faulty and recovers only after I remove this file
(sometimes only after a stop/start).
I think nfs-ganesha writes the file in some way
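A hedged first step for debugging a Faulty geo-replication session is to look at its detailed status and at the gsyncd logs on the master; the volume and host names below are placeholders:
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail
less /var/log/glusterfs/geo-replication/MASTERVOL/*.log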
Hi,
Could you help me?
I have a problem with a file on a disperse volume. When I try to read it from
the mount point I receive an error:
# md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error
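An Input/output error on a disperse (2+1) volume usually means that fewer than two healthy fragments of the file are available. A sketch of the first checks, assuming the volume is named glfs (from the mount path) and using a placeholder brick path:
gluster volume heal glfs info
# on each brick, inspect the erasure-coding xattrs of the affected file
getfattr -d -m . -e hex /path/to/brick/vmfs/slake-test-bck-m1-d1.qcow2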
Configuration and status of volume is:
Hi, community
Please help me with my problem.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status:
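For a layout like the one described (each node's brick1 paired with the other node's brick0), the create command would presumably have looked roughly like the sketch below; the hostnames and brick paths are assumptions. In a replica-2 distributed-replicate volume, consecutive bricks form the replica pairs, so the ordering decides which brick mirrors which:
gluster volume create gm0 replica 2 transport tcp \
    node1:/bricks/brick1/gm0 node0:/bricks/brick0/gm0 \
    node0:/bricks/brick1/gm0 node1:/bricks/brick0/gm0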
Hi, community
I have a large distributed-replicated GlusterFS volume that contains a
few hundred VM images. There is a 20 Gb/s link between the servers.
When I start operations like healing or removing, storage
performance drops sharply for a few days and the server load looks like
this:
13:06:32
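One hedged suggestion for the load during heals: self-heal activity can be throttled with volume options such as the ones below (a sketch, assuming the volume is gm0; the values are only illustrative):
# fewer concurrent background heals and the lighter diff-based heal algorithm
gluster volume set gm0 cluster.background-self-heal-count 4
gluster volume set gm0 cluster.data-self-heal-algorithm diff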
Hello everybody,
Please help me fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays that are replicated between the servers.
Brick gl1:/mnt/brick1/gm0    49153    0    Y    13910
Brick gl0:/mnt/brick0/gm0
undergoing heal
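To see how far the heal of that brick has progressed, the usual checks are (a sketch; the volume name gm0 is taken from the brick paths):
gluster volume heal gm0 info
gluster volume status gm0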
Best regards,
Alexey
2014-12-28 5:51 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
On 12/25/2014 08:05 PM, Alexey wrote:
Hi all,
We are using a glusterfs setup with quorum turned on and the
configuration as follows:
Nodes: 3
Type: Replicate
Number of Bricks: 1 x
Alexey Panevin
: 3.5.3
Although quorum is turned on, we still sometimes encounter a split-brain
after shutting down one node or all nodes together.
Is this normal behavior? What conditions could lead to this, and how can we
prevent split-brain occurrences?
BR,
Alexey
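For reference, a hedged sketch of the quorum options usually reviewed in a 3-node replicate setup (VOLNAME is a placeholder). With client quorum set to auto, writes should be rejected rather than allowed to diverge when only one brick is reachable:
gluster volume set VOLNAME cluster.quorum-type auto
gluster volume set VOLNAME cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%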
Hello
I have 3 servers and 1 client server.
The 3 servers each have a /data folder 300 GB in size.
Can you tell me the best combination to create a volume?
gluster volume create my_volume replica 3 transport tcp node1:/data node2:/data
node3:/data
Is this OK for best performance?
Thx
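Assuming the volume is created as above, a client would mount it roughly like this (a sketch; backupvolfile-server is optional and lets the mount fall back to another node if node1 is down):
mount -t glusterfs -o backupvolfile-server=node2 node1:/my_volume /mnt/my_volume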
Hello
I created a volume with the following command:
gluster volume create opennebula replica 3 transport tcp node1:/data
node2:/data node3:/data
# gluster volume info
Volume Name: opennebula
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2:
It seems the issue was in
auth.allow: 176.126.164.0/22
Hmmm... the client and the nodes were from this subnet.. anyway, I
recreated the volume and now everything looks good
thx
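For what it's worth, auth.allow can normally be changed on a running volume instead of recreating it; a sketch, with an illustrative address list:
gluster volume set opennebula auth.allow "192.168.1.*,176.126.*"
gluster volume info opennebula | grep auth.allow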
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC -
Yeah. sorry :)
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
Hello, again
Something is wrong with my gluster install.
OS: Debian
# cat /etc/debian_version
7.6
Package: glusterfs-server
Versions:
3.2.7-3+deb7u1
Description: I have 3 servers with bricks (192.168.1.1 -
node1,192.168.1.2 - node2, 192.168.1.3 - node3)
The volume was created by:
gluster volume create
Yes, Roman is correct. Also, if you have lots of random IO you're better
off with many smaller SAS drives. This is because the greater the number of
spindles you have, the greater your random IO throughput is. This is also why we went
with SSD drives, because SAS drives weren't cutting it on the random IO
Hi Ryan,
I think if you could provide more info on the storage systems it would
help - things like total drives per RAID set and the size of each drive. This
is a complicated question, but a quick Google search brings up this
interesting article:
Changelog?
On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY kkeit...@redhat.com
wrote:
RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide) are
now available in YUM repos at
http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST
There are also RPMs available for
And, I found it myself:
https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.4.md
On Mon, Jun 16, 2014 at 11:13 PM, Alexey Zilber alexeyzil...@gmail.com
wrote:
Changelog?
On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY kkeit...@redhat.com
wrote:
RPMs for el5
Hi All,
I'm having a horrid time getting gluster to create a volume. Initially,
I messed up a path and had the error mentioned here:
http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
I fixed it, restarted gluster on both nodes, and then I just get a
found.. state: 3
-Alex
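For reference, the fix described in that blog post is roughly the following, run against the brick directory (the path is a placeholder) before restarting glusterd:
setfattr -x trusted.glusterfs.volume-id /path/to/brick
setfattr -x trusted.gfid /path/to/brick
rm -rf /path/to/brick/.glusterfs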
On Sat, Jun 14, 2014 at 6:32 PM, Vijay Bellur vbel...@redhat.com wrote:
On 06/14/2014 11:38 AM, Alexey Zilber wrote:
Hi All,
I'm having a horrid time getting gluster to create a volume.
Initially, I messed up a path and had the error mentioned here:
http
If you do not want to use rsync, you may use lsyncd - I'm using lsyncd to
sync two nodes in real time.
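A minimal sketch of such an lsyncd invocation (source directory and target host are placeholders); it watches the source with inotify and ships changes over rsync+ssh:
lsyncd -nodaemon -rsyncssh /data node2 /data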
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
Here is the result of iozone:
Record Size 4 KB
File size set to 4 KB
Command line used: ./iozone -l 2 -u 2 -r 4k -s 4k /storage/
Output is in Kbytes/sec
Time Resolution = 0.01 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache
Here are more results:
250 writers and readers
./iozone -l 250 -u 250 -r 4k -s 4k /storage/
Iozone: Performance Test of File I/O
Version $Revision: 3.420 $
Compiled for 64 bit mode.
Build: linux
Contributors:William Norcott, Don
I do not think the issue is with the network.
I have just tested: mount -t nfs gluster:/volume /mnt
and the read speed is very high on small files:
for example:
Write
10+0 records in
10+0 records out
20480 bytes (20 kB) copied, 0.0357166 s, 573 kB/s
read
10+0 records in
10+0 records out
20480 bytes (20 kB)
Hello, sorry for the delays with answers.
I have a 1 Gb network based on Intel(R) PRO/1000 PCI-E cards.
I'm waiting for Intel(R) PRO/1000 cards with 2 Gb ports (PCI-e)..
I suggest trying larger block sizes and higher I/O thread count
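Following that suggestion, a hedged example that mirrors the earlier invocations but with a larger record size, a larger file size and more streams (the values are only illustrative):
./iozone -l 8 -u 8 -r 1m -s 1g /storage/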
Should I make some tweaks on the Gluster nodes (peers) and the client? Can you
give me a
root@ispcp:~# mount -vv -t nfs -overs=3 192.168.15.165:/storage /storage
mount.nfs: Unknown error 521
root@ispcp:~#
root@ispcp:~# mount.nfs -v -overs=3 192.168.15.165:/storage /storage
mount.nfs: Unknown error 521
root@ispcp:~#
please note that :
apt-cache showpkg nfs-client
Package: nfs-client
Sorry, it was my mistake:
/storage was already mounted by glusterfs.
Thank you for your help
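A hedged note for anyone hitting the same error: check whether the target directory is already a mount point before mounting over it, for example (options are illustrative; Gluster's built-in NFS server only speaks NFSv3):
mount | grep /storage
umount /storage
mount -t nfs -o vers=3,nolock 192.168.15.165:/storage /storage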
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
Hello
How do I know which peer (brick) the client is working with right now?
Thank you
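A native (FUSE) client keeps a connection to every brick of the volume, so there is no single "current" peer; reads are served from one replica per file while writes go to all replicas. To see the actual connections you can check the client's sockets or its mount log (paths and log text are approximate; the log file is named after the mount point):
ss -tnp | grep glusterfs
grep -i "connected to" /var/log/glusterfs/mnt-glfs.log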
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
Thank you, :)
---
Senior System Administrator
Alexey Shalin
Hoster kg LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg
Hello, guys
I wrote a small script:
#!/bin/bash
# write 1000 files of 1-5 MB (random size) onto the volume
for i in {1..1000}; do
    size=$((RANDOM%5+1))
    dd if=/dev/zero of=/storage/test/bigfile${i} count=1024 bs=${size}k
done
This script creates files of different sizes on the volume.
here is output:
2097152 bytes (2.1 MB) copied, 0.120632 s, 17.4 MB/s
For example, the same client wrote a file to an NFS share:
root@ispcp:/mnt# dd if=/dev/zero of=./bigfile${i} count=1024 bs=10k
1024+0 records in
1024+0 records out
10485760 bytes (10 MB) copied, 0.133489 s, 78.6 MB/s
much faster :(
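One hedged caveat on that comparison: without a sync the NFS client may only be writing into its page cache, while the glusterfs mount behaves more synchronously; adding conv=fsync to both runs makes the numbers more comparable, for example:
dd if=/dev/zero of=/storage/test/bigfile count=1024 bs=10k conv=fsync
dd if=/dev/zero of=/mnt/bigfile count=1024 bs=10k conv=fsync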
cat /etc/mtab
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
root@ispcp:~# mount -t nfs 192.168.15.165:/storage /storage
mount.nfs: Unknown error 521
root@ispcp:~#
[2013-08-17 04:09:46.444600] E [nfs3.c:306:__nfs3_get_volume_id]
(--/usr/lib/x86_64-linux-gnu/glusterfs/3.4.0/xlator/nfs/server.so(nfs3_getattr+0x18c)
[0x7f33126acf5c]
Hello
And again.. after some time, rsync stopped working.
In storage.log I see a lot of messages like:
[2013-08-16 02:56:08.025068] I
[afr-self-heal-common.c:1213:afr_sh_missing_entry_call_impunge_recreate]
0-storage-replicate-0: no missing files -
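A hedged next step when replicate self-heal messages flood the log like that is to check the heal queue and any split-brain entries (the volume name storage is assumed from the log name):
gluster volume heal storage info
gluster volume heal storage info split-brain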