. There are also lots of small files. So
how critical is this rebalance function? What will happen if I stop
it and never complete it at all? What is the point of rebalance? I don't mind
if the second storage server is only used once the first one is full.
--
Best regards,
Roman.
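(For context: rebalance only migrates existing files so that new bricks
are used evenly; stopping it mid-way is safe, the files simply stay on
the bricks where they already are. A minimal sketch of the controls,
volume name hypothetical:)
# gluster volume rebalance myvol status
# gluster volume rebalance myvol stop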
images on different
replicated volumes. I mean, I'm not sure whether it is wrong or right; I'm
just curious about it.
--
Best regards,
Roman.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
On Wed, May 9, 2018 at 3:09 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> On 05/09/2018 09:02 AM, Roman Serbski wrote:
>> On Wed, May 9, 2018 at 1:22 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com>
>> wrote:
>>>
>>> That's certainly true.
On Wed, May 9, 2018 at 1:22 PM, Kaleb S. KEITHLEY wrote:
>
> That's certainly true. The issue hasn't been fixed in any version of
> gluster yet.
>
> You can help moving it along by voting +1 on
> https://review.gluster.org/19974
Will do -- thanks!
> Based on that gdb bt you
On Tue, May 8, 2018 at 11:34 AM, Roman Serbski <mefysto...@gmail.com> wrote:
> # gdb gluster
> GNU gdb 6.1.1 [FreeBSD]
> Copyright 2004 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are
> welcome to change it and
On Mon, May 7, 2018 at 9:19 PM, Kaleb S. KEITHLEY wrote:
>
> See https://review.gluster.org/19974
Many thanks Kaleb.
Your patch did the trick and I did manage to compile; however, I get a
Segmentation fault when trying to execute gluster.
I'm using the following options to
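(A minimal way to capture the backtrace that gets asked for next in the
thread; the "volume info" argument is only an example:)
# gdb gluster
(gdb) run volume info
(gdb) bt full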
Hello,
Has anyone managed to successfully compile the latest 3.13.2 under
FreeBSD 11.1? ./autogen.sh and ./configure seem to work but make
fails:
Making all in src
CC glfs.lo
cc: warning: argument unused during compilation: '-rdynamic'
[-Wunused-command-line-argument]
cc: warning:
Hello,
I have currently been running glusterfs 3.6.5 on my glusterfs solution for about a year.
I would like to upgrade it to some more recent version.
Which one is it safe to upgrade to: 3.10 or 3.12? And can I upgrade in
a single jump, or do I have to do it some other way?
--
Best regards,
Roman
to the second server also?
How will the data flow in this scenario?
client -> gluster1 -> gluster2
or
client -> gluster2 ?
I hope I do not have to set up a Samba share on the second server. In case I
have to, what would it look like for Windows clients?
--
Best regards,
Roman
on
volumes? I will not use tiering with SSDs. The network is going to be
10 Gbps. Any advice on this topic is highly appreciated. I will start with one
server and 50TB of disks on hardware RAID10 and add servers as the data grows.
--
Best regards,
Roman.
Hello,
Anyone please?
2016-10-03 15:42 GMT+03:00 Roman <rome...@gmail.com>:
> Hello, dear community!
> It has been quite a while since I last wrote here, but that only means
> everything has been just fine with our gluster storage for KVM VMs.
>
> We are running 3.6.5 @
KVM hosts
2. stop glusterd on both our storages
3. upgrade both storages to jessie via apt-get dist-upgrade
4. update 3.6.5 to 3.6.9
5. start everything.
Should this work, or might I be facing some problems?
--
Best regards,
Roman.
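(Steps 2-5 as a rough per-node sketch; package and service names are
assumptions based on the Debian packaging:)
# service glusterd stop
# killall glusterfs glusterfsd      # make sure brick/self-heal processes are down
# apt-get update && apt-get dist-upgrade        # wheezy -> jessie
# apt-get install glusterfs-server              # 3.6.9 from the gluster repo
# service glusterd start
# gluster volume heal VOLNAME info              # wait for heals before the next node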
2016-04-06 16:36 GMT+03:00 Ira Cooper <i...@redhat.com>:
> Roman <rome...@gmail.com> writes:
>
> > Hi,
> >
> > does anyone know, where do I
> > get /usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so or where do I get
> > its
> > source to comp
Hi,
does anyone know, where do I
get /usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so or where do I get its
source to compile it for Debian Jessie?
--
Best regards,
Roman.
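(An untested sketch of rebuilding the Debian samba package so that
vfs_glusterfs gets built; the -dev package name is an assumption, and
I believe samba only builds the module when it finds the glusterfs-api
pkg-config file at configure time:)
# apt-get install devscripts libglusterfs-dev   # gfapi headers
# apt-get build-dep samba
# apt-get source samba && cd samba-*
# debuild -us -uc
# the module should then land under usr/lib/<triplet>/samba/vfs/ in the built package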
No ideas? Does that mean I should stick to my first plan?
RAID6 and a single volume?
2016-03-29 11:06 GMT+03:00 Roman <rome...@gmail.com>:
> According to this:
> http://www.gluster.org/pipermail/gluster-users/2014-November/019443.html
> it is not that easily possible.
>
> 2016-03-29
According to this:
http://www.gluster.org/pipermail/gluster-users/2014-November/019443.html it
is not that easily possible.
2016-03-29 0:58 GMT+03:00 Roman <rome...@gmail.com>:
> and another pretty important thing - will I be able to grow this volume by
> simply adding a few bricks
and another pretty important thing - will I be able to grow this volume by
simply adding a few more bricks? Or how is it going to go with expansion?
2016-03-28 14:49 GMT+03:00 Roman <rome...@gmail.com>:
> has anyone had any disaster recovery actions on such a setup?
> For how long i
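(Expansion itself is just add-brick plus a rebalance; names are made
up, and on a replicated volume bricks have to be added in multiples of
the replica count:)
# gluster volume add-brick myvol stor3:/exports/brick1
# gluster volume rebalance myvol start
# gluster volume rebalance myvol status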
it
:)
2016-03-28 14:21 GMT+03:00 Roman <rome...@gmail.com>:
> Hi Joe,
>
> thanks for the answer. But in the case of 37 8TB bricks the data won't be
> available if one of the servers fails anyway :) And it seems to me that it
> would be an even bigger mess to understand what files are u
of space either way. Make 37 8TB bricks
> and use disperse.
>
>
> On March 28, 2016 10:33:52 AM GMT+02:00, Roman <rome...@gmail.com> wrote:
>>
>> Hi,
>>
>> Thanks for the option, but it seems that it is not that good in our
>> situation. I can't waste s
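(For reference, a dispersed volume of the kind suggested above is
created like this; counts and paths are made up. disperse 6 with
redundancy 2 means 4 data + 2 redundancy bricks, so spreading two
bricks per server keeps a whole-server failure survivable:)
# gluster volume create bigvol disperse 6 redundancy 2 \
    stor1:/exports/b1 stor2:/exports/b1 stor3:/exports/b1 \
    stor1:/exports/b2 stor2:/exports/b2 stor3:/exports/b2
# gluster volume start bigvol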
f a whole
> server goes down. If you are using local RAID for redundancy and that
> server goes offline you’ll be missing files.
>
>
>
> On Mar 27, 2016, at 6:29 PM, Roman <rome...@gmail.com> wrote:
>
> Hi,
>
> Need advice from heavy glusterfs users and may
easier :)
--
Best regards,
Roman.
t this fix and the sharding ones too
>> right away.
>>
>
>
> is 3.7.7 far off?
>
> --
> Lindsay Mathieson
>
>
at 01:39:52PM +0200, Roman wrote:
> > Hi
> >
> > Seems like I've missed the mail about 3.6.8 release and notes... Could
> > someone point me to the release notes e-mail?
> >
> > PS
> > Why don't the devs put the release notes or a changelog file in the repository wi
Hello!
Did anyone notice the year changing? Happy New Year 2016, dear
community and devs!
Stay rocking, stay healthy (that applies to both glusterfs and its members),
and be happy.
--
Best regards,
Roman.
meout for error detection of the
> disks inside the VMs, and/or decrease the network.ping-timeout.
>
> It would be interesting to know if adapting these values prevents the
> read-only occurrences in your environment. If you do any testing with
> this, please keep me informed about the results.
>
--
Best regards,
Roman.
from the LAN
> interfaces that carry the canonical host name.
>
> Make sure as well that file system attributes or configuration files
> aren't changed during the upgrade to a point that prevents a safe
> downgrade.
>
> Mauro
>
> On Thu, October 15, 2015 04:14, Atin Mukherjee
e can’t have live VMs? And, what if we want to
> run
> > an SQL database in the VM?
> >
> >
> >
> > Regards,
> >
> > Nashid
> >
> >
, 2015 at 04:06:50PM +0300, Roman wrote:
Hi all,
I'm back and tested those things.
Michael was right. I've enabled the read-ahead option and nothing changed.
So the thing that causes the problem with libgfapi and d8 virtio drivers
is performance.write-behind. If it is off, everything works perfectly
(which was confirmed by another
user with the same configuration).
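(For anyone searching later, both are plain volume options; volume name
hypothetical:)
# gluster volume set myvol performance.write-behind off
# gluster volume set myvol performance.read-ahead on    # can stay on, per this thread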
2015-07-20 1:53 GMT+03:00 Roman rome...@gmail.com:
Thanks for your reply.
I will test these options as soon as I'm back from my vacation (2 weeks
from now). I'll be too far from the servers to change anything, even on the
testing volume
to point out that read-ahead shouldn't *break* anything for
you. At the same time, if you enable it, making no other changes, and it
*does* break things, that's something people would want to know about.)
On Sat, Jul 18, 2015 at 2:42 PM Roman rome...@gmail.com wrote:
Hi!
Thanks for reply
Roman rome...@gmail.com wrote:
solved after I've added (thanks to Niels de Vos) these options to the
volumes:
performance.read-ahead: off
performance.write-behind: off
2015-07-15 17:23 GMT+03:00 Roman rome...@gmail.com:
hey,
I've updated the bug; if someone has some ideas
solved after I've added (thanks to Niels de Vos) these options to the
volumes:
performance.read-ahead: off
performance.write-behind: off
2015-07-15 17:23 GMT+03:00 Roman rome...@gmail.com:
hey,
I've updated the bug; if someone has some ideas, please share.
https://bugzilla.redhat.com
But when I'm inside the VM, I'm getting 50-53 MB/s for replica and 120 MB/s
for distributed volumes...
What's the logic? That slow write speed directly to the mounted gfs volume slows
down the backup and restore process of VMs by VE.
2015-07-18 22:47 GMT+03:00 Roman rome...@gmail.com:
Hi,
Looked a lot
changes
based on it.
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-07-18 22:11 GMT+02:00 Roman rome...@gmail.com:
But when I'm inside the VM, I'm getting 50-53 MB/s for replica and 120 MB/s
for distributed volumes...
What's the logic? That slow write speed directly to the mounted gfs volume
that if you re-enable read-ahead, you won't see the problem. Just
leave write-behind off.
On Sat, Jul 18, 2015, 10:44 AM Roman rome...@gmail.com wrote:
solved after I've added (thanks to Niels de Vos) these options to the
volumes:
performance.read-ahead: off
performance.write-behind: off
14, 2015 at 7:27 PM, Roman rome...@gmail.com wrote:
I've done it this way: installed debian8 on local disks using the netinstall
iso, created a template of it and then cloned it (full clone) to the glusterfs
storage backend. The VM boots and runs fine... until I start to install
something massive (DE ie
here is an example of one of the errors. It looks like the files that the debian
installer copies to the virtual disk located on glusterfs storage are getting
corrupted.
in-target is /dev/vda1
2015-07-14 11:50 GMT+03:00 Roman rome...@gmail.com:
Ubuntu 14.04 LTS base install and then mate install were
Ubuntu 14.04 LTS base install and then mate install were fine!
2015-07-14 2:35 GMT+03:00 Roman rome...@gmail.com:
Bah... the randomness of this issue is killing me.
Not only HA volumes are affected. I got an error during installation of d8
with mate (on the python-gtk2 pkg) on a Distributed volume
Is everything good on the network side of things?
MTU/loss/errors?
Is your inconsistency linked to one specific brick? Have you tried running
a replica instead of distributed?
--
Scott H.
Login, LLC.
Roman rome...@gmail.com
July 14, 2015 at 6:38 AM
here is an example of one of the errors. It's
problem.
To solve split-brain this time, I've restored VM from backup.
2015-07-14 21:55 GMT+03:00 Roman rome...@gmail.com:
Thanks for pointing it out...
but it doesn't seem to work... or I am too sleepy due to the problems with
glusterfs and debian8 in the other topic, which I've been fighting for a month..
root
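(For the record, split-brain candidates can be listed per volume before
deciding to restore from backup; volume name hypothetical:)
# gluster volume heal myvol info split-brain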
'.
I've got both
option rpc-auth-allow-insecure on
in vol file and
server.allow-insecure: on
per volume.
HELP please. It WAS production :*(
--
Best regards,
Roman.
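(For reference, the two settings live in different places; the vol file
change needs a glusterd restart:)
# gluster volume set myvol server.allow-insecure on
# grep insecure /etc/glusterfs/glusterd.vol
    option rpc-auth-allow-insecure on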
glusterfs 3.6.4
any ideas?
I'm ready to help investigate this bug.
When the sun shines, I'll try to install the latest Ubuntu as well. But now I'm
going to sleep.
2015-07-14 0:54 GMT+03:00 Roman rome...@gmail.com:
So, just to conclude this:
upgraded to 3.6.4
problem still exists and is only with D8
Otherwise, some dev should instruct me which logs are needed and how I
can get them. I do not want to switch the whole production setup to ceph, but
this bug is just annoying.
2015-07-14 0:54 GMT+03:00 Roman rome...@gmail.com:
So, just to conclude this:
upgraded to 3.6.4
problem still exists
So, just to conclude this:
upgraded to 3.6.4
problem still exists and is only with D8 and gluster HA volumes. Someone
from the devs really should test it.
2015-07-13 19:59 GMT+03:00 Roman rome...@gmail.com:
Hi,
I've reported a lot about this, but every time there was something that
made me
Hi,
Why do you guys ignore this? If you do not want to set up proxmox with
gluster, let me help you debug this..
I've gotten far enough now that I can say it's only with replicated volumes...
2015-06-17 12:51 GMT+03:00 Roman rome...@gmail.com:
Hi,
guys, have you tried to play with this? product can't
Hi,
guys, have you tried to play with this? A product can't be called stable if a
significant part of it (HA) does not run properly ...
2015-06-15 9:50 GMT+03:00 Roman rome...@gmail.com:
any ideas?
2015-06-12 0:53 GMT+03:00 Roman rome...@gmail.com:
Ah, so much information to share, and I forgot
any ideas?
2015-06-12 0:53 GMT+03:00 Roman rome...@gmail.com:
Ah, so much information to share, and I forgot one more thing:
if I create the VM on a Distributed volume and then convert it to a template
and clone it to an HA volume, things seem to be working fine as well.
So there is something wrong
to upgrade to one of them from 3.5.3 ?
--
Best regards,
Roman.
Bricks:
Brick1: stor1:/exports/S14A4F-D/2TB
Brick2: stor2:/exports/S14A4F-D/2TB
Options Reconfigured:
network.ping-timeout: 3
server.allow-insecure: on
If I install on local storage everything works fine also.
--
Best regards,
Roman.
of my proxmox hosts - no losses, and they are
connected over a 1 Gbps network. The storage network for gluster uses a separate
network with 1 Gbps cards as well.
2015-06-12 0:38 GMT+03:00 Roman rome...@gmail.com:
Hi,
The debian 8.1 is released, but I've got still problems installing it as
qemu-kvm guest
Hi
Will there be a version for wheezy as well?
--
Best regards,
Roman.
problems.
Installing deb7 on the same setup (even the same VM, just another virtual
driver) is OK.
--
Best regards,
Roman.
Actually it's pretty random. Now it got installed without any problems using
3.5.3.
I don't know what to do, and it's only with Debian 8.
2015-05-15 18:53 GMT+03:00 Roman rome...@gmail.com:
Hey,
Seems like the problem exists.
I've got 4 instances of proxmox. I've started to update glusterfs some
... we are running this
in production.
2015-05-15 16:18 GMT+03:00 Roman rome...@gmail.com:
Hi,
Are there any issues with debian8 and glusterfs?
Got this setup:
1. proxmox 3.4
2. gluster 3.5.3 both server and client (I use gluster repo on proxmox
server)
3. HA gluster volume
What I'm
for your idea.
2015-05-15 19:35 GMT+03:00 Joe Julian j...@julianfamily.org:
It sounds like a network problem to me. Verify with iperf between your
client and each server.
On 05/15/2015 09:01 AM, Roman wrote:
Actually it's pretty random. Now it got installed without any problems using
3.5.3
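(The iperf check Joe suggests, host names hypothetical:)
stor1#  iperf -s
client# iperf -c stor1
client# ip -s link show eth0    # also check errors/drops and that MTU matches on both ends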
option values of a volume?
Thanks in advance.
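(If the cut-off question is how to read a volume's option values:
'gluster volume info' lists every option changed from its default, and
newer releases, 3.7 and later if I remember right, also have:)
# gluster volume get myvol all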
--
Best regards,
Roman.
--
Best regards,
Roman.
Well, I meant a stripe volume :) as the file size will exceed the brick's size.
And the questions remain the same. Hoping for a quick response.
2015-04-01 11:38 GMT+03:00 Roman rome...@gmail.com:
Hi devs, list!
I've got somewhat simple but in same time pretty difficult question. But
I'm running glusterf
failure
due to network or power)
yes, we have a backup server and UPS and generator, as we are running a DC, but
I'm just curious whether we will have to restore the data from backups or
whether it will be available after the brick comes back up?
--
Best regards,
Roman
--
Best regards,
Roman.
Thanks for the reply!
2015-03-30 6:40 GMT+03:00 Kaushal M kshlms...@gmail.com:
Hi Roman,
The steps you've given should work without any problems. And the same
steps will apply for a replicated volume as well.
~kaushal
On Sat, Mar 28, 2015 at 2:38 PM, Roman rome...@gmail.com wrote:
anyone
Anyone?
2015-03-13 10:42 GMT+02:00 Roman rome...@gmail.com:
Hi,
Running glusterfs 3.5.3 built on Nov 17 2014 15:48:54 on two servers and
one client. And all have the same issue: they are logging to the wrong place.
They should log to:
/var/log/glusterfs/cli.log
/var/log/glusterfs/nfs.log
anyone?
2015-03-13 15:48 GMT+02:00 Roman rome...@gmail.com:
Hi,
This is the question to the whole community and for devs too. So feel free
to answer, if you have some experience with this.
I've found a lot of information about expanding volumes by adding another
volume on another server
        rotate 7
        delaycompress
        compress
        notifempty
        missingok
        postrotate
                [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
        endscript
}
--
Best regards,
Roman.
distributed volume again. Will it work? Does it work the same
way with a replicated volume?
--
Best regards,
Roman.
] 0-HA-WIN-TT-1T-replicate-0:
Another crawl is in progress for HA-WIN-TT-1T-client-0
same on stor2
--
Best regards,
Roman.
oh, never mind. It is synced now. It took a LOT of time :)
2014-11-06 13:12 GMT+02:00 Roman rome...@gmail.com:
Hi,
another stupid/interesting situation:
root@stor1:~# gluster volume heal HA-WIN-TT-1T info
Brick stor1:/exports/NFS-WIN/1T/
/disk - Possibly undergoing heal
Number of entries
, metadata - Pending matrix:
[ [ 0 0 ] [ 3 0 ] ], on gfid:c7b624e4-1c66-4de7-aa89-95460ff098aa
ehm, I just found this. What could it mean? Is there any way to do a
gfid-to-file transformation?
2014-11-06 14:40 GMT+02:00 Roman rome...@gmail.com:
oh, never mind. it is synced now. took a LOT of time :)
2014-11-06
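(A gfid can be mapped back to a file on any brick: for regular files
the gfid entry under .glusterfs is a hardlink to the real file, so
matching inodes finds it; brick path taken from the volume above:)
# find /exports/NFS-WIN/1T -samefile \
    /exports/NFS-WIN/1T/.glusterfs/c7/b6/c7b624e4-1c66-4de7-aa89-95460ff098aa
(directories show up as symlinks under .glusterfs instead, so this
trick only works for files)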
Pranith Kumar Karampuri pkara...@redhat.com:
On 10/14/2014 01:20 AM, Roman wrote:
ok. done.
this time there were no disconnects, at least all of the VMs are working, but
I got some mails from a VM about IO writes again.
WARNINGs: Read IO Wait time is 1.45 (outside range [0:1]).
This warning says
, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
--
Best regards,
Roman.
of
900GB), the healing process goes so slowly :)
--
Best regards,
Roman.
:/exports/NFS-WIN/1T
Options Reconfigured:
nfs.disable: 1
network.ping-timeout: 10
2014-10-13 19:09 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
Could you give your 'gluster volume info' output?
Pranith
On 10/13/2014 09:36 PM, Roman wrote:
Hi,
I've got this kind of setup
Sure.
I'll let it run for the night.
2014-10-13 19:19 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
hi Roman,
Do you think we can run this test again? this time, could you enable
'gluster volume profile volname start', do the same test. Provide output
of 'gluster volume
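(The profile commands Pranith refers to, with the volume name from this
thread:)
# gluster volume profile HA-WIN-TT-1T start
# ... run the dd test ...
# gluster volume profile HA-WIN-TT-1T info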
stor1:HA-WIN-TT-1T  1008G  901G  57G  95%  /srv/nfs/HA-WIN-TT-1T
no file, but size is still 901G.
Both servers show the same.
Do I really have to restart the volume to fix that?
2014-10-13 19:30 GMT+03:00 Roman rome...@gmail.com:
Sure.
I'll let it run
Still the same way: just created a large empty file:
dd if=/dev/zero of=disk bs=1G count=900 iflag=fullblock
2014-10-13 19:49 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
On 10/13/2014 10:03 PM, Roman wrote:
hmm,
seems like another strange issue? Seen this before. Had to restart
oh sorry, and then I deleted it, of course :)
2014-10-13 19:53 GMT+03:00 Roman rome...@gmail.com:
Still the same way: just created a large empty file:
dd if=/dev/zero of=disk bs=1G count=900 iflag=fullblock
2014-10-13 19:49 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
On 10/13
,max_read=131072
0 0
2014-10-13 19:56 GMT+03:00 Joe Julian j...@julianfamily.org:
Looks like you're mounting NFS? That would be the FSCache in the client.
On 10/13/2014 09:33 AM, Roman wrote:
hmm,
seems like another strange issue? Seen this before. Had to restart the
volume to get my
So may I restart the volume and start the test, or do you need something else
from this issue?
2014-10-13 19:49 GMT+03:00 Pranith Kumar Karampuri pkara...@redhat.com:
On 10/13/2014 10:03 PM, Roman wrote:
hmm,
seems like another strange issue? Seen this before. Had to restart the
volume to get
I think I may know what the issue was. There was an iscsitarget service
running that was exporting this generated block device, so maybe my
colleague's Windows server picked it up and mounted it :) I'll see if it
happens again.
2014-10-13 20:27 GMT+03:00 Roman rome...@gmail.com:
So may I restart
 %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls    Fop
            1313096.68 us    125.00 us  23281862.00 us           189    FSYNC
     92.18     397.92 us      76.00 us   1838343.00 us       7372799    WRITE
Duration: 7811 seconds
Data Read: 0 bytes
Data Written: 966367641600 bytes
does that make anything more clear?
2014-10-13 20:40 GMT+03:00 Roman rome
--
Best regards,
Roman.
are exploring raid controllers with onboard SSD cache which may help.
On Tue, Sep 23, 2014 at 7:59 AM, Roman rome...@gmail.com wrote:
Hi,
just a question ...
Would SAS disks be better in a situation with lots of seek time when using
GlusterFS?
2014-09-22 23:03 GMT+03:00 Jeff Darcy jda
Hi
This morning we had a VM crash with these logs (see attached)
No logs on the gluster servers nor on the virtual host where this glusterfs
mount is attached.
Any ideas?
--
Best regards,
Roman.
Sep 16 06:36:58 munin kernel: [1025760.825197] jbd2/vda1-8 D
88003fc93780 0 179 2
No news on this?
2014-09-03 10:05 GMT+03:00 Roman rome...@gmail.com:
well, yeah. it continues to write to the log.1 file after rotation
root@stor1:~# ls -lo /proc/9162/fd/
total 0
lr-x------ 1 root 64 Sep  2 11:23 0 -> /dev/null
l-wx------ 1 root 64 Sep  2 11:23 1 -> /dev/null
lrwx------ 1
--
Best regards,
Roman.
:
Wait until the heal queue is empty, ie. gluster volume heal $volname info
On September 3, 2014 7:28:06 AM PDT, Roman rome...@gmail.com wrote:
Thanks!
So it is pretty safe to upgrade.
And by the way, if we speak about upgrades... is it safe to upgrade on the
servers? I mean, does restarting
Maybe I have to add a HUP command
for /var/lib/glusterd/glustershd/run/glustershd.pid to the logrotate.d
config file?
ATM it has only this:
postrotate
        [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
2014-09-03 10:05 GMT+03:00 Roman rome
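(An untested postrotate of that shape, reusing the pid path above:)
postrotate
        [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat /var/run/glusterd.pid`
        [ ! -f /var/lib/glusterd/glustershd/run/glustershd.pid ] || \
                kill -HUP `cat /var/lib/glusterd/glustershd/run/glustershd.pid`
endscript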
regards,
Jaden Liang
9/2/2014
--
Best regards,
Jaden Liang
--
Best regards,
Roman
-- 1 root 64 Sep  2 11:23 6 -> socket:[1473128]
lrwx------ 1 root 64 Sep  2 11:23 7 -> socket:[1472592]
l-wx------ 1 root 64 Sep  2 11:23 8 -> /var/log/glusterfs/.cmd_log_history
lrwx------ 1 root 64 Sep  2 11:23 9 -> socket:[1419368]
root@stor1:~#
2014-09-02 18:01 GMT+03:00 Roman rome...@gmail.com
--
Best regards,
Roman.
Hi
I see new packages available in the distro. Before I upgrade, where can one
read the changelog? Why the new packages, and what has been done?
--
Best regards,
Roman.
. KEITHLEY kkeit...@redhat.com:
Roman wrote:
I see new packages available in the distro. Before I upgrade, where can
one read the changelog? Why new, and what has been done?
I'm not an apt/dpkg expert but it seems like you should be able to do an
`apt-listchanges ...` on one of the .deb files to see its
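(Concretely, something like this; the path is only an example:)
# apt-listchanges --which=changelogs /var/cache/apt/archives/glusterfs-server_*.deb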
Thanks. I thought so. Last time I had a small issue after an upgrade.
2014-09-03 17:45 GMT+03:00 Joe Julian j...@julianfamily.org:
Wait until the heal queue is empty, ie. gluster volume heal $volname info
On September 3, 2014 7:28:06 AM PDT, Roman rome...@gmail.com wrote:
Thanks!
So
Sure it does!
Thanks.
I was looking for them on the glusterfs homepage :)
2014-09-04 8:22 GMT+03:00 Justin Clift jus...@gluster.org:
On 03/09/2014, at 2:38 PM, Roman wrote:
snip
I see new packages available in distro. Before I upgrade, where can one
read changelog? Why new and what have been
--
Best regards,
Roman.
, and it doesn't have anywhere to log that I
know of.
That being said, glusterfs has always recovered nicely whenever I have
lost and recovered a server, but the healing seems to need an hour or so
based on cpu and network usage graphs
On 9/1/2014 9:26 AM, Roman wrote:
Hmm, I don't know how
etc?
Pranith
On 08/28/2014 11:27 PM, Roman wrote:
Here are the results.
1. I still have the problem with log rotation: logs are being written to the
.log.1 file, not the .log file. Any hints on how to fix it?
2. The healing logs are now much better; I can see the success message.
3. both volumes with HD
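(If a HUP doesn't make the daemons reopen their logs, logrotate's
copytruncate directive is the usual workaround; the glob is an
assumption:)
/var/log/glusterfs/*.log {
        daily
        rotate 7
        compress
        missingok
        notifempty
        copytruncate
}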