regards
Aravinda
On Wednesday 07 September 2016 11:24 AM, Georg Schoenberger wrote:
On 2016-09-07 07:45, Aravinda wrote:
Using the Checkpoint feature you can confirm that Geo-rep has synced up to that time.
Set Checkpoint
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config checkpoint now
Touch Mount point (To record a setattr in every brick's changelog)
Hi all,
Thanks for everyone's participation and for making it a success.
The minutes and logs for today's meeting are available from the links below,
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-31/weekly_community_meeting_31aug2015.2016-08-31-12.01.html
Minutes (text): https:
Using the Checkpoint feature you can confirm that Geo-rep has synced up to that time.
Set Checkpoint
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config checkpoint now
Touch Mount point (To record a setattr in every brick's changelog)
mount -t glusterfs localhost:/<MASTERVOL> /mnt/
touch /mnt/
Watch the status of Geo-replication
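Put together, the checkpoint procedure above might look like the following sketch. The volume and host names (gvol0, georep1, gvol0-slave) and the mount directory are hypothetical placeholders, not taken from this thread:

# Hypothetical names: master volume "gvol0", slave host "georep1",
# slave volume "gvol0-slave".

# 1. Set a checkpoint at the current time
gluster volume geo-replication gvol0 georep1::gvol0-slave config checkpoint now

# 2. Mount the master volume and touch the mount point so a setattr
#    is recorded in every brick's changelog
mkdir -p /mnt/gvol0
mount -t glusterfs localhost:/gvol0 /mnt/gvol0
touch /mnt/gvol0

# 3. Watch the status until the checkpoint is reported as completed
watch -n 10 gluster volume geo-replication gvol0 georep1::gvol0-slave status detail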
Correct.
On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri
wrote:
>On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>oleksa...@natalenko.name> wrote:
>
>> Hello,
>>
>> thanks, but that is not what I want. I have no issues debugging gfapi apps,
>> but have an issue with GlusterFS
Using the gluster client rather than NFS seems to fix the problem
On 09/01/2016 02:35 PM, Pat Haley wrote:
Hi Pranith,
The capture is in the attached file capture.pcap.
On 09/01/2016 01:01 PM, Pranith Kumar Karampuri wrote:
You need to capture the traffic to a file so that we can open the dump in
Wireshark and inspect it.
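For reference, a capture that Wireshark can open is usually taken with tcpdump along these lines; the interface, output path, and port range are assumptions (24007 is the management port, and bricks commonly listen on ports from 49152 upward), so adjust them to the actual setup:

# Capture full packets (-s 0) to a file readable by Wireshark.
# Interface and ports are assumptions; adjust for your environment.
tcpdump -i any -s 0 -w /tmp/capture.pcap port 24007 or portrange 49152-49251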
On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
oleksa...@natalenko.name> wrote:
> Hello,
>
> thanks, but that is not what I want. I have no issues debugging gfapi apps,
> but have an issue with GlusterFS FUSE client not being handled properly by
> Massif tool.
>
> Valgrind+Massif does not
Created BZ for it [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373630
On Tuesday, 6 September 2016 23:32:51 EEST Pranith Kumar Karampuri wrote:
> I included you on a thread on users, let us see if he can help you out.
>
> On Mon, Aug 29, 2016 at 4:02 PM, Oleksandr Natalenko <
>
> ole
Hello,
thanks, but that is not what I want. I have no issues debugging gfapi apps,
but have an issue with GlusterFS FUSE client not being handled properly by
Massif tool.
Valgrind+Massif does not handle all forked children properly, and I believe
that happens because of some memory corruption
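One workaround sketch (an assumption on my part, not something confirmed in this thread) is to keep the FUSE client in the foreground so the daemonizing fork does not escape Massif, and to ask Valgrind to follow any remaining children. The server, volume, and mount point names are hypothetical:

# Sketch only: run the FUSE client without daemonizing (-N) under Massif
# and follow forked children explicitly. Names below are placeholders.
valgrind --tool=massif --trace-children=yes \
    glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/myvol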
On 09/06/2016 01:54 AM, Kaushal M wrote:
>> > Following down through the docs on that link, I find the CentOS Storage
>> > SIG repo has 3.7.13, and the Storage testing repo has 3.7.15.
>> >
>> > What is a typical timeframe for releases to transition from the testing
>> > repo to the normal repo?
> R
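The reply is cut off above. For reference only, enabling the CentOS Storage SIG repositories looked roughly like the sketch below at that time; the package and repository names are assumptions about the 3.7 series layout and should be verified against the SIG documentation:

# Assumed package/repo names for the CentOS Storage SIG 3.7 series.
yum install centos-release-gluster37                              # release repo
yum install glusterfs-server                                      # stable builds
yum --enablerepo=centos-gluster37-test install glusterfs-server   # testing builds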
On Tue, Sep 6, 2016 at 10:11 PM, Krutika Dhananjay
wrote:
>
>
> On Tue, Sep 6, 2016 at 7:27 PM, David Gossage wrote:
>
>> Going to top post with the solution Krutika Dhananjay came up with. Her
>> steps were much less volatile and could be done with the volume still being
>> actively used and also much less prone to accidental destruction.
Hello, Oleksandr
You can compile the simple test code posted here
(http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html).
Then run the command:
G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp
The command will produce a file like
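For completeness (this is standard Massif behaviour rather than anything specific to this thread), the profile is written to massif.out.<pid> in the working directory and can be rendered with ms_print; the pid below is a placeholder:

# Massif writes its heap profile to massif.out.<pid>;
# ms_print turns it into a readable report. 12345 is a placeholder pid.
G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp
ms_print massif.out.12345 > glfsxmp-heap-profile.txt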
On Tue, Sep 6, 2016 at 11:41 AM, Krutika Dhananjay
wrote:
>
>
> On Tue, Sep 6, 2016 at 7:27 PM, David Gossage wrote:
>
>> Going to top post with the solution Krutika Dhananjay came up with. Her
>> steps were much less volatile and could be done with the volume still being
>> actively used and also much less prone to accidental destruction.
On Tue, Sep 6, 2016 at 7:27 PM, David Gossage
wrote:
> Going to top post with the solution Krutika Dhananjay came up with. Her steps
> were much less volatile and could be done with the volume still being actively
> used and also much less prone to accidental destruction.
>
> My use case and issue were
Benjamin,
There are three issues of interest:
1. Since the hot tier reached 90% of its capacity, the nature and
frequency of file accesses are important.
Aggressive file accesses will cause a flood of promotions which, although
sequential, hamper performance.
Abstaining from access f
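The paragraph above is cut off, but the usual knobs for this behaviour are the tier watermarks and promotion thresholds; the volume name and values below are illustrative assumptions only:

# Illustrative values; "tiervol" is a hypothetical volume name.
gluster volume set tiervol cluster.watermark-hi 85                # stop promotions earlier
gluster volume set tiervol cluster.watermark-low 70               # start demotions earlier
gluster volume set tiervol cluster.tier-promote-frequency 1500    # promote less often (seconds)
gluster volume set tiervol cluster.read-freq-threshold 4          # require more reads before promoting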
On Tue, Sep 06, 2016 at 03:27:46PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> During last night's problems we ended up having to delete the VM's disks
> and re-create them from scratch, then import the data from the backups.
> I don't think so, but just to be sure, is there a way to recover those d
Yes, you are correct. On a sharded volume, promotion and demotion would be
based on the sharded chunks.
I'm stressing the point which Krutika mentioned in her mail that we
haven't tested the use case in depth.
Regards
Rafi KC
On 09/06/2016 06:38 PM, Krutika Dhananjay wrote:
> Theoretically whatever you s
Going to top post with the solution Krutika Dhananjay came up with. Her steps
were much less volatile and could be done with the volume still being actively
used, and were also much less prone to accidental destruction.
My use case and issue were a desire to wipe a brick and recreate it with the same
directory structure
Hi,
During last night's problems we ended up having to delete the VM's disks
and re-create them from scratch, then import the data from the backups.
I don't think so, but just to be sure, is there a way to recover those deleted
files? We had to remove them because we don't have much space on th
On Tue, Sep 6, 2016 at 7:29 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Anybody?
>
>
While I have not tested it yet, the two email chains I have seen from users
trying it suggest that the performance has been worse rather than showing any
increased benefit. Perhaps those using it succe
Theoretically whatever you said is correct (at least from shard's
perspective).
Adding Rafi who's worked on tiering to know if he thinks otherwise.
It must be mentioned that sharding + tiering hasn't been tested as such
by us so far, at least.
Did you try it? If so, what was your experience?
-K
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "gluster-users"
> Sent: Tuesday, September 6, 2016 8:29:06 AM
> Subject: Re: [Gluster-users] Tiering and sharding for VM workload
>
>
>
> Anybody?
Paul Cruzner did some tests with sharding+tiering, I think the intent was to
Hi folks,
I am trying to switch over to my distributed, geo-redundant volume.
Are there any tools/checks/methods to ensure the geo-redundant volume does not
lag behind?
Can I ensure that the geo-replicated host has all the data?
THX, Georg
Anybody?
On 05 Sep 2016 22:19, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
> Is tiering with sharding useful with a VM workload?
> Let's assume a storage with tiering and sharding enabled, used for
> hosting VM images.
> Each shard is subject to tiering, thus the most fr
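The question is truncated above; for context, a setup like the one described would be created roughly as in the sketch below. The volume name, shard size, and brick paths are hypothetical, and as Krutika notes earlier in the thread the combination is largely untested:

# Hypothetical volume "vmstore" with sharding enabled and an
# SSD-backed replica pair attached as the hot tier.
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 64MB
gluster volume tier vmstore attach replica 2 \
    node1:/bricks/ssd/vmstore node2:/bricks/ssd/vmstore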
On 09/06/2016 08:03 AM, Emmanuel Dreyfus wrote:
On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote:
Mac OS X doesn't build at the present time because its sed utility (used in
the xdrgen/rpcgen part of the build) doesn't support the (Linux-compatible)
'-r' command line option. (NetBSD and FreeBSD do.)
On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote:
> Mac OS X doesn't build at the present time because its sed utility (used in
> the xdrgen/rpcgen part of the build) doesn't support the (Linux-compatible)
> '-r' command line option. (NetBSD and FreeBSD do.)
>
> (There's an easy fix
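One portable workaround (an assumption about what the easy fix refers to, not confirmed in the truncated message) is to use -E for extended regular expressions, which GNU, BSD, and macOS sed all accept:

# GNU-only form that macOS sed rejects:
#   sed -r 's/(foo)+/\1/'
# Portable form accepted by GNU, BSD and macOS sed:
sed -E 's/(foo)+/\1/'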
Hi Gluster team,
The weekly Gluster bug triage is about to take place in 26 minutes.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
On 09/02/2016 03:49 PM, Pranith Kumar Karampuri wrote:
Hi,
As per the MAINTAINERS file this port doesn't have a maintainer. If you
want to take up the responsibility of maintaining the port, please let us
know how you want to go about doing it and what should be the checklist
of things that should
Hi,
Here is the info:
Volume Name: VMs
Type: Replicate
Volume ID: c5272382-d0c8-4aa4-aced-dd25a064e45c
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ips4adm.name:/mnt/storage/VMs
Brick2: ips5adm.name:/mnt/storage/VMs
Brick3: ips6adm.name:/mnt/storage/VMs
Options
Unfortunately we do not have an option to choose Active Workers. The following
two modes are supported (both are automatic):
Node ID (Default Mode)
--
If a node's ID is present in the first up subvolume's list, then the respective
worker will become Active; the rest of the workers will be Passive.
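Whichever mode is in effect, the Active/Passive assignment of each worker can be read from the status output; the master and slave names below are hypothetical placeholders:

# The STATUS column shows Active or Passive per brick worker.
# "gvol0" and "georep1::gvol0-slave" are placeholder names.
gluster volume geo-replication gvol0 georep1::gvol0-slave status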
CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply.
Thanks,
Soumya
On 09/06/2016 12:19 PM, Soumya Koduri wrote:
On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote:
Hi,
Did you get a chance to decide on the nfs-ganesha integration
tests that need to be run b