Re: [Gluster-users] Bitrot: Time of signing depending on the file size???

2019-02-28 Thread Amudhan P
Hi David,

I have also tested the bitrot signature process; by default it signs at less
than 250 KB/s.
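
A quick way to see whether a given file has been signed yet (a minimal
sketch, assuming the usual bitrot xattr names and an example brick path;
run on a brick server, not on the client mount):

  # Signed files carry a trusted.bit-rot.signature xattr on the brick copy
  getfattr -m . -d -e hex /bricks/brick1/volname/path/to/file | grep bit-rot

  # The bitrot daemon also logs its signing activity (default log location)
  less /var/log/glusterfs/bitd.log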

regards
Amudhan P


On Fri, Mar 1, 2019 at 1:19 PM David Spisla  wrote:

> Hello folks,
>
> I made some observations concerning the bitrot daemon. It seems that the
> time the bitrot signer takes to sign a file depends on the file size. I
> copied files of different sizes into a volume and was wondering why the
> files did not get their signature at the same time (I kept the expiry time
> at its default of 120). Here are some examples:
>
> 300 KB file ~ 2-3 min
> 70 MB file ~ 40 min
> 115 MB file ~ 1.5 h
> 800 MB file ~ 4.5 h
>
> What is the expected behaviour here?
> Why does it take so long to sign an 800 MB file?
> What about 500GB or 1TB?
> Is there a way to speed up the sign process?
>
> My aim is to understand this behaviour.
>
> Regards
> David Spisla
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Bitrot: Time of signing depending on the file size???

2019-02-28 Thread David Spisla
Hello folks,

I made some observations concerning the bitrot daemon. It seems that the
time the bitrot signer takes to sign a file depends on the file size. I
copied files of different sizes into a volume and was wondering why the
files did not get their signature at the same time (I kept the expiry time
at its default of 120). Here are some examples:

300 KB file ~ 2-3 min
70 MB file ~ 40 min
115 MB file ~ 1.5 h
800 MB file ~ 4.5 h
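
Putting rough numbers on those timings (a back-of-the-envelope sketch only;
the times above are approximate, and ~1.5 h is assumed for the 115 MB file):

  # Effective signing throughput implied by the observed timings
  awk 'BEGIN {
      printf "70 MB in 40 min  -> %.0f KB/s\n",  70*1024/(40*60)
      printf "115 MB in 1.5 h  -> %.0f KB/s\n", 115*1024/(1.5*3600)
      printf "800 MB in 4.5 h  -> %.0f KB/s\n", 800*1024/(4.5*3600)
  }'
  # Prints roughly 30, 22 and 51 KB/s respectively - well below the
  # ~250 KB/s default rate mentioned elsewhere in this thread.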

What is the expected behaviour here?
Why does it take so long to sign an 800 MB file?
What about 500GB or 1TB?
Is there a way to speed up the sign process?
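
(For reference, the bitrot-related options currently in effect on a volume
can be listed as below; VOLNAME is a placeholder, and note that the
scrub-throttle setting affects scrubbing rather than the signer itself:)

  # Show which bitrot/scrub options are set on the volume
  gluster volume get VOLNAME all | grep -iE 'bitrot|scrub'

  # Scrub throttling (lazy|normal|aggressive) - scrubbing only, not signing
  gluster volume bitrot VOLNAME scrub-throttle aggressive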

My aim is to understand this behaviour.

Regards
David Spisla
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-28 Thread Jim Kinney
Original file structure to share with gluster is /foo.
Volname is testvol.

Data exists in /foo. You have 2 copies, one on machine a, another on b.

When you create testvol in gluster, it creates a folder /foo/.glusterfs and
writes all gluster metadata there. There's also config data written in
gluster-only space, like /var.

When users write files into gluster volumes, gluster manages the writes to the
actual filesystem in /foo on both a & b. It tracks writes in .glusterfs on both
a & b.
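
A minimal illustration of what that brick-side metadata looks like (paths
are examples; run on a brick server against the brick filesystem, not the
client mount):

  # Every file written through gluster carries a GFID xattr on the brick...
  getfattr -m . -d -e hex /foo/some/file | grep trusted.gfid

  # ...and .glusterfs holds a hardlink to the same file keyed by that GFID,
  # so a plain file on the brick normally shows a link count of 2
  stat -c '%h links  %n' /foo/some/file
  ls /foo/.glusterfs | head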

If you "un gluster", the user files in /foo on a & b are untouched. The 
/foo/.glusterfs folder is deleted on a 

On February 28, 2019 10:14:05 AM EST, Tami Greene  wrote:
>I'm missing some information about how the gluster volume creates the
>metadata allowing it to see and find the data on the bricks.  I've been
>told not to write anything to the bricks directly, as glusterfs cannot
>create the metadata for such files and therefore the data doesn't exist in
>the gluster world.
>
>So, if I destroy the current gluster volume, leaving the data on the
>hardware RAID volume, correct the names of the new empty bricks, recreate
>the gluster volume, and import the bricks, how does the metadata get
>created so the new gluster volume can find and access the data?  It seems
>like I would be layering glusterfs on top of the hardware and "hiding" the
>data.
>
>
>
>On Wed, Feb 27, 2019 at 5:08 PM Jim Kinney  wrote:
>
>> It sounds like new bricks were added and they mounted over the top of
>> existing bricks.
>>
>> gluster volume status  detail
>>
>> This will give the data you need to find where the real files are. You
>> can look in those to see the data should be intact.
>>
>> Stopping the gluster volume is a good first step. Then as a safeguard
>> you can unmount the filesystem that holds the data you want. Now remove
>> the gluster volume(s) that are the problem - all if needed. Remount the
>> real filesystem(s). Create new gluster volumes with correct names.
>>
>> On Wed, 2019-02-27 at 16:56 -0500, Tami Greene wrote:
>>
>> That makes sense.  System is made of four data arrays with a hardware
>> RAID 6 and then the distributed volume on top.  I honestly don't know
>> how that works, but the previous administrator said we had redundancy.
>> I'm hoping there is a way to bypass the safeguard of migrating data when
>> removing a brick from the volume, which in my beginner's mind, would be
>> a straight-forward way of remedying the problem.  Hopefully once the
>> empty bricks are removed, the "missing" data will be visible again in
>> the volume.
>>
>> On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney  wrote:
>>
>> Keep in mind that gluster is a metadata process. It doesn't really
>> touch the actual volume files. The exception is the .glusterfs and
>> .trashcan folders in the very top directory of the gluster volume.
>>
>> When you create a gluster volume from a brick, it doesn't format the
>> filesystem. It uses what's already there.
>>
>> So if you remove a volume and all its bricks, you've not deleted data.
>>
>> That said, if you are using anything but replicated bricks, which is
>> what I use exclusively for my needs, then reassembling them into a new
>> volume with the correct name might be tricky. When making the correctly
>> named volume, list the bricks in the exact same order as they were
>> listed when creating the wrongly named volume; it should then use the
>> same method to put data on the drives as previously and not scramble
>> anything.
>>
>> On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
>>
>> I sent this and realized I hadn't registered.  My apologies for the
>> duplication.
>>
>> Subject: Added bricks with wrong name and now need to remove them
>> without destroying volume.
>> To: 
>>
>> Yes, I broke it. Now I need help fixing it.
>>
>> I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
>> 1.5P space with 49% currently used.  Added an additional 4 bricks and a
>> server as we expect a large influx of data in the next 4 to 6 months.
>> The system had been established by my predecessor, who is no longer here.
>>
>> First solo addition of bricks to gluster.
>>
>> Everything went smoothly until “gluster volume add-brick Volume
>> newserver:/bricks/dataX/vol.name"
>>
>> (I don’t have the exact response as I worked on this for almost 5 hours
>> last night.) Unable to add-brick as “it is already mounted” or something
>> to that effect.
>>
>> Double checked my instructions, the name of the bricks. Everything
>> seemed correct.  Tried to add again adding “force.”  Again, “unable to
>> add-brick.”
>>
>> Because of the keyword (in my mind) “mounted” in the error, I checked
>> /etc/fstab, where the name of the mount point is simply /bricks/dataX.
>>
>> This convention was the same across all servers, so I thought I had
>> discovered an error in my notes and changed the name to
>> newserver:/bricks/dataX.
>>
>> Still had to use force, but the 

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-28 Thread Poornima Gurusiddaiah
On Thu, Feb 28, 2019, 8:44 PM Tami Greene  wrote:

> I'm missing some information about how the gluster volume creates the
> metadata allowing it to see and find the data on the bricks.  I've been
> told not to write anything to the bricks directly, as glusterfs cannot
> create the metadata for such files and therefore the data doesn't exist
> in the gluster world.
>
> So, if I destroy the current gluster volume, leaving the data on the
> hardware RAID volume, correct the names of the new empty bricks, recreate
> the gluster volume, and import the bricks, how does the metadata get
> created so the new gluster volume can find and access the data?  It seems
> like I would be layering glusterfs on top of the hardware and "hiding"
> the data.
>

I couldn't get all the details of why it went wrong, but you can delete a
Gluster volume and recreate it with the same bricks, and the data should be
accessible again AFAIK. Preferably create it with the same volume name. Do
not alter any data on the bricks, and make sure (by checking in the backend)
that there is no valid data on the 4 bricks that were wrongly added.
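
A rough outline of what that could look like (a sketch only, with placeholder
volume/server/brick names and assuming a plain distribute volume; keep the
brick order identical to the original volume):

  gluster volume stop vol.name
  gluster volume delete vol.name

  gluster volume create vol.name \
      server1:/bricks/data1/vol.name  server2:/bricks/data1/vol.name \
      server3:/bricks/data1/vol.name  server4:/bricks/data1/vol.name \
      force
  # glusterd may object that the bricks are "already part of a volume"
  # because the brick roots still carry the trusted.glusterfs.volume-id
  # xattr from the deleted volume; 'force' may or may not be enough to get
  # past that check depending on the version - that xattr is what it tests.

  gluster volume start vol.name
  gluster volume status vol.name detail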

+Atin, Sanju

@Atin, Sanju, This should work right?

Regards,
Poornima


>
> On Wed, Feb 27, 2019 at 5:08 PM Jim Kinney  wrote:
>
>> It sounds like new bricks were added and they mounted over the top of
>> existing bricks.
>>
>> gluster volume status  detail
>>
>> This will give the data you need to find where the real files are. You
>> can look in those to see the data should be intact.
>>
>> Stopping the gluster volume is a good first step. Then as a safeguard
>> you can unmount the filesystem that holds the data you want. Now remove the
>> gluster volume(s) that are the problem - all if needed. Remount the real
>> filesystem(s). Create new gluster volumes with correct names.
>>
>> On Wed, 2019-02-27 at 16:56 -0500, Tami Greene wrote:
>>
>> That makes sense.  System is made of four data arrays with a hardware
>> RAID 6 and then the distributed volume on top.  I honestly don't know how
>> that works, but the previous administrator said we had redundancy.  I'm
>> hoping there is a way to bypass the safeguard of migrating data when
>> removing a brick from the volume, which in my beginner's mind, would be a
>> straight-forward way of remedying the problem.  Hopefully once the empty
>> bricks are removed, the "missing" data will be visible again in the volume.
>>
>> On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney  wrote:
>>
>> Keep in mind that gluster is a metadata process. It doesn't really touch
>> the actual volume files. The exception is the .glusterfs and .trashcan
>> folders in the very top directory of the gluster volume.
>>
>> When you create a gluster volume from brick, it doesn't format the
>> filesystem. It uses what's already there.
>>
>> So if you remove a volume and all its bricks, you've not deleted data.
>>
>> That said, if you are using anything but replicated bricks, which is what
>> I use exclusively for my needs, then reassembling them into a new volume
>> with correct name might be tricky. By listing the bricks in the exact same
>> order as they were listed when creating the wrong name volume when making
>> the correct named volume, it should use the same method to put data on the
>> drives as previously and not scramble anything.
>>
>> On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
>>
>> I sent this and realized I hadn't registered.  My apologies for the
>> duplication
>>
>> Subject: Added bricks with wrong name and now need to remove them without
>> destroying volume.
>> To: 
>>
>>
>>
>> Yes, I broke it. Now I need help fixing it.
>>
>>
>>
>> I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
>> 1.5P space with 49% currently used.  Added an additional 4 bricks and a
>> server as we expect a large influx of data in the next 4 to 6 months.  The
>> system had been established by my predecessor, who is no longer here.
>>
>>
>>
>> First solo addition of bricks to gluster.
>>
>>
>>
>> Everything went smoothly until “gluster volume add-brick Volume
>> newserver:/bricks/dataX/vol.name"
>>
>> (I don’t have the exact response as I worked on this for
>> almost 5 hours last night) Unable to add-brick as “it is already mounted”
>> or something to that effect.
>>
>> Double checked my instructions, the name of the bricks.
>> Everything seemed correct.  Tried to add again adding “force.”  Again,
>> “unable to add-brick”
>>
>> Because of the keyword (in my mind) “mounted” in the
>> error, I checked /etc/fstab, where the name of the mount point is simply
>> /bricks/dataX.
>>
>> This convention was the same across all servers, so I thought I had
>> discovered an error in my notes and changed the name to
>> newserver:/bricks/dataX.
>>
>> Still had to use force, but the bricks were added.
>>
>> Restarted the gluster volume vol.name. No errors.
>>
>> Rebooted; but /vol.name did not mount on reboot as the /etc/fstab
>> instructs. So I attempted to mount manually and 

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-28 Thread Tami Greene
I'm missing some information about how the gluster volume creates the
metadata allowing it to see and find the data on the bricks.  I've been
told not to write anything to the bricks directly, as glusterfs cannot
create the metadata for such files and therefore the data doesn't exist in
the gluster world.

So, if I destroy the current gluster volume, leaving the data on the
hardware RAID volume, correct the names of the new empty bricks, recreate
the gluster volume, and import the bricks, how does the metadata get created
so the new gluster volume can find and access the data?  It seems like I
would be layering glusterfs on top of the hardware and "hiding" the data.
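
(One way to see the metadata in question, i.e. whether the existing brick
data already carries gluster's markers, is to look at the xattrs on a brick;
a sketch with example paths, run on a brick server:)

  # The brick root keeps the volume-id, and each file written through
  # gluster keeps a trusted.gfid xattr
  getfattr -m . -d -e hex /bricks/data1/vol.name | grep volume-id
  getfattr -m . -d -e hex /bricks/data1/vol.name/some/file | grep trusted.gfid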



On Wed, Feb 27, 2019 at 5:08 PM Jim Kinney  wrote:

> It sounds like new bricks were added and they mounted over the top of
> existing bricks.
>
> gluster volume status  detail
>
> This will give the data you need to find where the real files are. You can
> look in those to see the data should be intact.
>
> Stopping the gluster volume is a good first step. Then as a safeguard you
> can unmount the filesystem that holds the data you want. Now remove the
> gluster volume(s) that are the problem - all if needed. Remount the real
> filesystem(s). Create new gluster volumes with correct names.
>
> On Wed, 2019-02-27 at 16:56 -0500, Tami Greene wrote:
>
> That makes sense.  System is made of four data arrays with a hardware RAID
> 6 and then the distributed volume on top.  I honestly don't know how that
> works, but the previous administrator said we had redundancy.  I'm hoping
> there is a way to bypass the safeguard of migrating data when removing a
> brick from the volume, which in my beginner's mind, would be a
> straight-forward way of remedying the problem.  Hopefully once the empty
> bricks are removed, the "missing" data will be visible again in the volume.
>
> On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney  wrote:
>
> Keep in mind that gluster is a metadata process. It doesn't really touch
> the actual volume files. The exception is the .glusterfs and .trashcan
> folders in the very top directory of the gluster volume.
>
> When you create a gluster volume from brick, it doesn't format the
> filesystem. It uses what's already there.
>
> So if you remove a volume and all its bricks, you've not deleted data.
>
> That said, if you are using anything but replicated bricks, which is what
> I use exclusively for my needs, then reassembling them into a new volume
> with correct name might be tricky. By listing the bricks in the exact same
> order as they were listed when creating the wrong name volume when making
> the correct named volume, it should use the same method to put data on the
> drives as previously and not scramble anything.
>
> On Wed, 2019-02-27 at 14:24 -0500, Tami Greene wrote:
>
> I sent this and realized I hadn't registered.  My apologies for the
> duplication
>
> Subject: Added bricks with wrong name and now need to remove them without
> destroying volume.
> To: 
>
>
>
> Yes, I broke it. Now I need help fixing it.
>
>
>
> I have an existing Gluster Volume, spread over 16 bricks and 4 servers;
> 1.5P space with 49% currently used.  Added an additional 4 bricks and a
> server as we expect a large influx of data in the next 4 to 6 months.  The
> system had been established by my predecessor, who is no longer here.
>
>
>
> First solo addition of bricks to gluster.
>
>
>
> Everything went smoothly until “gluster volume add-brick Volume
> newserver:/bricks/dataX/vol.name"
>
> (I don’t have the exact response as I worked on this for
> almost 5 hours last night) Unable to add-brick as “it is already mounted”
> or something to that effect.
>
> Double checked my instructions, the name of the bricks.
> Everything seemed correct.  Tried to add again adding “force.”  Again,
> “unable to add-brick”
>
> Because of the keyword (in my mind) “mounted” in the
> error, I checked /etc/fstab, where the name of the mount point is simply
> /bricks/dataX.
>
> This convention was the same across all servers, so I thought I had
> discovered an error in my notes and changed the name to
> newserver:/bricks/dataX.
>
> Still had to use force, but the bricks were added.
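
(For reference, the brick argument to add-brick is normally a subdirectory
of the mounted filesystem rather than the mount point itself, which is
likely why force was required here; a sketch with placeholder names:)

  # Recommended form: the brick path is a directory inside the mount
  gluster volume add-brick vol.name newserver:/bricks/dataX/vol.name

  # Verify which brick paths the volume actually ended up with
  gluster volume info vol.name
  gluster volume status vol.name detail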
>
> Restarted the gluster volume vol.name. No errors.
>
> Rebooted; but /vol.name did not mount on reboot as the /etc/fstab
> instructs. So I attempted to mount manually and discovered I had a big mess
> on my hands.
>
> “Transport endpoint not connected” in
> addition to other messages.
>
> Discovered an issue between certificates and the
> auth.ssl-allow list because of the hostname of the new server.  I made the
> correction and /vol.name mounted.
>
> However, df -h indicated the 4 new bricks were not being
> seen, as 400T was missing from what should have been available.
>
>
>
> Thankfully, I could add something to vol.name on one machine and see it
> on another machine and I wrongly assumed the volume was 

[Gluster-users] Gluster off PyPy

2019-02-28 Thread lejeczek
hi everyone

I'm hoping the developers might be reading this, but if not: has anybody
tried running glusterfs with PyPy?

If yes, and it works, what was/is the experience?

many thanks, L.



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users