[Gluster-devel] [commercial] Openings in Red Hat.

2014-11-05 Thread Humble Devassy Chirammal
Hi All,

The Red Hat Storage Engineering team is seeking someone with extensive
systems programming experience to join us as a Principal/Senior Software
Engineer. In this role, you will work as part of a team responsible for
developing GlusterFS. You'll conduct system analysis and development,
review and repair legacy code, assist testers by developing a unit test
framework, and help support personnel determine system problems. We'll
need you to have a solid technical background, preferably in client or
server architecture and algorithm development, as well as good
interpersonal skills. We love the open source community, so if you're
already contributing to any open source projects, please upload your
code samples or links along with your resume. We also encourage you to
join our storage product community.

Red Hat Storage is a scale-out NAS storage solution. The work profile
would involve developing and maintaining components of a distributed
file system called GlusterFS.
The job profile requires extensive knowledge of C, Linux System
programming, Networking, Algorithms, Analytics and Data Structures.

If you are interested, please reach out to me @ hchir...@redhat.com

--Humble
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] GlusterFS 3.5.3beta2 is now available for testing

2014-11-05 Thread Niels de Vos
Many thanks to all our users who have reported bugs against the 3.5
version of GlusterFS! glusterfs-3.5.3beta2 has been made available for
testing.

Reporters of bugs are kindly requested to verify if the fixed bugs
listed in the Release Notes have indeed been fixed. When testing is
done, please update any verified/failed bugs as soon as possible. If any
assistance is needed, do not hesitate to send a request to the Gluster
Users mailing list (gluster-us...@gluster.org) or start a discussion in
the #gluster channel on Freenode IRC.

The release notes can be found on my blog; they will be synced to
blog.gluster.org soon:
- http://blog.nixpanic.net/2014/11/glusterfs-353beta2-release-notes.html

Packages for different distributions can be found on the main download
server:
- http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/

Thank you in advance for testing,
Niels


[The notes below do not have functioning links, those work in the blog]

Release Notes for GlusterFS 3.5.3

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1 and 3.5.2
contain a listing of all the new features that were added and bugs fixed
in the GlusterFS 3.5 stable release.

Bugs Fixed:

1081016: glusterd needs xfsprogs and e2fsprogs packages
1100204: brick failure detection does not work for ext4 filesystems
1126801: glusterfs logrotate config file pollutes global config
1129527: DHT :- data loss - file is missing on renaming same file from 
multiple client at same time
1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message  
remote operation failed: File exists
1132391: NFS interoperability problem: stripe-xlator removes EOF at end of 
READDIR
1133949: Minor typo in afr logging
1136221: The memories are exhausted quickly when handle the message which 
has multi fragments in a single record
1136835: crash on fsync
1138922: DHT + rebalance : rebalance process crashed + data loss + few 
Directories are present on sub-volumes but not visible on mount point + lookup 
is not healing directories
1139103: DHT + Snapshot :- If snapshot is taken when Directory is created 
only on hashed sub-vol; On restoring that snapshot Directory is not listed on 
mount point and lookup on parent is not healing
1139170: DHT :- rm -rf is not removing stale link file and because of that 
unable to create file having same name as stale link file
1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, 
UID 0, (glusterfs nfs process)
1140338: rebalance is not resulting in the hash layout changes being 
available to nfs client
1140348: Renaming file while rebalance is in progress causes data loss
1140549: DHT: Rebalance process crash after add-brick and `rebalance start' 
operation
1140556: Core: client crash while doing rename operations on the mount
1141558: AFR : gluster volume heal volume_name info prints some random 
characters
1141733: data loss when rebalance + renames are in progress and bricks from 
replica pairs goes down and comes back
1142052: Very high memory usage during rebalance
1142614: files with open fd's getting into split-brain when bricks goes 
offline and comes back online
1144315: core: all brick processes crash when quota is enabled
1145000: Spec %post server does not wait for the old glusterd to exit
1147156: AFR client segmentation fault in afr_priv_destroy
1147243: nfs: volume set help says the rmtab file is in 
/var/lib/glusterd/rmtab
1149857: Option transport.socket.bind-address ignored
1153626: Sizeof bug for allocation of memory in afr_lookup
1153629: AFR : excessive logging of Non blocking entrylks failed in 
glfsheal log file.
1153900: Enabling Quota on existing data won't create pgfid xattrs
1153904: self heal info logs are filled with messages reporting ENOENT 
while self-heal is going on
1155073: Excessive logging in the self-heal daemon after a replace-brick
1157661: GlusterFS allows insecure SSL modes

Known Issues:

  * The following configuration changes are necessary for 'qemu' and 'samba vfs
plugin' integration with libgfapi to work seamlessly (a combined example
follows these steps):

 1. gluster volume set volname server.allow-insecure on

 2. Restart the volume:

 gluster volume stop volname
 gluster volume start volname

 3. Edit /etc/glusterfs/glusterd.vol to contain this line:

 option rpc-auth-allow-insecure on

 4. Restart glusterd:

 service glusterd restart

More details are also documented in the Gluster Wiki on the Libgfapi 
with qemu libvirt page.
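
For example, a minimal end-to-end run of the steps above, assuming a
volume named 'myvol' (the volume name is only a placeholder):

 gluster volume set myvol server.allow-insecure on
 gluster volume stop myvol
 gluster volume start myvol
 (add 'option rpc-auth-allow-insecure on' to /etc/glusterfs/glusterd.vol)
 service glusterd restart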

  * For Block Device translator based volumes, the open-behind translator on
the client side needs to be disabled:

 gluster volume set volname performance.open-behind off

  * libgfapi clients calling glfs_fini before a successful glfs_init will
cause the client to hang as reported here. The workaround is NOT to call
glfs_fini in such cases.

Re: [Gluster-devel] Spurious regression of tests/basic/mgmt_v3-locks.t

2014-11-05 Thread Atin Mukherjee


On 11/03/2014 06:15 PM, Justin Clift wrote:
 On Sun, 02 Nov 2014 21:41:02 +0530
 Atin Mukherjee amukh...@redhat.com wrote:
 On 10/31/2014 07:08 PM, Justin Clift wrote:
 On Fri, 31 Oct 2014 10:17:28 +0530
 Atin Mukherjee amukh...@redhat.com wrote:
 snip
 Justin,

 For last three runs, I've observed the same failure. I think its
 really the time to debug this without any further delay. Can you
 please share a rackspace machine such that I can debug this issue?

 Yep, this is very doable.  It looks like Xavi already has this one
 under control though.
 Xavi has a fix for the ec related failures, mgmt_v3 lock failure is
 still unknown which I will be looking at. I would request you to lend
 me a rackspace VM for the same.
 
 Ahhh, sorry.  My misunderstanding.  slave26.cloud.gluster.org is set
 aside now for you to do stuff on.  The login details are the same as
 last time.  If you need me to resend them, just let me know. :)
 
Here are my findings looking into a regression failure log [1]

The mgmt_v3 lock test forms a cluster of 3 nodes, say N1, N2 and N3. After a
couple of remove-brick operations, a peer status command reached N1 and
after that glusterd stopped responding; I don't see any more logs in
mgmt_v3-locks.t_glusterd1.log. This could happen if glusterd is stuck
while performing a transaction or if the glusterd process is down.

When I looked at the log file of the other node N2, I could see that N1 is
disconnected, which proves that glusterd went down on N1 (it couldn't be a
network failure, otherwise other commands would have been processed by
N1 itself). The interesting fact here is that no core was captured.

Can there be any case where a glusterd instance goes down unexpectedly
without a crash?
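
A couple of generic checks can help rule the OOM killer or disabled core
dumps in or out on the test VM (the paths below are common defaults and may
differ on the Rackspace slaves):

 # dmesg | grep -i 'killed process'            # did the OOM killer act?
 # grep -i 'out of memory' /var/log/messages
 # cat /proc/sys/kernel/core_pattern           # where a core would be written
 # ulimit -c                                   # 0 means core dumps are disabled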

[1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/2319/consoleFull

~Atin
 + Justin
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Please vote for Dockit for #dockerhackday on Twitter

2014-11-05 Thread Humble Devassy Chirammal
Thanks Justin for the support!!

If someone is not seeing the FB like/g+1/tweet buttons on the hackathon
page, please disable pop-up blockers or open it in a different browser :)

--Humble


On Wed, Nov 5, 2014 at 1:44 AM, Justin Clift jus...@gluster.org wrote:

 Hi all,

 Our very own Humble Chirammal's Dockit project is an entry in the 2nd
 Docker Global Hack Day.  Please vote for it on Twitter:

   My #dockerhackday vote is for dockit by Humble Chirammal and
@hiSaifi. @docker

 :)

 Regards and best wishes,

 Justin Clift

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Installing GlusterFS 3.4.x, 3.5.x or 3.6.0 on RHEL or CentOS 6.6

2014-11-05 Thread Niels de Vos

This is a post that will also appear on http://blog.gluster.org and on
http://planet.fedoraproject.org. A description like this is planned to
get included on the Gluster wiki.

Comments and further ideas are very welcome, please share them by
replying to this email.

Thanks,
Niels


With the release of RHEL-6.6 and CentOS-6.6, there are now glusterfs packages
in the standard channels/repositories. Unfortunately, these are only the
client-side packages (like glusterfs-fuse and glusterfs-api). Users who want
to run a Gluster server on a current RHEL or CentOS now have difficulties
installing any of today's current versions of the Gluster Community packages.

The most prominent issue is that the glusterfs package from RHEL has a version
of 3.6.0.28, and that is higher than the 3.6.0 version released last week.
RHEL is shipping a pre-release that was created while the Gluster Community was
still developing 3.6. An unfortunate packaging decision added a .28 to the
version, where most other pre-releases would fall back to an (rpm-)version like
3.6.0-0.1.something.bla.el6. The difference might look minor, but the result is
a major disruption in the much anticipated 3.6 community release [1].
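
To see how rpm/yum orders these versions, rpmdev-vercmp from the rpmdevtools
package can be used (rpmdevtools is not mentioned in this mail, this is
purely an illustration):

 # yum install rpmdevtools
 # rpmdev-vercmp 3.6.0.28 3.6.0     # reports 3.6.0.28 as the newer version
 # rpmdev-vercmp 3.6.0.28 3.6.1     # reports 3.6.1 as the newer version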

To fix this in the easiest way for our community users, we have decided to
release version 3.6.1 later this week (maybe on Thursday, November 6). This
version is higher than the version in RHEL/CentOS, and therefore yum will
prefer the package from the community repository over the one available in
RHEL/CentOS. This is also the main reason why no 3.6.0 packages have been
provided on the download server.

Installing an older stable release (like 3.4 or 3.5) on RHEL/CentOS 6.6
requires a different approach. At the moment we can offer two solutions. We
are still working on making this easier; until that is finalized, some manual
actions are required.

Let's assume you want to verify whether today's announced glusterfs-3.5.3beta2 [2]
packages indeed fix that bug you reported. (These steps apply to the other
versions as well, this just happens to be what I have been testing.)


Option A: use exclude in the yum repository files for RHEL/CentOS

 1. download the glusterfs-353beta2-epel.repo [3] file and save it under
/etc/yum.repos.d/

 2. edit /etc/yum.repos.d/redhat.repo or /etc/yum.repos.d/CentOS-Base.repo and
under each repository that you find, add the following line

exclude=glusterfs*

This prevents yum from installing the glusterfs* packages from the standard
RHEL/CentOS repositories, but still allows installing them from other
repositories. The Red Hat Customer Portal has an article about this
configuration [4] too. A combined example of this option is shown below.
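
Put together, option A could look like this on CentOS (the sed one-liner
appends the exclude line after every repository section header; review the
file afterwards, and glusterfs-server is only an example package to install):

 # wget -O /etc/yum.repos.d/glusterfs-353beta2-epel.repo \
     http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/RHEL/glusterfs-353beta2-epel.repo
 # sed -i '/^\[.*\]$/a exclude=glusterfs*' /etc/yum.repos.d/CentOS-Base.repo
 # yum install glusterfs-server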


Option B: install and configure yum-plugin-priorities

Using yum-plugin-priorities is probably a more stable solution. This does not
require changes to the standard RHEL/CentOS repositories. However, an
additional package needs to be installed.

 1. enable the optional repository when on RHEL (CentOS users can skip this step)

# subscription-manager repos --list | grep optional-rpms
# subscription-manager repos --enable=*optional-rpms

 2. install the yum-plugin-priorities package:

# yum install yum-plugin-priorities

 3. download the glusterfs-353beta2-epel.repo [3] file and save it under
/etc/yum.repos.d/

 4. edit the /etc/yum.repos.d/glusterfs-353beta2-epel.repo file and add the
following option to each repository definition:

priority=50

The default priority for repositories is 99. The repositories with the lowest
number have the highest priority. As long as the RHEL/CentOS repositories do
not have the priority option set, the packages from
glusterfs-353beta2-epel.repo will be preferred by yum. A combined example of
this option is shown below.
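
Put together, option B could look like this (the sed one-liner adds
priority=50 after every section header in the Gluster repo file;
glusterfs-server is again only an example package):

 # yum install yum-plugin-priorities
 # wget -O /etc/yum.repos.d/glusterfs-353beta2-epel.repo \
     http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/RHEL/glusterfs-353beta2-epel.repo
 # sed -i '/^\[.*\]$/a priority=50' /etc/yum.repos.d/glusterfs-353beta2-epel.repo
 # yum install glusterfs-server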

When using the yum-plugin-priorities approach, we highly recommend that you
check whether all your repositories have a suitable (or missing) priority
option. If some repositories already have the option set but
yum-plugin-priorities was not installed yet, installing the plugin might change
the order in which the repositories are used. Because of this, we do not want
to force yum-plugin-priorities on all the Gluster Community users that run
RHEL/CentOS.

In case users still have issues installing the Gluster Community packages on
RHEL or CentOS, we recommend getting in touch with us on the Gluster Users
mailing list [5] (archive [6]) or in the #gluster IRC channel on Freenode [7].


[1] http://blog.gluster.org/2014/10/glusterfs-3-6-0-is-alive/
[2] 
http://blog.gluster.org/2014/11/glusterfs-3-5-3beta2-is-now-available-for-testing/
[3] 
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/RHEL/glusterfs-353beta2-epel.repo
[4] https://access.redhat.com/solutions/881973
[5] gluster-us...@gluster.org
[6] http://supercolony.gluster.org/pipermail/gluster-users/
[7] irc://irc.freenode.net/gluster




[Gluster-devel] GlusterD-2.0 - thread safety

2014-11-05 Thread Justin Clift
Forwarding this, as it's possibly very relevant for the
GlusterD-2.0 stuff.  It looks like the threading behaviour of Go
is very different to other languages (somewhat practical outline
given below).

(Start with the "Normally, in any mainstream language ..." bit)

It's probably worth being aware of sooner rather than later, and
thinking through potential ramifications before we decide to go
down that route. ;)

Regards and best wishes,

Justin Clift



Begin forwarded message:
Date: Thu, 6 Nov 2014 01:10:45 +0100
From: nicolas riesch nicolas.rie...@gmail.com
To: General Discussion of SQLite Database sqlite-us...@sqlite.org
Subject: Re: [sqlite] Is sqlite thread-safety sufficient for use with
Go language ?


Pardon me, I will try to reformulate my question more clearly.

My scenario:

  - sqlite is set to Multi-thread mode (SQLITE_THREADSAFE=2), or
Serialized mode (SQLITE_THREADSAFE=1)
  - I create N logical threads in my Go program.
  - Each logical thread creates a database connection, for its
exclusive usage.
Logical thread LT1 creates connection C1, logical thread LT2 creates
connection C2, etc.
Logical thread LT1 only makes call to connection C1, never to
connection C2, C3, etc. Same for other threads.

Normally, in any mainstream language (C, PHP, etc), the same OS thread
makes the successive calls to sqlite3_prepare(), sqlite3_step(),
sqlite3_column(), sqlite3_finalize(), etc.
In the loop to retrieve all records in a table, there is no reason to
call sqlite3_step() on a different OS thread each time.

But in Go, it is possible that each call to sqlite3_step() is scheduled
to run on a different OS thread.
Indeed, the execution of a logical Go thread (called a Goroutine) can
switch from one OS thread to another one, without the user being aware
of it, at each function call.

E.g. logical thread LT1 can dispatch function calls on connection C1
like this:
OS thread a:  --sqlite3_prepare(C1)--  --sqlite3_column(C1)--
OS thread b:  --sqlite3_step(C1)--     --sqlite3_column(C1)--
OS thread c:  --sqlite3_step(C1)--     --sqlite3_finalize(C1)--

For each connection, function calls always occur sequentially, but
possibly on a different OS thread each time.

Logical thread LT2 executes simultaneously, but calling functions only
on connection C2.
Logical thread LT3 executes simultaneously, but calling functions only
on connection C3.
etc...

So, in this scenario, I imagine that with SQLITE_THREADSAFE=1 or
SQLITE_THREADSAFE=2, there should be no problem?

Is it correct to say that each function of the C API doesn't care
which OS thread it is run on, as long as the sequence of calls is correct?

I know that in www.sqlite.org/threadsafe.html, it is written that "In
serialized mode, SQLite can be safely used by multiple threads with no
restriction.", but I just wanted to have a confirmation that it really
applies in the particular scenario above.


2014-11-05 23:13 GMT+01:00 Simon Slavin slav...@bigfraud.org:


 On 5 Nov 2014, at 10:05pm, nicolas riesch nicolas.rie...@gmail.com
 wrote:

  Even if the user writes a Go program with only one logical thread,
  he has no control about which OS thread will process a function
  call.
 
  This means that EACH SUCCESSIVE function in the sequence above
  can be processed on a DIFFERENT OS THREAD.
 
  It means that to run safely, sqlite source code should not depend
  in any way on the identity of the threads, which must be fully
  interchangeable. So, the following conditions should be true. Are
  these sentences correct?
 
  1) no thread-local storage is used in sqlite code.
  2) thread ids (gettid()) are not used.
  3) when a function of the API enters a mutex, it leaves it before
  the function returns. Between two API function calls, no mutex
  should be locked (else, it would be impossible to ensure that the
  mutex is unlocked by the same thread that locked it).
  4) all file locking information is attached to connections, and not
  to threads.

 Since you don't already refer to it, can I ask that you read this page

 https://www.sqlite.org/threadsafe.html

 and then ask any questions which remain, plus any new ones ?  You
 should probably tell us which threading mode you intend to use based
 on the needs you outline above.

 Simon.

___
sqlite-users mailing list
sqlite-us...@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterD-2.0 - thread safety

2014-11-05 Thread Harshavardhana
On Wed, Nov 5, 2014 at 6:25 PM, Justin Clift jus...@gluster.org wrote:
 Forwarding this, as it's possibly very relevant for the
 GlusterD-2.0 stuff.  It looks like the threading behaviour of Go
 is very different to other languages (somewhat practical outline
 given below).

 (Start with the "Normally, in any mainstream language ..." bit)

 It's probably worth being aware of sooner rather than later, and
 thinking through potential ramifications before we decide to go
 down that route. ;)

 Regards and best wishes,

 Justin Clift
 


You didn't post the reply to that thread -
http://permalink.gmane.org/gmane.comp.db.sqlite.general/91532

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel