- Original Message -
> Hi Atin,
> This looks interesting. Currently Heketi needs to ssh into a system
> to send commands to GlusterFS. It would be great to determine the
> interfaces needed and how they would work with programs like Heketi. Do you
> guys have a simple few
> > interact with other applications. Also, normally on REST calls, JSON or
> > XML is returned. Is there a reason to go with binary interfaces? Do
> > you see a need to use protobuf?
>
> The protobuf-based RPC calls are meant for communication between Gluster
> processes like
- Original Message -
> hi,
> AFR needs context-based defaults for quorum, where by default the
> quorum value is 'none' for 2-way replica and 'auto' for 3-way replica.
> Anuradha sent http://review.gluster.org/11872 to fix the same. Maybe we
> can come up with a more generic solution.
> > Here are a few things that are not clear to me.
> >
> > 1) Does the context-based default value for an option come into effect
> > only when .value in the vme table is NULL?
> My feeling is that if there is a context-based default, then the static
> default value should be NULL.
> >
> > 2) IIUC, the
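For illustration, a hedged sketch of a volume-option (vme) table entry of the
shape under discussion, with the static default left NULL; the struct and
field names below are modelled on glusterd's option table but are not quoted
from it:

struct volopt_map_entry {
        char *key;
        char *voltype;
        char *option;
        char *value;    /* static default; NULL => context-based default */
};

static struct volopt_map_entry vme_example[] = {
        /* The quorum default depends on context ('none' for 2-way replica,
         * 'auto' for 3-way), so no static default is given here. */
        { .key     = "cluster.quorum-type",
          .voltype = "cluster/replicate",
          .option  = "quorum-type",
          .value   = NULL },
};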
- Original Message -
>
>
> On 09/01/2015 12:05 PM, Krishnan Parthasarathi wrote:
> >>> Here are a few things that are not clear to me.
> >>>
> >>> 1) Does the context-based default value for an option come into effect
All,
I have been exploring different ways of addressing the GlusterFS messaging
layer recently. My motivations are to find a library/framework that provides
messaging capabilities over the network with the following characteristics:
- Better expressibility - provide useful abstractions
# GlusterD 2.0 plan (Aug-Oct '15)
[The text in this email is structured using markdown format]
This document outlines efforts planned towards [thousand-node-glusterd][1] in
the coming 2-3 months. The following email thread on gluster-devel provides
context on previous discussions around
- Original Message -
Did you retrigger the jobs? The jobs should no longer fail on release-3.6. A
couple of my changes passed yesterday after retriggering.
Retriggering failed with a different problem. The test failed to fetch the
changeset from gerrit. Is this known?
See
The following console output complains of missing tools,
snip
The following required tools are missing:
* dbench
* mock
* nfs-utils
* yajl
* xfsprogs
Please install them and try again.
snip
This was observed on nbslave74.cloud.gluster.org.
For more information, see
http://review.gluster.org/#/c/11241/ fails the compare-bug Jenkins test [1],
complaining: No BUG id, but topic 'bug-1236272' does not match 'rfc'.
But the commit message has a BUG id, the same as the one in the topic. Am I
missing something?
[1] -
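For context, the 'bug-1236272' topic is normally derived from the commit
message's BUG footer when submitting via rfc.sh, so the message would be
expected to carry a footer like this hedged sketch (subject and Change-Id are
placeholders):

    glusterd: illustrative subject line

    BUG: 1236272
    Change-Id: I0000000000000000000000000000000000000000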
This approach could still surprise the storage admin when glusterfs(d)
processes bind to ports in the range where brick ports are being assigned.
We should make this predictable by reserving the brick ports using
net.ipv4.ip_local_reserved_ports.
Initially reserve 50 ports starting at
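A hedged example of that reservation, assuming the usual brick base port
49152 (the intended starting port is cut off above):

    sysctl -w net.ipv4.ip_local_reserved_ports=49152-49201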
It seems this is exactly what's happening.
I have a question. I get the following data from netstat and grep:
tcp  0  0  f6be17c0fbf5:1023   f6be17c0fbf5:24007  ESTABLISHED  31516/glusterfsd
tcp  0  0  f6be17c0fbf5:49152  f6be17c0fbf5:490    ESTABLISHED
- Original Message -
This is caused because when bind-insecure is turned on (which is the default
now), it may happen that a brick is not able to bind to the port assigned by
glusterd, for example 49192-49195...
It seems to occur because the rpc_clnt connections are binding to ports in
Yes, we could take the synctask size as an argument to synctask_create.
The increase in synctask threads is not really a problem; it can't
grow beyond 16 (SYNCENV_PROC_MAX).
- Original Message -
On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
- Original Message -
We do have a way to tackle this situation from the code. Raghavendra
Talur will be sending a patch shortly.
We should fix it by undoing what the daemon refactoring did, which broke the
lazy creation of the UUID for a node. Fixing it elsewhere is just masking the
real cause.
Meanwhile 'rm' is the stop
Niels,
snip
Test 22: 81: EXPECT '4' count_lines $M0/rmtab
not ok 22 Got 2 instead of 4
snip
The above failure was observed with http://review.gluster.com/10445.
For more details -
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11321/consoleFull
How should I proceed with
- Original Message -
hi,
Does anyone know why glusterfs hangs with valgrind?
When do you observe the hang? I started a single brick volume,
enabled valgrind on bricks and mounted it via fuse. I didn't
observe the mount hang. Could you share the set of steps which
lead to the hang?
Are we merging patches without a successful NetBSD regression run?
- Original Message -
Just triggered a run [1]. It should complete in about an hour. I'll
merge once complete.
[1]
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/7191/console
On Mon, Jun 22,
I would expect the snapshot-scheduler feature to be maintained by the
proposed snapshot maintainer for now.
IIUC, volume-snapshot (glusterd mainly), USS, and snapshot-scheduler (Python
based) all fall into snapshot maintenance.
I am also not a major fan of a single component being managed
Hi,
Can you please merge the following patches:
http://review.gluster.org/#/c/11087/
Avra,
I think you should maintain the snapshot scheduler feature
and shouldn't depend on me as a glusterd maintainer, for
merging changes. I am not really maintaining snapshot scheduler
in any sense of
Initially ctx had a one-to-one mapping with a process and a volume/mount, but
with libgfapi and libgfchangelog, ctx has lost the one-to-one association
with a process.
The question is whether we want to retain a one-to-one mapping between ctx
and process, or between ctx and volume.
Yes and no. The problem is
http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/
Merged.
The test mentioned in $Subj has failed in [1]
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10343/consoleFull
This was fixed in master by Jeff - http://review.gluster.org/11037. I have
backported it to release-3.7 - http://review.gluster.org/11145. We need to
merge
This patch has passed the regression test on Linux and has received
Code-Review +2. The NetBSD regression isn't automatically run
on this patch. Is there a way to (re)trigger the NetBSD regression
on this patch?
Re-triggered. Here is what you can do in the future:
1. Find the refspec of the patch from the Download box at the top
right corner of the Gerrit interface, e.g. refs/changes/29/11129/1.
Please note you need to copy the ref for the latest patch set.
2. Log in to
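(As an aside, the same refspec can be checked out locally too; remote name
assumed:

    git fetch origin refs/changes/29/11129/1 && git checkout FETCH_HEAD)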
Looking at it from a different perspective...
As I understand it, the purpose of glusterfs_ctx is to be a container
for these resources. Therefore, the problem is not that the resources
aren't shared within a context but that the contexts aren't shared
among glfs objects. This happens
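A minimal sketch of that reading of the problem; apart from glusterfs_ctx and
the glfs objects, every name below is illustrative:

/* Illustrative only: two handles, one shared resource container. */
struct glusterfs_ctx;                     /* the container for the resources */

struct glfs_handle {
        struct glusterfs_ctx *ctx;        /* today: each handle gets its own */
};

/* Sharing contexts among glfs objects, per the observation above: */
static void
share_ctx (struct glfs_handle *a, struct glfs_handle *b)
{
        b->ctx = a->ctx;                  /* both now pool the same resources */
}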
Has anyone seen this test fail for them? This test passes on
my laptop, which leads me to believe that the test case or the underlying
issue could be non-deterministic.
Pranith,
Could you tell me what I could look for to analyse this further?
- Original Message -
This seems to happen because of a race between STACK_RESET and stack
statedump. Still thinking how to fix it without taking locks around
writing to the file.
Why should we still keep the stack being reset as part of
- Original Message -
This seems to happen because of a race between STACK_RESET and stack
statedump. Still thinking how to fix it without taking locks around
writing to the file.
Why should we still keep the stack being reset as part of the pending pool of
frames? Even if we had to
This seems to happen because of a race between STACK_RESET and stack
statedump. Still thinking how to fix it without taking locks around
writing to the file.
Why should we still keep the stack being reset as part of the pending pool of
frames? Even if we had to (I can't guess why), when we remove we
KP/Kaushal,
Can we take [2] into the release-3.7 branch?
NetBSD regression hasn't run on this. I will merge this
once NetBSD passes.
I've put together a document which I hope captures the most recent
discussions I've had, particularly those in Barcelona. Commenting should be
open to anyone, so please feel free to weigh in before too much code is
written. ;)
Thanks Jeff. This document summarises the discussion we had in
While looking into some of the races that seem possible in the new
auth-cache and netgroup/export features for Gluster/NFS, I noticed that
we do not really have a common way to do reference counting. I could
really use that, so I've put something together and would like to
request some
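A minimal sketch of the kind of common refcounting helper being asked for,
built on GCC atomic builtins; all names here are illustrative, not the API
actually proposed:

struct gf_ref_example {
        int    cnt;                        /* number of current holders */
        void (*release) (void *data);      /* destructor, run at cnt == 0 */
        void  *data;
};

static inline void
ref_get (struct gf_ref_example *ref)
{
        __sync_add_and_fetch (&ref->cnt, 1);
}

static inline void
ref_put (struct gf_ref_example *ref)
{
        if (__sync_sub_and_fetch (&ref->cnt, 1) == 0)
                ref->release (ref->data);
}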
All,
I will be pushing the GlusterFS 3.7.1 release in a couple of hours. This
release is to fix BZ 1223215 - gluster volume status fails with a 'locking
failed' error message.
If there are other fixes whose absence causes users unacceptable
inconvenience, please get your patches merged by the respective maintainers in
Thank you Emmanuel.
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
Yes. 3.7.1 aims to alleviate the volume-status issue as the top priority.
We couldn't get the above patch in within a reasonable time. I will be
backporting it and ensuring it makes it into 3.7.2.
be merged and needs our attention, add them back to the etherpad at the END!
Don't forget to add one of us as a reviewer, it helps with some sanity checks
I could perform on this data set ;) Feel free to ask if anything is not
clear.
Please add a 'keyword' by which you would like to indicate
So make sure to keep us informed of the changes you want to get merged.
Please use https://public.pad.fsfe.org/p/3.7.1-glusterd-patches to add patches
that you would like merged for 3.7.1. Be mindful and add your entry at the end.
We are going to go through them from the top.
- Original Message -
I'm currently running the test in a loop on slave0. I've not had any
failures yet.
I'm running on commit d1ff9dead (glusterd: Fix conf-generation to
stop new peers participating in a transaction, while the transaction
is in progress.) , Avra's fix which was
- Original Message -
So make sure to keep us informed of the changes you want to get merged.
Please use https://public.pad.fsfe.org/p/3.7.1-glusterd-patches to add
patches
that you would like merged for 3.7.1. Be mindful and add your entry at the
end.
We are going to go through
I would encourage doing it before this patch gets into the codebase.
The duplicate peerinfo object issue is different from the problems
in the current generation number scheme. I don't see why this patch
needs to wait. We may need to keep volume-snapshot-clone.t disabled
until the time the
- Original Message -
Pranith/Xavi,
./tests/basic/ec/ec.t is part of
https://public.pad.fsfe.org/p/gluster-spurious-failures
but is not part of is_bad_test() in run-tests.sh. It has repeatedly
failed another, independent spurious-regression-failure fix,
All,
The following are the regression test failures that are being looked
at by Atin, Avra, Kaushal, and myself.
1) ./tests/bugs/glusterd/bug-974007.t to be fixed by
http://review.gluster.org/10872
2) ./tests/basic/volume-snapshot-clone.t, to be fixed (partially) by
Pranith/Xavi,
./tests/basic/ec/ec.t is part of
https://public.pad.fsfe.org/p/gluster-spurious-failures
but is not part of is_bad_test() in run-tests.sh. It has repeatedly
failed another, independent spurious-regression-failure fix,
http://review.gluster.com/10872
(for
http://review.gluster.org/10872 needs review(s) from folks who
understand DHT/rebalance process lifecycle.
The regressions have passed on http://review.gluster.org/#/c/10871/
Patch merged.
/20/2015 12:07 PM, Krishnan Parthasarathi wrote:
No concerns.
- Original Message -
Given that the fix which is tested by this patch is no longer
present, I
think we should remove this patch from the test-suite itself. Could
anyone confirm if there are any concerns in doing so
Are the following tests in any spurious regression failure lists? or,
noticed by others?
1) ./tests/basic/ec/ec-5-1.t
Run:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9363/consoleFull
2) ./tests/basic/mount-nfs-auth.t
Run:
Yes, and we should add them to is_bad_test() in run-tests.sh too.
I have added these tests to the etherpad. See http://review.gluster.org/10888
for adding them to run-tests.sh.
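For reference, a hedged sketch of the shape an is_bad_test() helper in
run-tests.sh could take; the real function and its test list may differ:

function is_bad_test ()
{
    local t
    # tests named earlier in this thread as known-spurious
    for t in ./tests/basic/ec/ec.t ./tests/basic/ec/ec-5-1.t \
             ./tests/basic/mount-nfs-auth.t; do
        if [ "$1" == "$t" ]; then
            return 0            # known-bad: ignore its failure
        fi
    done
    return 1
}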
~kp
Recently, with glusterfs-3.7 beta1 RPMs, while creating a VM image using
qemu-img, I see the following errors:
[2015-05-08 09:04:14.358896] E [rpc-transport.c:512:rpc_transport_unref] (--
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f51f6bb6516] (--
Oh nice, I might have missed the mails. Do you mind sharing the plan for
4.0? Any reason why you guys do not want to continue with the
glusterd-as-a-translator model?
I don't understand why we are using the translator model in the first place.
I guess it was to reuse the RPC code. You should be able to shed
Why not break glusterd into small parts and distribute the load among
different people? Did you guys plan anything in 4.0 for breaking up glusterd?
It is going to be maintenance hell if we don't break it up sooner.
Good idea. We have thought about it. Just re-architecting glusterd doesn't
(and will
There are no sleeps between polls in the event poll thread, ref:
event_dispatch_poll(...).
I am not sure if we are referring to the same 'poll'. I haven't gotten a
chance to look into this. I will try adding logs when I get back to this.
~kp
- Original Message -
Krishnan Parthasarathi
- Original Message -
On 05/07/2015 02:41 PM, Krishnan Parthasarathi wrote:
Pranith,
The above snippet says that the volume has to be stopped before being
deleted. It also says that volume-stop failed. I would look into the glusterd
logs to see why volume-stop failed,
cmd
You're welcome! Without the link this mail thread would remain incomplete in
the archives forever :-)
- Original Message -
On 05/07/2015 03:02 PM, Krishnan Parthasarathi wrote:
Could you provide the link to the etherpad containing the release notes
draft?
Thanks, missed it here
Could you provide the link to the etherpad containing the release notes draft?
- Original Message -
Hi All,
Humble, Raghavendra Bhat, and I have worked on putting together draft
release notes for 3.7.0 at [1].
The release notes draft does need help with known issues and minor
- Original Message -
hi,
I think we fixed quite a few heavy hitters in the past week, and a
reasonable number of regression runs are passing, which is a good sign.
Most of the new heavy hitters in regression failures seem to be code
problems in quota/afr/ec, not sure about
- Original Message -
On Wed, May 06, 2015 at 02:52:05AM -0400, Krishnan Parthasarathi wrote:
The following threads have only the top-most frame. Is that expected?
Not sure. info threads lists the threads; don't you have anything
outside of lwp_park?
I do. I see 2 threads 'waiting
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
On gdb'ing into one of the brick processes, I see the following backtrace.
This is seen with other threads in the process too. This makes it difficult
to analyse what could have gone wrong. Is there something I
- Original Message -
On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
I see the following log from the brick process:
[2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
4-tcp.patchy-server: binding to failed: Address already in use
This
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
We need help in getting gdb to work with proper stack frames. It is mostly
my lack of *BSD knowledge.
What problem do you run into?
On gdb'ing into one of the brick processes, I see the following backtrace
If glusterd itself fails to come up, of course the test will fail :-). Is it
still happening?
Pranith,
Did you get a chance to see glusterd logs and find why glusterd didn't come up?
Please paste the relevant logs in this thread.
We want to place files on any subvol other than the local one, contrary to
NUFA.
From GDB, syncop_getxattr is on the same backtrace as
event_dispatch_epoll_handler, which runs in the main thread. So no response
can come in?
That is the short answer.
syncop_* calls need to be executed in a
All,
I am taking a shot at providing developer documentation[1] for the 'mystic'
syncop framework. People who care for this, please provide feedback to help
improve the content.
[1] - http://review.gluster.com/10365
cheers,
~KP
I added a private function in dht_create() as below.
When the current pwd is not /, everything is OK; otherwise, syncop_getxattr
gets blocked (in the target subvol, server3_3_getxattr did not run).
dict_t *xattr = NULL;
loc_t loc = {0, };
memset (loc.gfid, 0, 16);
loc.gfid[15] = 1;    /* the well-known gfid of the root directory */
ret =
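The truncated reply above ("syncop_* calls need to be executed in a...")
presumably ends in "synctask". A hedged sketch of scheduling the blocked work
that way, so the epoll thread that delivers responses never waits on them
itself; the worker functions are illustrative, and synctask_new's signature
follows syncop.h of that era and may differ:

#include "syncop.h"        /* synctask_new, syncenv */

/* Runs inside a synctask, so it may safely block in syncop_* calls. */
static int
getxattr_worker (void *opaque)
{
        /* the syncop_getxattr () call from above would go here */
        return 0;
}

static int
getxattr_done (int ret, call_frame_t *frame, void *opaque)
{
        return ret;
}

/* Called from dht_create() instead of invoking syncop_getxattr directly. */
static int
schedule_getxattr (xlator_t *this)
{
        return synctask_new (this->ctx->env, getxattr_worker,
                             getxattr_done, NULL, NULL);
}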
Following is the gerrit query that could help reviewers/maintainers
to track changes ready to be merged. Sub-maintainers should extend this
query to track component specific patches.
query string
is:open label:Code-Review=1 label:Verified+1,user=jenkins
label:Verified+1,user=1000657
/query
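The same query can be run over Gerrit's SSH interface too; a hedged example,
assuming the standard SSH port:

    ssh -p 29418 review.gluster.org gerrit query \
        'is:open label:Code-Review=1 label:Verified+1,user=jenkins'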
What I need to do is combine those two commands into a single one.
What I am thinking is, at the CLI level, we would first send the attach
command and, if it returns successfully, send the tier-start command.
Composing the attach-tier and tier-start commands in the CLI is an acceptable
solution.
Kaushal is looking into this issue.
- Original Message -
Hi All,
I see glusterd segfault for my patch with the following stack trace. I can
see that it is not related to my patch.
Could someone look into this? I will retrigger the regression for my patch.
#0 0x7f86f0968d16 in
We have not yet branched release-3.7. I intend to do that tomorrow after
we hear back from maintainers.
From 'core' glusterd there is one patch pending merge -
http://review.gluster.com/10192.
This patch blocks the release but doesn't have to block branching. I would
try to get this in
Apologies for breaking the build. I am out of office. Please revert
review #9492.
- Original Message -
As many of you have undoubtedly noticed, we're now in a situation where
*all* regression builds are now failing, with something like this:
-
cc1: warnings being treated as
Thanks Emmanuel for posting fixes for all these issues!
http://review.gluster.org/10030
Review done, awaiting clarification on a minor question from Venky (changelog
maintainer).
http://review.gluster.org/10032
Patch has been merged.
http://review.gluster.org/10033
Review done. Changes
I have sent a patch to fix this - http://review.gluster.org/9851
This should address the test failure.
- Original Message -
tests/bugs/replicate/bug-918437-sh-mtime.t is failing upstream. It
passes if I don't disable nfs:
-TEST $CLI volume set $V0 nfs.disable on
+#TEST $CLI volume
I have provided +1. The patch looks good to me.
- Original Message -
On 25 Feb 2015, at 22:55, Justin Clift jus...@gluster.org wrote:
snip
* 6 x tests/basic/mgmt_v3-locks.t
Failed tests: 11-13
30% failure rate on this.
This one looks like it's addressed by:
All,
We have merged the daemon management refactoring changes [1]
successfully. Thank you to all those who reviewed it. As part
of this patch we have moved gluster-NFS, the gluster self-heal daemon,
the quota daemon, and the snapshot daemon (for serving User Serviceable
Snapshots) into the new framework. This
On to the logistics:
When: I'm looking at sometime during the second week of May (May 11-15).
Alternatively, the third week of April (April 13-19), though I'm
concerned about being able to get it all in place before then. I'd like
to have at least one day's worth of scheduled presentations,
Glusterd daemon management code refactoring is being worked on.
See http://review.gluster.org/9428 (and dependent patches) for
current status. I have added you to the list of reviewers. Hope that
is OK with you.
- Original Message -
On 2015-01-27 11:49, Vijay Bellur wrote:
Hi All,
I have updated thousand-node-glusterd.
http://www.gluster.org/community/documentation/index.php/Features/thousand-node-glusterd#Status
~kp
- Original Message -
Updated DHT2 with a couple of links in the Status section.
We're trying to implement a global option for NFS-Ganesha that'll look like
this,
gluster vol set all features.ganesha on
This is intended to disable gluster-nfs throughout the gluster trusted pool,
start NFS-Ganesha server and configure HA for NFS-Ganesha.
A dummy translator has been
While modelling daemons as proposed here, I noticed that I didn't
foresee how abstract data types and embedded structures
(think struct list_head)
- Original Message -
Here is the first patch, http://review.gluster.org/9278, which marks the
beginning
of the implementation phase of
[Apologies for sending an incomplete message]
While implementing daemon management code as proposed here, I found it wasn't
possible to embed structures that are abstract data types (ADTs). In short,
an ADT's 'real' size is unknown to the containing type, so compilation fails.
This brings us to which
What level of such change do we expect in the 3.x development stream?
There are problems with glusterd that even reader-writer locks or RCU
won't solve, which is why there's already work in progress toward a 4.x
version. Perhaps it's selfish of me, but I'd like to see as much of our
effort
Thanks for the great work. It is heartening to see your diligence
paying off. Hoping to see it vote on gerrit soon.
~kp
- Original Message -
Hello everybody,
I enabled (or at least tried to enable) NetBSD regression test reporting
to gerrit. It does not vote for now, so that we can see
It seems simplest to store child-parent relationships (one to one)
instead of parent-child relationships (one to many). Based on that, I
looked at some info files and saw that we're already using
parent_volname for snapshot stuff. Maybe we need to change
terminology. Let's say that we use
grouped as same) may
not be safe. I am assuming you are talking about this kind of grouping.
- Original Message -
On 12/17/2014 01:54 PM, Atin Mukherjee wrote:
On 12/17/2014 01:01 PM, Lalatendu Mohanty wrote:
On 12/17/2014 12:56 PM, Krishnan Parthasarathi wrote:
I was looking
am assuming you are talking about this kind of grouping.
- Original Message -
On 12/17/2014 01:54 PM, Atin Mukherjee wrote:
On 12/17/2014 01:01 PM, Lalatendu Mohanty wrote:
On 12/17/2014 12:56 PM, Krishnan Parthasarathi wrote:
I was looking into a Coverity issue (CID 1228603
- Is there a new connection from glusterfsd (upcall xlator) to
a client accessing a file? If so, how does the upcall xlator reuse
connections when the same client accesses multiple files, or does it?
No. We are using the same connection that the client initiates to send in
fops.
So... about that new functionality. The core idea of data
classification is to apply step 6c repeatedly, with variants of DHT that
do tiering or various other kinds of intelligent placement instead of
the hash-based random placement we do now. NUFA and switch are
already examples of
Here are a few questions that I had after reading the feature
page.
- Is there a new connection from glusterfsd (upcall xlator) to
a client accessing a file? If so, how does the upcall xlator reuse
connections when the same client accesses multiple files, or does it?
- In the event of a
Anders,
### Abstract data types
struct conn_mgmt {
        struct rpc_clnt *rpc;

        int (*connect) (struct conn_mgmt *self);
        int (*disconnect) (struct conn_mgmt *self);
        int (*notify) (struct conn_mgmt *self, rpc_clnt_event_t
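The ADT-embedding limitation raised earlier in this thread is easy to
demonstrate; the types below are illustrative, and the first field is exactly
what the compiler rejects:

struct conn_mgmt_fwd;                     /* abstract: size unknown here */

struct peer_example {
        struct conn_mgmt_fwd  conn;       /* error: field has incomplete type */
        struct conn_mgmt_fwd *connp;      /* fine: pointer to incomplete type */
};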
for this. Thanks for offering help
to test it.
~kp
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
Thanks for testing the suggestion. Would you be sending a patch
using synclock_t?
Well, I suspect you will produce it much faster than I, hence I'd
Emmanuel,
OK, let me send out a patch for this. Thanks for offering help
to test it.
~kp
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
Thanks for testing the suggestion. Would you be sending a patch
using synclock_t?
Well, I suspect you will produce
Emmanuel,
Could you explain which sequence of function calls lead to
mutex lock and mutex unlock being called by different threads?
Meanwhile, I am trying to find one such sequence to understand
the problem better.
FWIW, glusterd_do_replace_brick is injecting an event into
the state machine
in place of
pthread_mutex for gd_op_sm_lock. synclock_t identifies the locker based on the
task and not the thread executing the 'lock/unlock' function. Thoughts?
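A hedged sketch of the proposed switch; synclock_init/synclock_lock/
synclock_unlock come from syncop.h, though the initializer's exact signature
varies across versions, and the function names here are illustrative:

#include "syncop.h"

static synclock_t gd_op_sm_lock_example;

static void
op_sm_lock_init (void)
{
        synclock_init (&gd_op_sm_lock_example);
}

static void
op_sm_transition (void)
{
        /* the owner is the synctask, not the pthread running it */
        synclock_lock (&gd_op_sm_lock_example);
        /* ... run the state-machine transition ... */
        synclock_unlock (&gd_op_sm_lock_example);
}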
- Original Message -
Krishnan Parthasarathi kpart...@redhat.com wrote:
Could you explain which sequence of function calls lead
The main issue I have with this, and why I didn't suggest it myself, is
that it creates a bit of a chicken and egg problem. Any kind of
server-side replication, such as NSR, depends on this subsystem to elect
leaders and store its own metadata. How will these things be done if we
create a
All,
It would be really helpful to hear some feedback on this proposal. It is
important
that we solve Glusterd's configuration replication problem at scale. Getting
this
right would help us be better prepared for the kind of changes planned for
GlusterFS 4.0.
thanks,
kp
- Original Message -
All,
We have been thinking of many approaches to address some of Glusterd's
correctness
(during failures and at scale) and scalability concerns. A recent email thread
on
Glusterd-2.0 was along these lines. While that discussion is still valid, we
have been
considering dogfooding as a viable
- Original Message -
On Thu, 18 Sep 2014 12:33:49 -0400 (EDT)
Krishnan Parthasarathi kpart...@redhat.com wrote:
I am going to be bold and throw a suggestion inspired from
what I read today. I was reading briefly about how kernel
manages its objects using the kobject data
I am going to be bold and throw a suggestion inspired from
what I read today. I was reading briefly about how kernel
manages its objects using the kobject data structure and
establishes 'liveness' (in the garbage collection sense)
relationship across objects. It allows one to group objects
into
Emmanuel,
Pranith works on glustershd, CC'ing him.
~KP
- Original Message -
Emmanuel Dreyfus m...@netbsd.org wrote:
Here is the problem: once readdir() has reached the end of the
directory, on Linux, telldir() will report the last entry's offset,
while on NetBSD, it will report
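A small self-contained illustration of the divergence (the NetBSD half of the
sentence is cut off above, so only the Linux behaviour is annotated):

#include <dirent.h>
#include <stdio.h>

int
main (void)
{
        DIR *dp = opendir (".");

        while (readdir (dp) != NULL)
                ;                                /* walk to end of directory */
        /* Linux: reports the last entry's offset; NetBSD differs here. */
        printf ("telldir at EOF: %ld\n", telldir (dp));
        closedir (dp);
        return 0;
}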
Bala,
I think using Salt as the orchestration framework is a good idea.
We would still need a consistent distributed store. I hope
Salt has the provision to use one of our choice. It could be consul
or something else that satisfies the criteria for choosing an alternate
technology.
I would wait
from the syncenv).
In summary, the callback code triggers the rescheduling of the paused task.
HTH,
KP
- Original Message -
On Wed, Sep 10, 2014 at 05:32:41AM -0400, Krishnan Parthasarathi wrote:
Let me try to explain how GD_SYNCOP works. Internally, GD_SYNCOP yields the
thread
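A purely illustrative sketch of that pattern; none of the names below are the
real GD_SYNCOP internals:

struct task;    /* assumed: an opaque synctask-like handle */

/* assumed helpers, standing in for the syncop machinery: */
extern struct task *current_task (void);
extern void pause_task (struct task *t);          /* yields the thread */
extern void resume_task (struct task *t);         /* schedules it back */
extern void issue_rpc_async (void *args,
                             void (*cbk) (void *reply, void *opaque),
                             void *opaque);
extern void store_reply (struct task *t, void *reply);
extern int  task_result (struct task *t);

/* Reply callback: runs when the RPC response arrives and wakes the task. */
static void
reply_cbk (void *reply, void *opaque)
{
        struct task *t = opaque;

        store_reply (t, reply);
        resume_task (t);            /* the paused task is scheduled back */
}

/* Synchronous-looking wrapper: issue the RPC, then park the task. */
static int
do_op_synchronously (void *args)
{
        struct task *t = current_task ();

        issue_rpc_async (args, reply_cbk, t);
        pause_task (t);             /* yields the thread back to the syncenv */
        return task_result (t);
}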