Re: [sugar] Using threads in an Activity

2008-07-20 Thread Hemant Goyal
Hi Shikar,

You might want to look at how James Simmons uses a thread in his ReadEtexts
Activity to do speech synthesis.

http://dev.laptop.org/git?p=activities/readetexts;a=blob;f=ReadEtextsActivity.py;h=c8c7086a4dd9448a2f81b96150dcb551c6d5d217;hb=HEAD

Around line 56 or so...
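
The general pattern looks roughly like this (a minimal sketch, not the
actual ReadEtexts code; `synthesize` is a stand-in for whatever call
actually produces audio):

```python
import threading

def speak_text(text, synthesize):
    """Run a (possibly slow) speech synthesis call off the main loop.

    `synthesize` is an illustrative assumption, not part of the
    ReadEtexts code: any callable that takes the text and produces
    audio (e.g. a call into espeak) would do.
    """
    worker = threading.Thread(target=synthesize, args=(text,))
    worker.daemon = True  # don't keep the Activity alive on exit
    worker.start()
    return worker
```

In a real Activity you would also marshal any UI updates back to the GTK
main loop (e.g. via gobject.idle_add) rather than touching widgets from
the worker thread.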

Hope it helps :)

Best,
Hemant
___
Sugar mailing list
Sugar@lists.laptop.org
http://lists.laptop.org/listinfo/sugar


[sugar] super user privileges for speech-dispatcher and location of configuration files for olpc

2008-07-03 Thread Hemant Goyal
Hi,

We want to run the speech-dispatcher daemon service on the XO for providing
a speech synthesis environment in the laptop. For our purpose we want to
modify the configuration file of speech-dispatcher in
/etc/speech-dispatcher/speechd.conf from sugar-control panel. sugar-control
panel requires the configuration file to be user writable (which is not the
case with the file in /etc/speech-dispatcher).

The idea we have at the moment is to maintain a copy
of /etc/speech-dispatcher/speechd.conf somewhere in
/home/olpc/.folder/speechd.conf, and start the daemon service by pointing
it to this directory instead of /etc/speech-dispatcher. This would allow
us to modify the configuration file from sugar-control-panel.

To implement this, I will have to modify the speech-dispatcher package
(for olpc2, olpc3) to relocate the configuration file to the new
user-writable directory, and modify the init script for speech-dispatcher
to look for the configuration file in the new directory.
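
As a rough sketch of the relocation step (the function name and paths
here are illustrative assumptions, not the final packaging decision),
the first-run copy could look like:

```python
import os
import shutil

def ensure_user_config(system_conf, user_dir):
    """Copy the system speechd.conf into a user-writable directory
    (only on first run) and return the path of the user copy."""
    user_conf = os.path.join(user_dir, 'speechd.conf')
    if not os.path.exists(user_conf):
        if not os.path.isdir(user_dir):
            os.makedirs(user_dir)
        shutil.copy(system_conf, user_conf)
    return user_conf
```

The init script would then point speech-dispatcher at the returned path;
check `speech-dispatcher --help` for the exact flag your version uses to
select a configuration location.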

Can anybody suggest alternatives, or any caveats regarding the suggested
approach?

Thanks!
Hemant


Re: [sugar] next sugar meeting: text-to-speech

2008-03-19 Thread Hemant Goyal
Hi all,

My apologies for not being present at this sugar meeting (my school exams
ended today).

* santhosh and HFactor present Dhvani, a text-to-speech system for
 indian languages.
o santhosh has already worked into integrating Dhvani into Speech
 Dispatcher.


This is great news! The packaging of speech-dispatcher and dotconf is
ongoing, and we'll pick up some speed now that we are free from exams. One
of the dependencies of speech-dispatcher, dotconf, has already been
approved, and the author of the package has been sponsored by a Fedora
packaging mentor. The next package to be accepted will be speech-dispatcher.


 in today meeting we have agreed on the convenience of having a meeting
 mainly focused on the use of text-to-speech systems in Sugar. If
 everybody agree, will take place Tuesday March 25 at 15.00 UTC.


Brilliant! I've added a few points that I'd like to discuss with the
sugar team.

I'm sending this to the Accessibility list in the hope that someone
 will be interested in contributing in this area.

 If you are interested in attending, please add to the attendants and
 topics in:


 http://wiki.laptop.org/go/Sugar_dev_meeting#Tuesday_March_25_2008_.28.22TTL-meeting.22.29_exceptional_time:_15.00_.28UTC.29


Just as an update, I tried running Orca with speech-dispatcher and at a
high level it worked fine. However, since we have not yet been able to
integrate basic speech synthesis into Sugar, I have held off researching
this option for now.

Thank you for keeping a special meeting for TTS!

Best,
Hemant


Re: [sugar] Interested in the Google Summer of Code

2008-03-19 Thread Hemant Goyal
Hi all,

As you are all already aware, we have been
planning/designing/experimenting with speech synthesis tools for the XO.

So after all the designs (and next Sugar Meeting for TTS) I'd like to
undertake the project and write code to integrate speech synthesis into
Sugar in the coming summer. I think the work will mostly involve playing
with sugar-control-panel, and a little bit with GTK Selections for the
highlight and speak option. So it'll be really nice if someone from the
Sugar Team becomes a mentor for the project.

I've already put a basic idea plan at
http://wiki.laptop.org/go/Summer_of_Code/Ideas#Integration_of_Speech_Synthesis_Technology_into_Sugar

Thanks!
-- 
Hemant


Re: [sugar] Control-panel GUI design

2008-03-17 Thread Hemant Goyal
Hi,

I had posted about the Speech Synthesis Control Panel as well. Can we also
discuss the inclusion of speech synthesis control parameters within this
control panel, while this discussion is alive?

A link to my previous post -
http://lists.laptop.org/pipermail/sugar/2008-March/004411.html

Hi,

 I added the section to the sugar-control-panel page so we can discuss
 the designs for the detail pages here.


 http://wiki.laptop.org/go/Sugar_Control_Panel#GUI_for_the_command_line_tool


Thanks,
Hemant


Re: [sugar] Speech Synthesis Integration - User Interfaces and other Implementation Considerations

2008-03-09 Thread Hemant Goyal
Hi,

Can someone please take a look at this mail and provide some guidance?

Thanks!
Hemant

On Sat, Mar 1, 2008 at 11:18 PM, Hemant Goyal [EMAIL PROTECTED]
wrote:

 Hi,

 I was thinking while the packaging of speech dispatcher continues I could
 finalize certain UI considerations for speech synthesis. I had a word with
 Tomeu and he advised me to write all the points in a mail to the list.

 Particularly we want to focus on :

 1. Speech configuration management for Sugar
    1. Provision of a control panel for modifying speech synthesis
       parameters
    2. How these parameters will be stored and retrieved when changes
       are made
    3. What parameters to expose?
       1. Language - perhaps this should be the Sugar default?
       2. Voice selection - male/female, child/adult, age
       3. Rate
       4. Pitch
       5. Volume
 2. GUI considerations
    1. A speech synthesis button
       1. Has many states - Play/Stop (Pause?)
       2. Reveals a control panel for modifying the speech synthesis
          parameters, and provides a text box for entering text for
          speech synthesis?
    2. What text to send for speech synthesis?
       1. If some text is highlighted, that text should be sent
       2. If no text is highlighted and the speech synthesis button is
          clicked:
          1. Send the text of the active window and provide
             karaoke-style highlighting of the text?
          2. Continue synthesis until the end of the document, or until
             the stop button is pressed
    3. Possibly a speech synthesis keyboard shortcut too - it should
       trigger the speech synthesis button
 3. Speak a welcome message to the child when the XO boots up?
    ("Hello xyz, welcome to Sugar", or something like that?)
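
 To make the storing/retrieving point concrete, a minimal sketch (the
 JSON file format, defaults, and names are illustrative assumptions, not
 a decided design):

```python
import json
import os

# Illustrative defaults for the parameters listed above.
DEFAULTS = {'language': 'en', 'voice': 'FEMALE',
            'rate': 0, 'pitch': 0, 'volume': 100}

def load_params(path):
    """Return the saved speech parameters, falling back to defaults."""
    params = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            params.update(json.load(f))
    return params

def save_params(path, params):
    """Persist the parameters so the control panel can restore them."""
    with open(path, 'w') as f:
        json.dump(params, f)
```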

 Please share any other ideas which you think can improve the User
 Experience wrt speech synthesis.

 I'd like to write the patches and wrap up the coding by the time speech
 dispatcher RPMs are ready so that we can roll this feature in the XOs and
 get some feedback :)

 Thanks!
 --
 Hemant Goyal


Re: [sugar] New multilingual dictionary activity

2008-03-09 Thread Hemant Goyal
Hi,


 Next steps:

   * I have a Speak button that I'd like to hook up to espeak with the
 appropriate accent loaded.  Josh, any interest in helping to get
 that working?  Should I just wait for the speech server?


The speech-dispatcher API is what you can use for speech synthesis. The
Python API is available here:
http://cvs.freebsoft.org/repository/speechd/src/python/speechd/client.py?view=markup
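
For a flavour of that API, here is a sketch. `SSIPClient`, `speak()` and
`close()` come from the linked client.py; the factory indirection is my
own addition so the function can be exercised without a running
speech-dispatcher daemon:

```python
def speak(text, client_factory=None):
    """Send `text` to speech-dispatcher through its Python API."""
    if client_factory is None:
        import speechd  # the module from the linked client.py
        client_factory = lambda: speechd.SSIPClient('sugar-activity')
    client = client_factory()
    try:
        client.speak(text)
    finally:
        client.close()  # always release the connection
```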

I am still trying to get speech-dispatcher and its dependencies packaged
for Fedora :\. That is the only reason the speech server is still not
present on the XO :(. I really request an expert to look into this and
help speed up the work.

In the long run, we plan to explore text-to-speech engines other than
eSpeak (because of voice quality issues), so coupling your app to eSpeak
is not so sensible when you can use speech-dispatcher directly for all
testing.

I have a few unapproved speech-dispatcher packages here:
http://www.nsitonline.in/hemant/stuff/speechd-rpm/

You'll need the dotconf packages to install speech-dispatcher. Perhaps
Assim can help you get a *usable* dotconf package.

I'd gladly hook your activity to speech dispatcher and make it self voicing
_once_ speechd gets accepted in the OLPC build.

Thanks!
Hemant


[sugar] An Update about Speech Synthesis for Sugar

2008-02-18 Thread Hemant Goyal
Hi,

It's great to see many other developers sharing the idea we have been
trying to implement right within the Sugar environment.

We have been working on integrating speech-synthesis into Sugar for quite
some time now. You can check out our ideas here :
http://wiki.laptop.org/go/Screen_Reader

We are also documenting all our ideas and requirements with respect to
Speech Synthesis in this Requirements Analysis Document here :
http://www.nsitonline.in/hemant/stuff/Speech%20Synthesis%20on%20XO%20-%20Requirements%20Analysis%20v0.3.5.pdf

It outlines some of our immediate as well as long term goals wrt
speech-synthesis on the XO. Your ideas, comments and suggestions are
welcome.

I'd like to update the list about our progress:

   1. speech-dispatcher has been selected as a speech synthesis server
   which will accept all incoming speech synthesis requests from any sugar
   activity (example: Talk N Type, Speak etc)
   2. speech-dispatcher provides a very simple to use API and client
   specific configuration management.

So what's causing the delays?

   1. speech-dispatcher is not packaged as an RPM for Fedora, so at
   present I am mostly working on an RPM package so that it can be
   accepted by the Fedora community and ultimately dropped into the OLPC
   builds. You can track the progress here:
   https://bugzilla.redhat.com/show_bug.cgi?id=432259 I am not an expert
   at RPM packaging, hence it's taking some time at my end. I'd welcome
   anyone to assist me and help speed up the process.
   2. dotconf, on which speech-dispatcher depends, is being packaged by
   my team mate Assim. You can check its progress here:
   https://bugzilla.redhat.com/show_bug.cgi?id=433253

Some immediate tasks that we plan to carry out once speech-dispatcher is
packaged and dropped into the OLPC builds are :

   1. Provide the much needed play button, with text highlight features
   as discussed by Edward.
   2. Port an AI Chatbot to the XO and hack it enough to make it speak to
   the child :).
   3. Encourage other developers to make use of speech-synthesis to make
   their activities as lively and child friendly as possible :)
   4. Explore orca and other issues to make the XO more friendly for
   blind/low-vision students

@James: We envision that speech synthesis will surely get integrated with
Read in due time. I think it would be great if maybe Gutenberg text could
be loaded right from within Read itself?

I was not planning on anything so fancy.  Basically, I was frustrated
 that I had a device that would be wonderfully suited to reading
 Gutenberg etexts and no suitable program to do it with.  I have written
 such an Activity and am putting the finishing touches on it.  As I see
 it, the selling points of the Activity will be that it can display
 etexts one page at a time in a readable proportional font and remember
 what page you were on when you resume the activity.  The child can find
 his book using the Gutenberg site, save the Zip file version to the
 Journal, rename it, resume it, and start reading.  It will also be good
 sample code for new Activity developers to look at, even children,
 because it is easy to understand yet it does something that is actually
 useful.  I have written another Activity which lets you browse through a
 bunch of image files stored in a Zip file, and it also would be good
 sample code for a new developer, as well as being useful.


Warm Regards,
Hemant