Hello Guillaume,

Thanks a lot for your quick answer.

2009/3/31 Guillaume Lerouge <[email protected]>

> Hi Keerthan,
>
> On Tue, Mar 31, 2009 at 1:21 AM, Keerthan MUTHURASA <
> [email protected]> wrote:
>
> > Hello,
> >
> > Many thanks for all these helpful details.
> >
> > 2009/3/30 Guillaume Lerouge <[email protected]>
> >
> > > Hi Keerthan,
> > >
> > > thanks for your interest in XWiki & the GSoC. I'll try answering some of
> > > your questions below.
> >
> >
> > > On Sat, Mar 28, 2009 at 9:30 PM, Keerthan MUTHURASA <
> > > [email protected]> wrote:
> > >
> > > > Hello,
> > > >
> > > > I am Keerthan Muthurasa, an MSc Software Engineering student at Oxford
> > > > Brookes University, and I am interested in doing a project for XWiki.
> > > >
> > > > I would like to discuss my ideas for the "Import Export from any other
> > > > Wiki" project, and it would be really helpful if you could give me your
> > > > opinions in return. This is the project requirement:
> > > >
> > > >
> > > > ____________________________________________________________________________________________________________
> > > > Import Export from any other Wiki
> > > > <http://dev.xwiki.org/xwiki/bin/view/GoogleSummerOfCode/ImportExportfromanyotherWiki2009>
> > > >
> > > > Create an extensible framework to import/export data between wikis. This
> > > > should handle converting the data in the pages, including links between
> > > > pages and metadata, as well as direct access to the data through either a
> > > > web service (preferred), a database or the file system.
> > > >
> > > > The system should work at least for MediaWiki and Confluence in import
> > > > mode.
> > > >
> > > >
> > > > ____________________________________________________________________________________________________________
> > > >
> > > > I will begin with some questions:
> > > >
> > > > * What does it mean when talking about converting links between pages?
> > > > (We are talking about converting internal links in the source wiki, aren't
> > > > we? That means that when importing or exporting data we should think about
> > > > exporting or importing the linked data as well, in order to keep integrity.)
> > >
> > >
> > > Indeed. Most of the time, the use case will be to import a full wiki rather
> > > than subparts, thus links would be preserved. If you want to let users
> > > import/export only subparts of a wiki (such as a space or a single page),
> > > you should provide them with a warning that some links will be broken rather
> > > than trying to import all the pages that are linked to. Or you could make
> > > importing linked-to pages an option. It could result in surprised users if
> > > someone tries to export / import one page and ends up with the 76 pages that
> > > page linked to / was linked from ;-)
> >
> >
> > I understand, I will keep these details in mind.
> >
> >
> > >
> > > Since the most common use case is to import a full wiki, it shouldn't be
> > > much of an issue.
> > >
> > > > * What does it mean when talking about exporting metadata, and direct
> > > > access to data through either a web service, a database or the file
> > > > system?
> > >
> > >
> > > Some metadata can be kept across systems. For instance, the date when the
> > > page was created, its last edit date and its previous versions might need
> > > to be preserved (if that's technically feasible). Thus it basically means
> > > taking care of all the information associated with the page other than its
> > > content.
> > >
> > > > Here is my idea for the project; if I could have some feedback, it would
> > > > be helpful for me:
> > > >
> > > >       When exporting or importing data from a given wiki to a destination
> > > > one:
> > > >       Step 1: get rid of all the syntax specific to the source wiki and
> > > > retrieve the data, metadata and other useful information. This can be
> > > > achieved using a kind of parser whose job is to scan the source page,
> > > > recognize the specific syntax and only retrieve the proper data. Concerning
> > > > encountered links, we should convert these pages as well, but we have to be
> > > > careful with cross-links (for instance, we are converting page A and A
> > > > links to B, but when converting B, B links back to A).
> > >
> > >
> > > You could start by looking at the XWiki 2.0 syntax and see everything it
> > > allows. I think that when trying to convert pages from other wikis
> > > (specifically Confluence) you will run into the following issue: some pages
> > > use macros that are defined elsewhere on the system and won't work
> > > correctly when imported into XWiki.
> >
> >
> > I have already had a look at the XWiki 2.0 syntax.
> >
> >
> > > For pure content, you should be able to import it all into XWiki without
> > > much of a problem. For content generated by a script, you could try to
> > > identify it and then issue warnings in your output such as "this is
> > > specific content that couldn't be converted".
> > > See my answer above about retrieving the content of linked-to pages.
> >
> >
> > I had a look at some of the previous threads on the mailing list regarding
> > the import/export feature.
> >
> >
> > >
> > > >       Step 2: adopt a data-centric approach to store the data in such a
> > > > way that it is easy to retrieve. We have to be careful when storing the
> > > > data since it has to keep the original page structure.
> > >
> > >
> > > Have you already looked at the content of a MediaWiki, Confluence and
> > > XWiki export file?
> >
> >
> > No, I did not, but I have a little idea about the format since several
> > parsers (XWikiParser, MediaWikiParser, ...) are dealing with a DOM.
> > Where can I get these different exported files? What is the usual way of
> > getting these files? Are we using any export utilities from the source wiki
> > in order to get an XML format file? I will investigate that for MediaWiki
> > and Confluence.
> >
> You can download Confluence from this page:
> http://www.atlassian.com/software/confluence/ConfluenceDownloadCenter.jspa
> to install it locally and play with it a bit. You could specifically take a
> look at http://confluence.atlassian.com/display/DOC/Confluence+to+XML .
> Similar documentation is probably available for MediaWiki as well but
> you'll have to look it up by yourself ;-)
>

Thanks a lot, I am having a look at the MediaWiki export format. I will set
up Confluence as well.
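
As a side note, while reading about the dump format I tried to sketch how the
export could be consumed before any syntax conversion happens. This is only a
minimal sketch based on my current understanding of the MediaWiki export
(with <page>, <title>, <revision> and <text> elements); the class name is
mine and it does nothing more than list the pages of a dump:

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class MediaWikiDumpReader
    {
        public static void main(String[] args) throws Exception
        {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream(args[0]));
            String title = null;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    String name = reader.getLocalName();
                    if ("title".equals(name)) {
                        // Title of the current <page> element.
                        title = reader.getElementText();
                    } else if ("text".equals(name)) {
                        // Wikitext of the enclosing <revision>; this is what
                        // would later be fed to the MediaWiki syntax parser.
                        String wikitext = reader.getElementText();
                        System.out.println(title + " (" + wikitext.length()
                            + " characters of wikitext)");
                    }
                }
            }
            reader.close();
        }
    }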


>
> > > In XWiki's case, data is stored in an XML format. It might be the same
> > > for Confluence & MediaWiki. If it is, you might be able to use XSLT to
> > > convert one XML format to another.
> > >
> >
> > I had a look at some of your previous discussions concerning the export /
> > import features.
> > If I understood properly:
> >
> > XWikiParser => transforms XWiki-format text into a DOM representation
> > MediaWikiRenderer => renders MediaWiki-format text from a DOM representation
> > MediaWikiParser => transforms MediaWiki-format text into a DOM representation
> > XWikiRenderer => renders XWiki-format text from a DOM representation
> >
> > Using the same idea it is possible to do the same thing for any other wiki
> > if we are aware of that wiki's syntax.
> >
> > Within WikiModel I found some references to
> >     org.wikimodel.wem.xwiki.XWikiParser
> >     org.wikimodel.wem.mediawiki.MediaWikiParser
> >     org.wikimodel.wem.jspwiki.JspWikiParser
> >     org.wikimodel.wem.creole.CreoleWikiParser
> >
> > Where can I find the source code for these elements?
>
>
> http://code.google.com/p/wikimodel/source/browse/#svn/trunk/org.wikimodel.wem/src/main/java/org/wikimodel/wem/xwiki
>

Great, I just had a look at the source code and I am delighted to see that
it is exactly what I have been doing during the last 3 months as part of the
Compiler Construction course with professor *Hanspeter Mössenböck*. I wrote
a compiler for a simplified Java-like language.
The idea is more or less the same here;
I will give more details about how I plan to do it in my proposal.
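
To make the parallel a bit more concrete, here is the kind of split I have in
mind: the source-wiki parser walks the text and fires events, and a renderer
for the target syntax turns those events back into markup. The interface and
method names below are invented for illustration only (the real WikiModel
listener defines far more events, and I still have to study its exact API):

    // Hypothetical event interface, just to illustrate the parser/renderer
    // split; not the actual WikiModel listener.
    interface WikiEventListener
    {
        void onHeader(int level, String text);
        void onParagraph(String text);
    }

    // Emits XWiki 2.0 syntax for the events produced by any source-wiki parser.
    class XWiki20Renderer implements WikiEventListener
    {
        private final StringBuilder out = new StringBuilder();

        public void onHeader(int level, String text)
        {
            // XWiki 2.0 headings look like "= Title =", "== Title ==", ...
            StringBuilder marks = new StringBuilder();
            for (int i = 0; i < level; i++) {
                marks.append('=');
            }
            out.append(marks).append(' ').append(text).append(' ')
                .append(marks).append("\n\n");
        }

        public void onParagraph(String text)
        {
            out.append(text).append("\n\n");
        }

        public String getResult()
        {
            return out.toString();
        }
    }

With this split, adding a new source wiki only means writing a parser that
fires the same events; the XWiki renderer stays untouched.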


>
>
> > There were some issues concerning incompatible syntax between wikis in the
> > discussion, especially issues concerning syntax that exists in some wikis
> > and does not exist in others (for example Confluence, which is quite
> > restrictive, and the macro problem referred to by Guillaume). Have any
> > solutions been found for this kind of issue, or should you just warn that
> > some information will be omitted?
>
>
> I think that trying to convert everything is too idealistic. In the case of
> the Office Importer, content that cannot be converted properly is stripped
> and warnings are issued for unsupported macros, for instance. I'll let Asiri
> tell you more about the Office Importer conversion behavior if needed.
>

It would be helpful for me to get some feedback from Asiri.
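
While waiting for his input, here is roughly how I imagine dealing with
unsupported macros, following the "strip and warn" idea you describe. The
macro names and the mapping are placeholders I made up, not the behaviour of
any existing component:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class MacroConversionPolicy
    {
        // Source macros we know how to map onto an XWiki equivalent
        // (example list only); anything else is stripped and reported.
        private static final Set<String> SUPPORTED =
            new HashSet<String>(Arrays.asList("code", "quote", "toc"));

        private final List<String> warnings = new ArrayList<String>();

        public String convertMacro(String name, String body)
        {
            if (SUPPORTED.contains(name)) {
                // Placeholder mapping to the XWiki 2.0 macro syntax.
                return "{{" + name + "}}" + body + "{{/" + name + "}}";
            }
            // Unsupported macro: drop it but keep a trace for the import report.
            warnings.add("Macro '" + name
                + "' could not be converted and was removed.");
            return "";
        }

        public List<String> getWarnings()
        {
            return warnings;
        }
    }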


>
> > As far as I can see there is work already done for the export/import
> > feature, so what is wrong with the existing work? There are a lot of
> > changes between the XWiki 1.0 and 2.0 syntaxes. I guess XWikiParser and
> > XWikiRenderer have been modified according to these changes?
>
>
> XWiki 1.0 syntax was using the Radeox parser while XWiki 2.0 syntax is using
> WikiModel. You would definitely be working with WikiModel a lot, improving
> WikiModel's Confluence & MediaWiki syntax parsers so that they can
> eventually issue XDOM elements.


Alright, understood.
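
To check that I understand the kind of syntax mapping involved, here is a
tiny example of the link conversion I have in mind, assuming I read both
syntaxes correctly (MediaWiki writes [[Target|Label]] while XWiki 2.0 writes
[[Label>>Target]]). In the real implementation this would of course go
through the parser events and XDOM rather than regular expressions:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LinkConverter
    {
        // Matches MediaWiki internal links: [[Target]] or [[Target|Label]].
        private static final Pattern LINK =
            Pattern.compile("\\[\\[([^\\]|]+)(?:\\|([^\\]]+))?\\]\\]");

        // Rewrites them as XWiki 2.0 links: [[Target]] or [[Label>>Target]].
        public static String convert(String wikitext)
        {
            Matcher matcher = LINK.matcher(wikitext);
            StringBuffer result = new StringBuffer();
            while (matcher.find()) {
                String target = matcher.group(1);
                String label = matcher.group(2);
                String xwikiLink = (label == null)
                    ? "[[" + target + "]]"
                    : "[[" + label + ">>" + target + "]]";
                matcher.appendReplacement(result,
                    Matcher.quoteReplacement(xwikiLink));
            }
            matcher.appendTail(result);
            return result.toString();
        }
    }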


>
>
> > To finish with my long list of questions (sorry about that, I am just
> > trying to understand the existing work), can I have a use case for
> > importing data from Confluence to XWiki? (From getting the input data file
> > to the expected result in XWiki.)
>
> You can get an input file from the Confluence instance you will have
> installed on your machine. You can also take a look at the XE 1.8 default
> XAR (available for download from XWiki.org) to see what the expected result
> looks like.


Thank you Guillaume.
Keerthan

> Guillaume
>
>
> > Many thanks again for your answer.
> >
> > Best regards,
> > Keerthan Muthurasa
> > Msc Software Engineering,
> > School of Technology,
> > Oxford Brookes University
> >
> > >
> > > >       Step 3: use the previously retrieved data to create the result
> > > > page in the destination wiki, using the specific syntax of the
> > > > destination wiki.
> > >
> > >
> > > See my answer above.
> > >
> > > > I am having a look at WikiModel, which seems to contain a parser. I am
> > > > also trying to understand Plexus.
> > > >
> > > > Many thanks for your advice.
> > >
> > > Hope this helps,
> > > Guillaume
> > >
> > >
> > > > Regards,
> > > > Keerthan Muthurasa
> > >
> > >
> > >
> > > --
> > > Guillaume Lerouge
> > > Product Manager - XWiki
> > > Skype ID : wikibc
> > > http://guillaumelerouge.com/
>
>
>
> --
> Guillaume Lerouge
> Product Manager - XWiki
> Skype ID : wikibc
> http://guillaumelerouge.com/
_______________________________________________
devs mailing list
[email protected]
http://lists.xwiki.org/mailman/listinfo/devs
