17-A: Title
17-B: Title
In other words, there is a 1:1 pairing of section titles in the teacher's guide
and the workbook.
Is there any way to tell ConTeXt to display all of the pages for a particular
section?
\externalfigure[filename.pdf][section=17A]
--Joel
Hi,
these lines can modify the space between words:
\installtolerancemethod
{horizontal}
{MySpace}
{\spaceskip2.75pt plus1.60pt minus1.10pt\relax}
\setupalign[line,block]\setfalse\raggedonelinerstate% no hz=expansion anymore?
\setuptolerance[MySpace]
But how can I adjust in general
the comma
is part of the keyword, which means you can't change the order of the
words, and even spaces
after the comma make the argument invalid.
Wolfgang
___
If your question is of interest to others as well, please ad
, when I posted a question on StackExchange.com regarding
adding support for another language (German) to the "Words"
conversion in ConTeXt (see here:
https://tex.stackexchange.com/questions/657534/how-to-add-support-for-another-language-german-to-the-words-conversion-in-cont),
@mickep
After some research, I found the recommendation in
> Forssman, de Jong: Detailtypografie (4th ed., 2008, pp. 124f.) to
> use 2/3 for German justified texts, and 3/4 to 5/5 for ragged text.
> They also recommend never hyphenating words with 5 letters; not sure
> that can be encoded.
Hi Leah,
Hans added hyp
er evolved or not.
>
> \placefigure
> [][fig:church]
> {Stephanus Church.}
> {\externalfigure[ma-cb-24][width=.4\textwidth]}
>
> \stopprettyblock
> \setupindenting[next]
> Below, we have two separate columns; but up here, for the nonce, we have but
> the one.
>
> \star
t may be
> reasonable. But there are more experienced German typographers on
> this list who can chime in.
all in the first column
\column
Words, words, words \dots
all in the second column.
And look ye here! Even more words!
\stopcolumns
\stoptext
Many regards
Uschi
-----Original Message-----
From: ntg-context, on behalf of Oliver Sieber via ntg-context
Sent: Wednesday, 19
> % !TEX useConTeXtSyncParser
>\setupsynctex[state=start, method=max]
>
> Syncing works well enough—though I can see no difference between method=min
> and method=max; both highlight only a few words and not the entire text to be
> synced, but perhaps my expectations are out of line.
When I have these lines at the top of a component file and typeset the
product file, I get a rootfile.synctex file, but syncing itself goes awry.
Nothing
l.73 \registerctxluafile{luat-cod}{}
? x
181 words of node memory still in use:
1 dir, 22 glue_spec nodes
avail lists: 2:3
No pages of output.
Transcript written on cont-en.log.
$ fmtutil --all
bash: fmtutil: command not found
On 10/10/2022 9:21 PM, Marcus Christopher Ludl via ntg-context wrote:
Hello all,
this is my first contribution to this mailing list.
Recently, when I posted a question on StackExchange.com regarding adding
support for another language (German) to the "Words" conversion in
ConTeXt (see here:
https://tex.stackexchange.com/questions/657534/how-to-add-support-for-another-language-german-to-the-words-conversion-in-cont),
man]{babel}
\begin{document}
\showhyphens{Zusammenhang}
\showhyphens{anderswo}
\showhyphens{anderswoher}
\end{document}
This shows
[] \TU/lmr/m/n/10 Zu-sam-men-hang
[] \TU/lmr/m/n/10 an-ders-wo
[] \TU/lmr/m/n/10 an-ders-wo-her
The LaTeX hyphenation points agree with the German Duden dictionary.
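For comparison, the same check can be done from ConTeXt itself. A minimal sketch, assuming \hyphenatedword (which typesets a word with its hyphenation points visible) behaves as in current MkIV/LMTX:

```tex
\mainlanguage[de]
\starttext
\hyphenatedword{Zusammenhang}\par
\hyphenatedword{anderswo}\par
\hyphenatedword{anderswoher}\par
\stoptext
```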
As non
… \replaceword should be the correct way for proper hyphenation??
st.
> On 14.09.2022 at 08:35, Max Chernoff wrote:
>
>
> Hi Steffen,
>
>> The idea is to set the hyphenation for certain words regardless of the
>> language that is used in the surrounding paragra
The idea is to set the hyphenation for certain words regardless of the language
that is used in the surrounding paragraphs.
In this example it should stay: «steff-en»
How do I set this for all non-English paragraphs (without using \hyphenation on
each language switch)?
Best,
Steffen
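The \replaceword mechanism mentioned elsewhere in this thread may be the way to go; here is a minimal, untested sketch assuming the MkIV/LMTX replacement lists (the list name myfixes is arbitrary):

```tex
% register a fixed hyphenation for one word, independent of language
\replaceword[myfixes][steffen][steff-en]
\setreplacements[myfixes]

\starttext
\mainlanguage[de]
A test paragraph in which steffen should keep its fixed hyphenation.
\stoptext
```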
> On Fri, 12 Aug 2022, Marcin Ciura via ntg-context wrote:
>
> > Dear list,
> >
> > I am typesetting [nothyphenated,flushleft] text in two columns. Some
> words
> > stick through the right margin of the left column, sometimes even
> > overlapping the rig
[NTG-context] Keep the right margin in [nothyphenated, flushleft] columns
On Fri, 12 Aug 2022, Marcin Ciura via ntg-context wrote:
Dear list,
I am typesetting [nothyphenated,flushleft] text in two columns. Some words
stick through the right margin of the left column, sometimes even
overlapping the right column. How can I make Context obey the right margin?
The MWE is below.
Here is the output PDF:
https
In my notes (and thus in my book) I have the information that
fullhz enables stretching of *glue*, and I thought that would mean
whitespace outside of words.
Please enlighten me?
Hraban
Forwarded message
Subject: Re: [NTG-context] Malfunctioning of syllabic partitioning of
words in Spanish
Date: Thu, 7 Jul 2022 08:26:49 +0200
From: Joaquín Ataz López
Organization: Universidad de Murcia
To: Max Chernoff
Thank you very much for the quick
I am writing a document in Spanish and I notice that the syllable
partitioning of words does not conform to the rules of the language. And
so, for example, the word "limitarse" is partitioned as "lim-itarse"
(the correct one is "li-mi-tar-se"), "colores"
-- Test file for the fi ligature in a word with an apostrophe
local testwithsuffix = {
    name = "test-over-suffix-boundary",
    options = {
        {
            patterns = {fi = "f|i",},
            words = [[ wolf ]],
            suffixes = [[
                in'
                in’
            ]],
        },
    },
}
would suppress ligatures at the suffix boundary?
Maybe that's not the case. If so, then it means having to define all the *f-ing
words (heh) a few times for the different suffixes (in', in’, and ing), which
seems to defeat the purpose of separating suffixes?
Help is appreciated.
> On 6 Jun 2022, at 06:37, Thangalin via ntg-context wrote:
Attached are tweaked endings for words like "wolf" to include contracted
endings, but they are being ignored. This makes for a minor inconsistency:
wolfing -- no ligature
wolfish -- no ligature
wolfin -- no ligature (incorrect spelling, though)
wolfin' -- ligature
Any idea
> message in the console as me.
>
> Excuse me: you want me to download a huge file, from an unknown author,
> that may or may not contain all sorts of things. You want me to compile
> this file on my computer. To obtain an error message that you could have
> mentioned in your first message. Why? Why don't you search the list
> archives with the key words "enlarge capacity"? I still don't understand
> what you expect.
the formatting of Lua multi-line strings messes up the source structure;
in the following MWE, the "one" is displayed after "[[":
\starttext
\startLUA
words = [[
one
two
three
]]
\stopLUA
\stoptext
I can reproduce this. As a workaround, you can insert a non-bre
(Denis recognized that in his MAPS/CGJ article on ligatur
\setupbtxlist[align={hz,hanging,tolerant}]
\setupcaption[figure][align={justified,hz,hanging}]
I think captions are not justified by default, so that might or might
not matter to you. Bibliographies are usually full of names and
academic words that do not hyphenate well, so adding tolerant relaxes
that a little bit.
Cheers, Henri
; 88 U+00336 ̶ COMBINING LONG STROKE OVERLAY
fonts > stop missing characters
My questions are as follows:
1. What does 88 mean here? 88 instances of this missing glyph? Page 88?
Or something else? Or, in other words, how might I discover where
something is missing from the original text?
2. TeX Gyre Pagella is normally a pretty handy typefa
demo = {
name= "demo",
options = {
{
patterns = {
fio = "f|io",
},
words = [[ fioot fiots ]],
},
{
patterns = {
iddle-aged, elderly, and the
boundaries are rather flexible for these. Among the young category,
English might distinguish infants, children, adolescents, young
adults. \rightarrow\enspace \in{giovani}[giovani]
\stopentry
In other words, there is an entry called 'giovani' and it begins
\startentry[title={giovani},marking={giovani},reference={giovani}]. That
correctl
Hi Heinrich,
Use the SVG I provided. R and Renjin use two different SVG generators.
Renjin uses JFreeSVG when exporting as SVG. As you pointed out, there are
no issues with R because it will export an SVG file without any double
semicolons.
In other words, try this:
\startbuffer[svg
\definedfont[Serif*default:blocklig]
> >
> > The This These are missing the `h'
> >
> > \stoptext
> I'll fix it but it's not the way to do it in lmtx where we have
>
> \startluacode
> local demo = {
> n
patterns = {
fio = "f|io",
},
words = [[ fioot fiots ]],
},
{
patterns = {
th = "t|h",
},
words = [[ this that ]],
},
-cap.luafile to get a clue. But I used to use TeX exclusively,
and I have to admit that it's so overwhelming for a newbie.
I really like the way bibtex treats words enclosed in curly braces:
they are ignored.
Or is there something less aggressive than \WORD, so that the LaTeX
trick works:
\def\NoCaseChange#1{\noexpand\NoCaseChange{\noexpand#1}}
ce Horrocks
> Hampshire, UK
>
>
>
> --
>
> Message: 4
> Date: Wed, 13 Apr 2022 07:37:58 +0530
> From: śrīrāma
> To: mailing list for ConTeXt users , A A
>
> Subject: Re: [NTG-context] Proper formatting
al].}\GetPar}
\defineitemgroup
[pitemize]
[command=\Word,numberconversion=words]
\starttext
\startpitemize[n]
\citem first item
\citem second item
\citem third item
\pitem fourth item
\stoppitemize
\stoptext
with ConTeXt control sequences.
The second option, as shown on this StackExchange post, suggests
using \loadspellchecklist. However, one of the arguments to
this command includes a text file listing - and brace yourself - *all of
the correctly spelled words*. I find this both an amusing and tragic
proposition, since I basically need to spellcheck based on *every word in
a given language.*
What options are out there for someone who would like to do serious
spellchecking using ConTeXt on a Windows platform, using Powershell
picture cg;
cg := outlinetext.d
    ("\ssbfd \headnumber[chapter]")
    (withcolor transparent(1,0.5,blue));
fill boundingcircle cg withcolor transparent(1, 0.5, lightgray);
draw cg;
\stopuseMPgraphic
Using \headnumber[..][..] any 'conversion' applied to the chapter number
(romannumerals, wor
transliteration mechanisms (both the
internal one and the 3rd-party module from Philipp Gesang)
don't process is that Љ, Њ and Џ are transliterated to Lj, Nj and Dž in
normal words that start the sentence, or in names that normally start
with a capital letter,
but in titles written in all capitals they should be transliterated to
LJ, NJ and DŽ.
So, the quick
of the danger of homonyms, and if you go for
a title like Manual for Polymathematicians, then the word is being
wrongly used. There is a word 'polymath' in English, but not
'polymathematician', unless of course you make it clear that it is
merely a play on words. But personally, I'd avoid that.
Julian
On 26/1/22 20:36, Hans Hagen via ntg-context wrote:
```
\starttexdefinition titleemph #1
\emph{\Word{#1}}
\stoptexdefinition
\starttexdefinition titlequote #1
\quotation{\Word{#1}}
\stoptexdefinition
```
If you want the behavior you've described, you can change \Word to \Words
in the lines above (in publ-imp-sbl.mkvi).
I probably will leave the words that should not be capitalized coded
in my BibTeX file as \word{of}, so they will ignore any instructions to
become capitalized.
--Joel
Thanks Hans. Something else I've learned.
Best Wishes
Keith
On Sun, 9 Jan 2022, 22:26 Hans Hagen, wrote:
> On 1/9/2022 4:53 PM, Keith McKay via ntg-context wrote:
> > Since "a picture paints a thousand words", I attach a pdf showing the
> > results of
On 1/9/2022 4:53 PM, Keith McKay via ntg-context wrote:
Since "a picture paints a thousand words", I attach a pdf showing the
results of the execution of the code.
Are these bugs or... ?
more interplay between parameters ...
draw lmt_shade [
trace
Since "a picture paints a thousand words", I attach a pdf showing the
results of the execution of the code.
Are these bugs or... ?
Best Wishes
Keith
On 06/01/2022 15:57, Keith McKay wrote:
Hi
In the code below you will see that I have created a closed path and
perform
On Fri, Jan 7, 2022 at 6:25 PM hanneder--- via ntg-context <ntg-context@ntg.nl> wrote:

Probably the situation in South Asian Studies (Indology) is peculiar.
As I indicated, there are mostly no budgets for book typesetting in
Indology, and I know of no real expert for typesetting in this field.
In other words, the authors have to do it themselves, usually in Word
etc., but some do use TeX etc. Our publication series (Indologica
Marpurgensia) is, for instance, all done with LaTeX, as are my
publications with Harrassowitz, whic
\paralleltext{\m{m}}{Masse}\space
\paralleltext{\m{·}}{mal}\space
\paralleltext{\m{c^2}}{Lichtgeschwindigkeit im Quadrat}\space
\stoptext
I guess it could be useful for short educational examples.
If I needed this, I might have tried the steps module.
For single words, the 'ruby' mechanism might work too.
this one
sible path within CTX in order to clarify what was possible
with critical editions, which means: 1) a main text with 2) several
levels of footnotes showing different versions of this text (mainly
differences or alterations of the main text, mainly in word
occurrences, especially within Medie
> a single
> verse does not even fit the screen. For
> editing and selecting the variants one has to produce a formatted pdf version.
>
> -
>
> Another disadvantage of the edmac style approach is that it expects European
> languages. Scripts are
> no more the main problem,
t; is a ligature. So in giving variants for both
words, we cannot just
separate samyag and gomaya, for then the ligature gg is not printed
correctly. We also want to
quote the correct word samyag in the apparatus (which is in roman!).
Now, to make things more
complicated the xml text should
The aCharTex generated by a shell script contains the 300 plus
> > commands with the structure
> >
> > \tfa{ \bold{variable} \par (12 - 0.13) } \blank
> >
> > The problems are that - a) the group, with the two short paras get
> > separated into different columns and som
Am typesetting a book that has 'STUDY ONE', 'STUDY TWO' etc. in place of
CHAPTER ONE, etc. I have no difficulty getting 'STUDY' with
\setuplabeltext[chapter=STUDY~], and I can achieve 'STUDY One' with the
key conversion=Words in \setuphead[chapter].
My question: how do I get ONE instead of One.
I tried conversion=WORDS (or WORD), but that is not recognised and the
output then becomes STUDY 1, which I don't want.
I also tried playing with \definest
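One direction to try is a custom conversion that wraps the words converter in \WORD. This is an untested sketch; the conversion name STUDYWORDS and the helper \MyWORDS are made up here, and it assumes \defineconversion accepts a one-argument command and that \convertnumber{words}{...} yields the spelled-out number:

```tex
% uppercase spelled-out chapter numbers: ONE, TWO, ...
\def\MyWORDS#1{\WORD{\convertnumber{words}{#1}}}
\defineconversion[STUDYWORDS][\MyWORDS]

\setuplabeltext[chapter=STUDY~]
\setuphead[chapter][conversion=STUDYWORDS]

\starttext
\startchapter[title=First] \stopchapter
\stoptext
```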
On Friday, December 31, 2021 6:25:01 PM IST Ajith R via ntg-context wrote:
> The problems are that - a) the group, with the two short paras, gets
> separated into different columns and sometimes pages. How can I prevent
> that? In other words, how do I tell ConTeXt that this should be treated
> as a cohesive unit? b) The horizontal space doesn't align across columns.
> This probably is because the different conjuncts in each of these lines
> have different heights. How can I ask ConTeXt to treat each
The issues might be due to differences in implementation. [Not entirely
sure since I am a novice]. My guess is that Harfbuzz (which is what
Xe(La)TeX uses by default) uses some heuristics to work out these
conjuncts (?!).
To answer your specific question regarding the conjuncts in the given
words, you have to use some Unicode hacking to get what you want in
ConTeXt.
In each of the follow
it among those things that google grabs - and at some point possibly
discards).
It might be best to help Google improve this font, but I don't have
time yet.
The actual problem was with the italic ligatures "a_s", "e_s", "é_s",
"i_s" and "u_s", which should occur only at the end of words and not
everywhere. Instead of removing the unwa
lookups = { 1 }
}
}
}
}
A better and more reliable approach is this:
\startluacode
local demo = {
name= "demo",
options = {
{
patterns = {
fio = "f|io",
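Pieced together from the overlapping fragments in this thread, the complete table presumably looks something like the following; the th key and the second option group are assumptions based on the earlier snippets, since every copy in the archive is truncated:

```lua
-- sketch of the full feature table (names assumed)
local demo = {
    name = "demo",
    options = {
        {
            patterns = { fio = "f|io" },
            words    = [[ fioot fiots ]],
        },
        {
            patterns = { th = "t|h" },
            words    = [[ this that ]],
        },
    },
}
```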
etallimit=10,
> etaldisplay=\btxparameter\c!etallimit,
> journalconversion=\v!normal,
> monthconversion=\v!month,
> title=yes,
> separator:names:2={\btxcomma},
> separator:names:3={\btxcomma\btxlabeltext{and}\space},
> separator:names:4={\btxspace\btxlabel
\setupbtxlist[chicagonum]
\definebtxrendering[chicagonum]
[specification=chicagonum,
sorttype=authoryear,
numbering=no]
\startsetups btx:chicagonum:list:book
\btxdoif{author}{
\btxflush{author}
\btxperiod
}
\btxdoif{title}{
{\it\Words \btxflush{
]
\setupbodyfont[dejavu]
\defineregister
[Russian]
[n=1,
command=\Words,
pagenumber=no,
language=ru,
textalternative=horizontal,
distance=0pt]
\setupregister [Russian] [2] [textstyle=bold,left={, }]
\setupregister [Russian] [3] [textstyle=italic,left={, }]
% word category meaning
\mainlanguage[es]
\enableexperiments[fonts.compact]
\setupbodyfont[dejavu] % computer-modern-unicode]
\setuphead[chapter]
[alternative=middle]
\defineregister[Russian]
\setupregister[Russian]
[expansion=yes,
balance=no,
n=2,
command=\Words,
pagenumber=no,
language=ru]
\def\Ruso[#1]%
{\begi
\unprotect
\mainlanguage[\s!es]
\enableexperiments[fonts.compact]
\setupbodyfont[computer-modern-unicode]
\setuphead[chapter]
  [\c!alternative=\v!middle]
\defineregister[Russian]
\setupregister[Russian]
  [\c!expansion=\v!yes,
   \c!balance=\v!no,
   \c!n=2,
   \c!command=\Words,
   \c!pagenumber=\v!no,
   \c!language=\s!ru]
\def\Ruso[#1]%
  {\begingroup
   \getdummyparameters[word=,category=,meaning=,#1]%
   \Russian[\dummyparameter{word}]%
     {\bold{\dummyparameter{word}}
      \italic{\dummyparameter{category
Last time I was at the ConTeXt conference, a few years back, there was talk
about "monetization" of ConTeXt. In other words, using ConTeXt to generate
revenue. I was wondering if this is still on the agenda. Perhaps we could
have a discussion on how to achieve this. I have one pos
o it.

Regards

Marcus Vinicius

On Tue, Oct 26, 2021 at 1:22 AM Wolfgang Schuster <wolfgang.schuster.li...@gmail.com> wrote:

> Marcus Vinicius Mesquita via ntg-context wrote on 26.10.2021 at 00:38:
> > Dear list,
> >
> > Is there anything in ConTeXt that