ture)
I think this is great, since this allows it to be applied as allowed by
the PDF spec.
Also for the expansion of abbreviations, /E is provided for marked
sections
(https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandards/PDF32000_2008.pdf#nameddest=G13.2260327),
the same way as
Felix wrote on 16.09.2024 at 16:24:
[...]
This code shows how I am getting the default spacing between words from
itemize, description, and the spacing between the roman numeral and the
section title. I tried figuring out how to get the spacing for all of these
to be set to a single
pitem
\stopitemgroup
\startdescription{I'm not a fan of the}
default spacing
\stopdescription
\startsection[title={I'm not a fan of the}]
default spacing
\stopsection
\stopdocument
xed width TeX can stretch or shrink the space between words to
align them with the right margin.
The best solution is to disable justification and use ragged text on the
right side. An alternative is to allow bigger spaces between words, but
as you can see in the following example the gap
between me entering (just) flushleft,
normal, flushright.
Have those options become obsolete? How am I supposed to know?
Whether there is a difference or not depends on your text. When you have
a wide text block and short words, TeX can produce good results without
the need for additional
https://mailman.ntg.nl/archives/list/ntg-context@ntg.nl/thread/NZOIVLVPZHHM77Y6CD5LDDCJV3DXEPGM/
> in Hans' response of 27 Aug 1:07 a.m., it has "There are three ways"
> attributed to cont...@0vladimirgrbic.com, whereas in the email I received
> those four words (appear to) come from Hans.
>
> Is this something that someone can (and cares to) fix?
>
> Cheers.
I’m wondering if there is a short-cut means to set this up, such that any
time the document sees “cliff-dwellings” in an index, it will also add its
page numbers under “dwellings+cliff-dwelling”. In other words, is there an
easier way to add these categories than to go through the entire document
and mark them? I already have a list of all the words.
For instance, in my above
[align={inner,nothyphenated}]
> > Thanks, that works for the alignment. It does not change the hyphenation.
> > I would have expected this line breaks in the margin note:
> >
> > significant
> > incredible
> > components
> >
> > Instead, ConTeXt gene
system >
system > current input type: initial
system >
system > approximate memory: 52203104 (49 MB)
system >
system > expansion depth : min: 1, max: 100, set: 1, top: 6
system >
system > luabytecode registers : 1020
Different spaces are Unicode characters. As entities, &nnbsp; or &thinsp;
(narrow no-breaking space and thin space, respectively).
section[title=The medieval part of the church]\stopsection
\stoptext
I would then like to edit the text manually to get a more "TeXish" result,
e.g. insert small spaces as in ‘St.\,Martin’, explain unknown words in
\footnote{comment}, etc.
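The small-space edit mentioned above can be sketched in ConTeXt; this uses \thinspace, a standard command (whether the shorthand \, is available in text mode may depend on the format):

```tex
% Minimal sketch: a thin space between the abbreviation and the name
\starttext
St.\thinspace Martin
\stoptext
```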
I know you could or even should do the tex
e characters where a break is or isn't possible.
As far as I can tell from my current use, ConTeXt is not very well set up for
localization, and there are many words in lang-txt.lua that have not been
translated into Chinese. I think that as an ordinary user, it is very dangerous
to modify inform
Hong Kong),
and the expressions used (Chinese mainland, Taiwan, Hong Kong) are basically
different between the dialects of Chinese. There is no hyphenation difference
in any dialect.
Joel via ntg-context wrote on 20.05.2024 at 03:30:
I'm using mostly default ConTeXt settings, but an editor has warned I
should avoid using hyphenation at the end of lines--at least for my
particular audience.
I've found manual text that says how to disable specific words f
I'm using mostly default ConTeXt settings, but an editor has warned I should
avoid using hyphenation at the end of lines--at least for my particular
audience.
I've found manual text that says how to disable specific words from being
hyphenated.
Is there a whole-document switch to
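The question is cut off here, but for reference, one whole-document approach in ConTeXt is the nothyphenated key of \setupalign; a minimal sketch:

```tex
% Minimal sketch: disable hyphenation document-wide via the align option
\setupalign[nothyphenated]

\starttext
\input knuth
\stoptext
```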
On 5/19/2024 5:40 AM, seyal.zav...@gmail.com wrote:
Hi all,
I want to define a command that simulates hashtag behavior in social networks.
In other words, this command should put the hashtag sign at the beginning of
its argument, replace spaces with dashes, and it should also be a clickable
link and an unbreakable word.
I mean like this, for example
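A hypothetical sketch of such a command (untested; the command name and base URL are made up, and whether \cldcontext expands as needed inside the url() reference would have to be verified):

```tex
% Hypothetical sketch: \hashtag{context meeting} -> clickable, unbreakable
% "#context-meeting". The slug is built in Lua; the extra parentheses keep
% only the first return value of string.gsub.
\def\hashtag#1%
  {\dontleavehmode\hbox
     {\goto{\#\cldcontext{(string.gsub([[#1]], "%s+", "-"))}}%
        [url(https://example.org/tags/\cldcontext{(string.gsub([[#1]], "%s+", "-"))})]}}

\starttext
\hashtag{context meeting}
\stoptext
```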
other words, it says WB 1 in front of every page number from the first
workbook.
Is there any way to consolidate the page numbers by book, e.g. output instead
like this:
WHST.1.1: 11, 231, WB1: 124, 133, WB 2: 325, 345
In other words, it lists all page numbers for Workbook 1 right after WB 1:
instead
e appearing on the first
page, which has a different layout structure.

I've pasted a sanitized version of the tex file below. Any assistance
would be greatly appreciated!

Thanks,
Ross Schulman
------

[...]

Lorem Manager
\startitemize[packed]
\item \ipsum[alternative=words, n=10, inbetween=\space] \par
\item \ipsum[alternative=words, n=10, inbetween=\space] \par
\item \i
San Francisco, CA (remote)}
Senior Fellow
\startitemize[packed]
\item \ipsum[alternative=words, n=10, inbetween=\space] \par
\item \ipsum[alternative=words, n=10, inbetween=\space] \par
\item \ipsum[alternative=words, n=10, inbetween=\space] \par
\item \ipsum[alternative=words, n=10, inbet
Unfortunately, despite all that has been said, we have to realise what
words actually mean in English, and 'infamous' has a negative
connotation. So I recommend rephrasing this and perhaps the entire
paragraph so that it presents a positive perspective on ConTeXt. But if
you mean &
I used “infamous” as a funny way to say “not famous, but somewhat
known” (and yes, I know Latin and what the words really mean).
That was my understanding. Infamous=Not famous; that is, not as well
known as others. A slight play on words.
--
Joaquín Ataz López
Departamento de Derecho civil
ritique applies to JUH’s quote, though.
Hraban
___
If your question is of interest to
fined « by hand » there, instead of being retrieved from the command
\setuppapersize[B5]
issued by the user?
Actually I am using your setup for learning Japanese, and at my beginner’s
level I do separate the words I am learning with spaces. While with the
previous version of your setup the lines were breaking somewhat strangely,
now the spaces betwee
specific, giving titles that match the content inside. I could move the
section titles inside the \event macro, but it means rewriting ~200 other
macros.
In other words, how do I define a section title by defining it somewhere in
the content of the \event macro?
\starttext
\startsection[\whatistitle] %<-- would display "Neon Tetras&
it would be better to nest commands (\starttext
before \startparagraph).
> [...]
> So this is the example. What I like to do: the first paragraph should be
> written normally, but the second one should have more space between the
> words. Because of the math: is \hspace the right way? The right command?
> And using \startnarrower, \stopnarrower?
Many thanks
Uschi
nd of file on the terminal!
Here is how much of TeX's memory you used:
16 strings out of 474221
403 string characters out of 5750189
1922978 words of memory out of 500
22371 multiletter control sequences out of 15000+60
558069 words of font info for 36 fonts, out of 800 for 9000
1141
tion for how to
become an advanced user was nonexistent, and for some specific cases
scattered around. Thus, the book was born.
In other words, it's a book that teaches you how to become an advanced ConTeXt
user, certainly far beyond "A not so short introduction to ConTeXt"
On 2/2/2024 1:03 PM, Hans van der Meer via ntg-context wrote:
In the 18th-century documents I am transcribing, words are often abbreviated:
for example, voorschreeve becomes voors: In the transcription it is usual to
write this as voors[chreeve], indicating to the reader how the abbreviation
was interpreted.
The problem then arises with hyphenation
ConTeXt “wraps” the LuaMetaTeX binary, since it handles the input as
well as the output. I wasn’t sure how to show what controls what.
n now i.e. just
"Context"?
4. It's not clear whether the intersecting boxes are simply decorative or meant
to present some sort of logical structure. I'm happy for it to be decorative,
in which case fewer labels might be better.
5. Similar to the box structure comment is that
Cultivar Epithet = 'Aditya'
The problem I was running into was that the labels might have more words
than can be contained within the space allotted on one line (i.e., Cultivar
Epithet would not fit in the space on the same line), and hence my question
on how to force a \crlf
between those two
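For reference, forcing the break itself is straightforward with \crlf once the label sits in its own box; the width here is made up for illustration:

```tex
% Minimal sketch: force a line break between the two words of the label
\starttext
\framed[align=middle, width=6em]{Cultivar\crlf Epithet = 'Aditya'}
\stoptext
```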
On 12/3/2023 10:31 PM, Alexandre Christe wrote:
Dear list,
I’m facing some strange spacing with inline maths. Sometimes the space
is ok (like around the first two fractions), but sometimes it is way too
little. Did I miss configuring something?
your spacing between words is small too
> On 20 Nov 2023, at 08:54, Ursula Hermann wrote:
>
> Dear Bruce,
>
> Yes, I need also margintext.
What do you want to appear in the margin: the theorem number or the section
number of the current section?
In other words, what should replace the "hard coded" 2.1
On Thu, 16 Nov 2023 21:40:08 +0100
vm via ntg-context wrote:
> Is there a command in context to force every word in a text to
> hyphenate? e.g to typeset a text with "ge-dach-ten-streep-jes"
Like this?
\starttext
\hyphenatedword{\samplefile{knuth}}
\stoptext
Marco
produce an MWE as I have no idea what to remove and include - it
doesn't seem to matter which components are left or removed in order to fix
the problem, just as long as several are removed. In other words, it's not
one component file causing the problem. And sometimes commenting out just tw
ped section is added to the
resulting PDF document?
one has to render a document (page stream) in order to see what ends up
where .. a viewer has that info so it can decide to act upon it (apart
from some gambling with words clipped in the middle of a character)
this is not something for context
\xmlflush{#1}
\stopchapter
\stopxmlsetups

\startxmlsetups xml:p
    \xmlflush{#1}\par
\stopxmlsetups

\startluacode
    function xml.functions.getMarking(t)
        local _, n = t.dt[1]:gsub("%S+", "")
        if n > 10 then
            local words = {}
            for word in t.dt
using alternative=d, which I
confess I had entirely overlooked!
Julian
On 5/9/23 08:43, Max Chernoff wrote:
Hi Julian,
I am attempting to get a TOC that looks like the following (in other
words with section titles and their relative page numbers in a block
below the chapter):
Has
anyone put together
jbf wrote on 04.09.2023 at 08:34:
I am attempting to get a TOC that looks like the following (in other
words with section titles and their relative page numbers in a block
below the chapter):
Chapter 1 ...5
Section 1 5, Section 2 6, Section 3 7,
Section 4 8, Section 5 9
Thomas A. Schmitz wrote on 19.08.2023 at 18:10:
On 8/19/23 17:51, Wolfgang Schuster wrote:
You can use the font method to have hyphenated words back.
The culprit that prevents hyphenation is the penalty settings
which are added by ConTeXt when the default method is used.

\mainlanguage [de]
\setuppapersize [A6]
\setupquotation[method=text]

\starttext
„Originalgenie“ „Originalgenie“ „Originalgenie“
\quot
-- true when str begins with start
local function starts_with(str, start)
    return str:sub(1, #start) == start
end

-- get relevant lines
function getHyphenationLines(lines)
    local lines_with_hyphenations = {}
    for k, v in pairs(lines) do
        if starts_with(v, "hyphenated")
            and not string.find(v, "start hyphenated words")
            and not string.find(v, "stop
This is perfect, as it works also with mupdf-gl and firefox!
Thank you, Hans
Kind regards
Marcus Vinicius
On Mon, Aug 7, 2023 at 4:58 PM Hans Hagen wrote:
On 8/7/2023 8:58 PM, Marcus Vinicius Mesquita wrote:
@ Ulrike: This is what my client wants, and the client is always right.
You can try this:
\starttext
\protected\def\ProofOfConcept#1#2%
{{#1\llap{\effect[hidden]{#2
test test \ProofOfConcept{föö}{foo} test
\stoptext
but forget about
@ Ulrike: This is what my client wants, and the client is always right.
Regards
Marcus Vinicius
On Mon, Aug 7, 2023 at 2:23 PM Henning Hraban Ramm wrote:
On 07.08.23 at 14:17, Marcus Vinicius Mesquita wrote:
Thank you for the answers, Bruce, Pablo and Hraban. I was not aware of
ActualText.
On Mon, 7 Aug 2023 09:17:19 -0300, Marcus Vinicius Mesquita wrote:
> But \pdfbackendactualtext is actually just what I needed since it can
> be used also for other things like:
I don't think that it would b
Thank you for the answers, Bruce, Pablo and Hraban. I was not aware of
ActualText.
I work on a manjaro linux, and I tested the example Pablo sent on
several programs:
mupdf-gl or mupdf: fails! [mupdf-gl is what I customarily use for its
blazing speed]
firefox: fails
vivaldi: passes
okular: passes
On 8/5/2023 5:36 PM, Alex Leray wrote:
Hi all,
I'm having another issue with my project. I'm trying to typeset
fragments of HTML, including tabs and (repeating) spaces. I'd like to
have my snippet with some words in bold.
So I used `typing` together with the `escape` option
On 06.08.23 at 20:37, Pablo Rodriguez wrote:
Hans provides this jewel in back-imp-pdf.mkxl and back-pdf.mkiv (adapt it
for your needs):
\starttext
text \pdfbackendactualtext{whatever you want}{filia} text
\stoptext
In any case, the PDF viewer used to search must have ActualText impl
On 8/5/23 21:16, Marcus Vinicius Mesquita wrote:
Dear List,
I have a lot of Latin words in a document with the length of the
vowels indicated by diacritics, for example: fīlĭa.
Is it possible somehow to make these words searchable without the diacritics?
That is, if I search for filia in the final pdf file, fīlĭa
would also be found
Any help would be appreciated.
I suppose you would be disappointed if there was no tracker ...

\enabletrackers[hyphenation.applied.console]
\enabletrackers[hyphenation.applied.visualize]

you even get a file with the hyphenated words

You can see all of them with

\disabledirectives[backend.cleanup.flatten]
\bitwiseflip \normalizelinemode -\flattendiscretionariesnormalizecode
\showmakeup[discretionary]
ItalicFont=LucidaTypewriterOblique,
> > BoldFont=LucidaTypewriterBold,
> > % BoldItalicFont=LucidaTypewriterBoldOblique,
> > ]{LucidaTypewriterRegular}
> >
> > \begin{document}
> >
> > {\rm\input{knuth}}
> >
> > \textsf{\
On 5/18/2023 2:21 PM, Jairo A. del Rio wrote:
Hi, list. There's lang-rep.mkxl and lang-tra.mkxl in the distribution, which
allow replacing a list of words and applying transliteration, respectively.
Both are based on tables. I want to know if there's a way to
use a function instead, say
local function nice(str)
return str ..
> \stopformula
>
> \startformula
> \int_{\infty}
> \stopformula
>
> Sorry, but I can't typeset math unless various parameters have been set.
> This is normally done by loading special math fon
all
>
> mkiv lua stats > loaded fonts: 2 files: latinmodern-math.otf,
> minionpro-regular.otf
>
> And as you can guess, the above is not what I was looking forward to
in other words, this is wanted
mkiv lua stats > loaded fonts: 4 files: minionpro-bold.otf, minionpro-it.otf,
I have a document that needs to place some "hidden" data. In other words, there
is an entire \input file with an article, but it doesn't get rendered visibly.
However, the indexes, registers, and bibliography need to still function as if
it were displayed right there on the pag
--
"We invented a new protocol and called it Kermit, after Kermit the Frog,
star of "The Muppet Show." [3]
[3] Why? Mos
updmap-sys
and the whole remainder of kpathsea, but then again in practice this would
not even be needed at all.
indeed, context doesn't rely on fmtutil etc
In other words, only the TeX Live infrastructure is needed, which shouldn't
be a problem, right?
But it is: a problem, that is.
Besid
st/web2c' from specification 'selfautoparent:/share/texmf-dist/web2c'
resolvers | resolving | looking for fallback 'contextcnf.lua' on given
path '/home/ce/share/texmf/web2c' from specification
'selfautoparent:/share/texmf/web2c'
resolvers |
=none,
frame=none,
width=broad,
height=fit,
align=middle]{#1.\\#2}}
\setuphead[section][
conversion=Romannumerals,
style={\bf\kerncharacters[0.075]\WORDS},
align=middle,
command=\MySection,
]
###
Thanks!
On 15/02/23 at 19:11, Rik Kabel via ntg-context wrote:
On 2/4/23 07:49, Hugo Landau via ntg-context wrote:
> On this page of the wiki there is an example for wrapping long words,
> like long hexadecimal strings:
>
> https://wiki.contextgarden.net/Wrapping
>
> This example is buggy because it deletes one character at the point th
,
conversion=Romannumerals,
style={\bf\WORDS},
align=middle,
command=\MySection,
]
\starttext
\startsection[title={First section}]
this is the text
\stopsection
\stoptext
Have you tried it with \framedtext in the place of \framed ?
--
Rik
=broad,
% height=22pt,
align=middle
]{{#1.\\#2}}}
\setuphead[section][
strut=no,
conversion=Romannumerals,
style={\bf\WORDS},
align=middle,
command=\MySection,
]
\starttext
\startsection[title={First section}]
this is the te
19:06), I get:
cing
anack
I’m missing m and b in the hyphenated words.
I’m afraid that the hyphenator is all Greek to me.
Is there any reason why letters are lost in hyphenation?
I’m afraid (I think) I might have hit a bug
On 2/5/2023 12:32 PM, Pablo Rodriguez via ntg-context wrote:
On 2/4/23 10:37, Pablo Rodriguez via ntg-context wrote:
On 2/4/23 07:49, Hugo Landau via ntg-context wrote:
On this page of the wiki there is an example for wrapping long words,
like long hexadecimal strings:
https://wiki.contextgarden.net/Wrapping
This example is buggy because it deletes one character at the point that
it is wrapped.
In the example given, running context turns '257a410' in
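As an alternative to the wiki snippet, ConTeXt's \hyphenatedurl inserts breakpoints into long unspaced strings without deleting characters; the hex string below is illustrative only:

```tex
% Sketch: allow line breaks inside a long hexadecimal-like string
\starttext
\hyphenatedurl{257a410b9a3f257a410b9a3f257a410b9a3f257a410b9a3f}
\stoptext
```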
On 29.01.23 at 18:14, Joel via ntg-context wrote:
I am creating a template for a 1-page newsletter.
At the top of every page is a huge title, below which is a block with a
brief introduction to the newsletter. The body is just 200-300 words
each issue--I edit it such that all of the content