If you have any comments to make about this document, please feel free to contact me at morwen@evilmagic.org.
The Telnet protocol uses 0xff as an escape character, in both directions. To send the data byte 0xff, a client or server MUST send 0xff 0xff (IAC IAC).
If a program just wants to ignore the telnet negotiation entirely, that's easy to do:
IAC IAC                    -> data byte 255
IAC WILL/WONT/DO/DONT xxx  -> ignore
IAC SB xxx ... IAC SE      -> ignore
IAC other                  -> ignore
The telnet parser MUST be written as a state machine in a layer of its own - there's no reason why telnet sequences can't arrive in the middle of an ANSI escape sequence. Do not assume that telnet or ANSI sequences are not split across packets.
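The filtering rules above can be sketched as a byte-at-a-time state machine. This is a minimal sketch (the class and state names are mine), and because it consumes one byte at a time it handles sequences split across packets for free:

```python
IAC, SB, SE = 255, 250, 240
WILL, WONT, DO, DONT = 251, 252, 253, 254

class TelnetFilter:
    """Strips telnet sequences from a byte stream, one byte at a time."""

    def __init__(self):
        self.state = "data"

    def feed(self, chunk):
        out = bytearray()
        for b in chunk:
            if self.state == "data":
                if b == IAC:
                    self.state = "iac"
                else:
                    out.append(b)
            elif self.state == "iac":
                if b == IAC:                 # IAC IAC -> data byte 255
                    out.append(255)
                    self.state = "data"
                elif b in (WILL, WONT, DO, DONT):
                    self.state = "opt"       # one option byte follows; ignore it
                elif b == SB:
                    self.state = "sub"       # ignore everything until IAC SE
                else:
                    self.state = "data"      # IAC other -> ignore
            elif self.state == "opt":
                self.state = "data"          # swallow the option byte
            elif self.state == "sub":
                if b == IAC:
                    self.state = "sub_iac"
            elif self.state == "sub_iac":
                # IAC SE ends the subnegotiation; anything else (including
                # an escaped IAC IAC) stays inside it
                self.state = "data" if b == SE else "sub"
        return bytes(out)
```

Note that the filter carries its state across calls to feed(), which is exactly what lets a subnegotiation begin in one packet and end in another.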
The client MUST defer sending its own telnet negotiation requests until it knows the server will be able to understand them - that is, until it has received a WILL or DO request for an option other than ECHO. (Many Dikus and derivatives are broken such that if the client sends IAC WILL LINEMODE, they fail to filter out the LINEMODE byte (it being 34, and thus the printable character "), and treat it as part of the user name. If the server sends IAC sequences to a client that it doesn't know supports them, the worst that can happen is that the client displays crap on the screen.)
The character set used by the Telnet protocol is US-ASCII. Servers MAY use another character set appropriate to the language of the mud (for example, ISO-8859-1, KOI8-R) by default, but there SHOULD be an option to send only ASCII. Obviously this requirement doesn't make sense for languages where transliteration is expensive. Clients MUST be able to deal with the usual encoding for their locale. For example, a mud client aimed at western users MUST be able to deal with ISO-8859-1.
(According to the Telnet spec, only US-ASCII is allowed on the wire. However, there's no useful way of negotiating - RFC 2066's CHARSET option is hardly implemented, and ISO-2022 is useless for most character sets. Further, even binary mode is not useful, as it changes the end-of-line codes from the standard values to platform-specific ones. Therefore, BINARY mode MUST NOT be requested by servers or clients, however clients MAY accept it if offered.)
code                 | meaning
---------------------+---------------------------------------------
ESC[0m               | reset all attributes and colours to default
ESC[1m               | bold
ESC[3m               | italic (unreliable)
ESC[4m               | underline (unreliable)
ESC[22m              | bold off (unreliable)
ESC[23m              | italic off (unreliable)
ESC[24m              | underline off (unreliable)
ESC[30m ... ESC[37m  | black fg ... white fg
ESC[39m              | default fg (unreliable)
ESC[40m ... ESC[47m  | black bg ... white bg
ESC[49m              | default bg (unreliable)
The client MUST default to white-on-black or some other light-on-dark colour scheme where neither the background nor the foreground is one of the 7 other standard colours. If it doesn't, then the server has no way of using colour safely. The client MAY allow the user to customise their colour scheme to be black-on-white or however they like.
ESC[m is defined to be an alias for ESC[0m, but quite a lot of mud clients don't know this. Many even have problems with parsing sequences like ESC[0;1;3;32;45m. These MUST be supported by clients, but SHOULD NOT be sent by servers.
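Parsing these combined sequences only takes splitting the body on ';' and treating an empty parameter list as [0]. A minimal sketch (the function name is mine):

```python
def sgr_params(seq):
    """Split an SGR sequence like ESC[0;1;3;32;45m into its parameter
    list.  An empty body means [0], since ESC[m is an alias for ESC[0m;
    an empty parameter between semicolons also defaults to 0."""
    assert seq.startswith("\x1b[") and seq.endswith("m")
    body = seq[2:-1]
    return [int(p) if p else 0 for p in body.split(";")] if body else [0]
```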
XFree86 xterm (when compiled with support for it) has a very useful 256-colour extension, including a 6x6x6 colour cube. To set the fg, send ESC[38;5;nm; for the bg, ESC[48;5;nm. A mud server need not worry about programming the palette; the mud client should do any reprogramming of the palette to the standard one if possible. Colours 0..15 are the regular colours, once in normal form, once in bright form. 16..231 are the 6x6x6 colour cube; to get the offset within the cube, compute (red * 36) + (green * 6) + blue, with each component in the range 0..5. Finally, 232..255 is a greyscale ramp: 232 is very dark grey, 255 is very light grey. See xterm's 256colres.h for the exact RGB values.
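The index arithmetic is easy to get wrong (the 16 offset for the cube in particular), so here is the computation spelled out; the helper names are mine:

```python
def cube_index(red, green, blue):
    """Index of a colour in the 6x6x6 cube; each component is 0..5.
    The cube occupies indices 16..231, so add 16 to the offset."""
    return 16 + (red * 36) + (green * 6) + blue

def fg256(n):
    """Escape sequence to set foreground colour n (0..255)."""
    return "\x1b[38;5;%dm" % n

def bg256(n):
    """Escape sequence to set background colour n (0..255)."""
    return "\x1b[48;5;%dm" % n
```

For example, fg256(cube_index(5, 0, 0)) selects the brightest pure red in the cube.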
Any other sequences than ESC[...m are not implemented by most mud clients and SHOULD NOT be used unless known safe through other means. This includes cursor movement.
[table lost in conversion: what the user sees at a prompt in a broken client vs a nice client when the mud controls echo]
A "turn local echo off always" MAY be provided, but the newline must still be printed even if this is turned on. Likewise, if the mud has turned echo off, you MUST NOT put anything into the buffer, as the mud supplies its own newline.
When the mud wants the client to start local echoing again, it sends "IAC WONT ECHO" - the client must respond to this with "IAC DONT ECHO".
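The client side of the ECHO negotiation therefore boils down to two fixed replies; a minimal sketch (the function name is mine):

```python
IAC, WILL, WONT, DO, DONT, ECHO = 255, 251, 252, 253, 254, 1

def echo_reply(command):
    """Client-side reply to the server's ECHO negotiation (RFC 857).
    WILL ECHO means "I (the mud) will echo; stop echoing locally";
    WONT ECHO means "start local echoing again"."""
    if command == WILL:
        return bytes([IAC, DO, ECHO])    # and turn local echo off
    if command == WONT:
        return bytes([IAC, DONT, ECHO])  # and turn local echo back on
    return b""
```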
See RFC 857.
The usual way of doing this is (a) to send IAC GA after a prompt, or (b) for the mud to send IAC WILL TELOPT_EOR, then the client to send IAC DO TELOPT_EOR, and then the mud will send IAC EOR after every prompt. (see RFC 885 for EOR). According to the tinyfugue documents, some muds use "*\b" as a prompt terminator. I have never seen this in the wild.
This is all well and good, except that some lpmuds, and possibly others, assume some quite bizarre semantics at the client end. They assume that the text up to the prompt terminator is spirited away into some other prompt buffer - and that any text they send from then on is displayed before the prompt.
sent from mud       | expected to be displayed | displayed under 'telnet'
--------------------+--------------------------+-------------------------
100/100>[IAC GA]    | 100/100>                 | 100/100>
You are hungry.\r\n | You are hungry.          | 100/100>You are hungry.
                    | 100/100>                 |
This is indeed what tinyfugue does, and a few others. If done carefully, it needn't break muds that are doing it sensibly.
The sensible way to do this, done in Abermuds and perhaps a few others, is to send "100/100>\rYou are hungry.\r\n100/100>". This has the downside that not all of the prompt gets wiped if the message is shorter than the prompt. ESC[2K (erase entire line) could be used for this if the mud knows it has a client that supports it. This does lead to a nasty race condition, though, so perhaps the magic prompt-marking approach does have merits.
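The Abermud-style approach amounts to building a fixed string around each message; a minimal sketch (the function name is mine), which deliberately shows the short-message downside:

```python
def redraw_prompt(prompt, message):
    """Abermud-style output: return the cursor to column 0, print the
    message over the old prompt, then reprint the prompt.  If the
    message is shorter than the prompt, the prompt's tail survives
    on screen."""
    return "\r" + message + "\r\n" + prompt
```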
The client SHOULD NOT send a useless generic name like "ansi" or "vt102" (or worse, "linux"). Instead it should send a name like "zmud", "mushclient" or "lyntin". This allows the authors of the mud server to make decisions based on what the mud client is and what its known capabilities are. It might be desirable to encode the version number of the client in there somehow too, but there's no precedent for doing this.
If the server doesn't understand the terminal type, it can request another one, and the client can go through a list. For example, zmud, vt102, ansi, unknown.
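Each reply in that exchange is a TERMINAL-TYPE subnegotiation (RFC 1091); the client answers each SEND with the next name in its list. A minimal sketch of building one reply (the function name is mine):

```python
IAC, SB, SE = 255, 250, 240
TTYPE, IS = 24, 0

def ttype_reply(name):
    """Client reply to IAC SB TERMINAL-TYPE SEND IAC SE (RFC 1091):
    IAC SB TERMINAL-TYPE IS <name> IAC SE.  On each repeated SEND the
    client moves on to the next name in its list, e.g.
    zmud -> vt102 -> ansi -> unknown."""
    return bytes([IAC, SB, TTYPE, IS]) + name.encode("ascii") + bytes([IAC, SE])
```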
The server sends IAC DO TELOPT_NAWS, and the client responds with IAC WILL TELOPT_NAWS. The client then immediately, and whenever the window size changes (even if it's just a case of the window staying the same size and the font size changing), sends the server the window size. The server makes no requests. I've seen some clients that agree to do NAWS and then await an IAC SB TELOPT_NAWS SEND IAC SE from the server before actually sending it out. This is wrong wrong wrong.
The format of the NAWS subnegotiation is specified in RFC 1073. To quote -
The syntax for the subnegotiation is:

    IAC SB NAWS WIDTH[1] WIDTH[0] HEIGHT[1] HEIGHT[0] IAC SE

As required by the Telnet protocol, any occurrence of 255 in the subnegotiation must be doubled to distinguish it from the IAC character (which has a value of 255).

Not doubling the 255 is a common error; it doesn't show up very often, but it can leave the telnet stack in an inconsistent state, possibly ignoring all future input. Also remember that the width and height are in network byte order (big-endian).
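Both the big-endian packing and the IAC doubling can be seen in a few lines; a minimal sketch (the function name is mine):

```python
IAC, SB, SE, NAWS = 255, 250, 240, 31

def naws_message(width, height):
    """Build IAC SB NAWS W1 W0 H1 H0 IAC SE (RFC 1073).  The sizes go
    out in network byte order (big-endian), and any 255 byte in the
    payload is doubled so it can't be mistaken for IAC."""
    payload = width.to_bytes(2, "big") + height.to_bytes(2, "big")
    payload = payload.replace(bytes([IAC]), bytes([IAC, IAC]))
    return bytes([IAC, SB, NAWS]) + payload + bytes([IAC, SE])
```

A window 255 columns wide is exactly the case where the doubling matters: the 255 in the width appears twice on the wire.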
\33]0;string\a

This could be used by a mud to display status info that isn't appropriate for a prompt, such as player name, mud name, and location within the mud.
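Building the sequence is trivial; a minimal sketch (the function name and the example title are mine):

```python
def xterm_title(text):
    """The sequence ESC ] 0 ; <text> BEL sets the xterm window and icon
    title; a mud could put the player name and location here."""
    return "\x1b]0;" + text + "\a"
```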
It's a metaspec on top of which various packages can be placed. For example, there's a module for sending files to the client for local editing, another for transferring userlists, and one for sending timezone information to the server.