Start kterm with the appropriate command line for canna or for wnn. If your .Xdefaults is set up
correctly, pressing Ctrl-right shift should bring up a small box with
the hiragana character for 'a'. Typing something in Romaji such as "watashi"
should automatically be converted into hiragana. Pressing the space bar should
convert it into its corresponding Kanji. If you press space again, a small menu
will appear allowing you to select other Kanji or katakana with the arrow
keys. Pressing Shift-space will toggle back into ASCII mode.
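For reference, the .Xdefaults entries involved typically look something like the following. The kanji code and the trigger binding are assumptions on my part (the classic examples bind Shift-space, whereas the setup described here evidently used Ctrl-right shift):

```
! Hypothetical .Xdefaults fragment for kterm + kinput2; adjust the
! trigger key and kanji code to match your own setup.
KTerm*VT100*kanjiMode: euc
KTerm*VT100*translations: #override \
        Shift<Key>space: begin-conversion(_JAPANESE_CONVERSION)
```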
If kterm freezes instead of allowing you to type, this means either your
.kinput2rc file is pointing to the wnn jserver instead of cannaserver or vice
versa, or else your server is not running.
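A quick way to distinguish the two failure modes is to check the process list for the conversion daemons; cannaserver and jserver are the daemon names used by canna and wnn respectively:

```shell
# Check whether an input-conversion server is running.
# cannaserver = canna's daemon, jserver = wnn's daemon.
for svc in cannaserver jserver; do
    if ps ax | grep -v grep | grep -q "$svc"; then
        echo "$svc appears to be running"
    else
        echo "$svc not found in process list"
    fi
done
```

If neither daemon shows up, start the appropriate one before looking at .kinput2rc.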
With this setup, pico, vi, and cat should work. If you only want to edit
a few documents, this may be sufficient.
There are three main versions of emacs (emacs, xemacs, and mule), as well as
multiple sub-versions of each, all
compiled with different options. Thus, there is no way to know whether any
particular emacs will work in Japanese. On my system, regular emacs only
leaves a square in Japanese mode instead of displaying a character, regardless
of what input mode is set. In xemacs, which is the nicest-appearing version,
typing C-\ (i.e., Ctrl-backslash) sets the input mode (but only when xemacs
is started from a kterm with the environment variables set). Type japan? for
a list. For example, if 'japanese-egg-sj3' is entered, it should say (in
English) "Loading its kana..loading its zenkaku...done". You can then type
'watashi' as before, and it should automatically change to hiragana. However,
typing the space bar will print the message:
EGG: Network service (sj3) ga mitsukarimasen
or "sj3 network service not found".
Although this message is printed in Japanese, with Kanji characters,
this message means that emacs is not going to work in Japanese with your
setup. The other two input methods (japanese-skk and japanese-skk-auto-fill)
did not seem to work at all.
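The environment variables in question are presumably the locale and the X input-method setting. A sketch, with the caveat that the exact locale name (ja_JP.ujis vs. ja_JP.EUC) depends on your libc and is a guess here:

```shell
# Hedged example: set a Japanese EUC locale and point X clients at
# kinput2 before starting kterm or xemacs. The locale name is assumed.
LANG=ja_JP.ujis
XMODIFIERS='@im=kinput2'
export LANG XMODIFIERS
echo "$LANG $XMODIFIERS"
```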
In mule, typing C-\ should immediately put you in hiragana mode. The
characters are displayed between vertical bars called a "fence". Typing
the space bar in this case prints a more interesting error message:
Saaba to setsuzoku dekimasen deshita
which means "server connection could not", i.e., could not connect to
the server.
If it says
Kana kanji henkan saaba tsuushin dekimasen
i.e., "kana kanji conversion server communicate can't", this means
either your .kinput2rc file is not set up correctly or the server isn't
running.
Sometimes it also says,
Hosuto localhost no wnn wo kidou shimashita.
Hindo fairu "usr/root/kihon.h" ga nai yo. Tsukuru? (y or n)
"Host localhost's wnn was started. Frequency file
"usr/root/kihon ['fundamental'] .h" is not [present]. Create?"
If one types 'y' (as root), it says,
Fairu ga sakusei dekimasen
"Can't create file".
On the other hand, if one types 'n', it says,
"File doesn't exist".
Obviously, something fundamentally important with regard to wnn is happening
here. Unfortunately, I have no clue what it might be.
These are the only error messages I could get emacs to produce.
However, I am sure there must be many more.
As mentioned above, it is likely that some files in the sj3 directory
are necessary to get xemacs and mule to convert text to Kanji. Alternatively,
it might be necessary to recompile mule with --canna or --wnn options.
However, attempts to get mule-2.3 to compile were unsuccessful (numerous
function parameter mismatches, object files not being found, etc). It
appears that this program was designed for an extremely old version
of Linux.
It is tricky to get 'kinput2' to insert the correct Kanji
character, as it uses a very non-standard Romanization. For instance,
in order to type the word 'senkoo' (specialization),
it is necessary to separately type and convert 'sen' and 'kou'. To get 'chome'
you must type 'choume'. Moreover, pico frequently gets confused with
double-byte characters, making a total mess of a line. These corrupted
lines can be tricky to get rid of, as they contain control characters
such as form feed, backspace, etc.
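One way to salvage such a file is to delete the control characters with tr. The character ranges here are my choice: everything below space except tab and newline, which is safe for EUC-JP text since its double-byte characters use only bytes above 0xA0:

```shell
# Strip ASCII control characters from a corrupted file, keeping
# tab (\011) and newline (\012). Octal ranges: NUL-BS and VT-US.
tr -d '\000-\010\013-\037' < corrupted.txt > cleaned.txt
```

For example, `printf 'ab\bc\fd\n' | tr -d '\000-\010\013-\037'` removes the backspace and form feed and prints `abcd`.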
And of course, there is no easy way to print the result. Thus, there is
still a dire need for a better way of typing Asian languages in Linux.
One program, mtscript, looked promising, but doesn't work for Asian
languages (supposedly it can handle Arabic and several Western
languages simultaneously). However, the source is not available. Attempting
to run the precompiled version of mtscript resulted in the following:
$ mtscript
mtscript: can't find library 'libX11.so.6'
$ strings /etc/ld.so.cache | grep libX11.so.6
libX11.so.6
/usr/X11R6/lib/libX11.so.6
libX11.so.6
/usr/i486-linux-libc5/lib/libX11.so.6
$ ldd mtscript
not a dynamic executable
$ file mtscript
mtscript: Linux/i386 demand-paged executable (QMAGIC)
QMAGIC is an obsolete a.out executable format that doesn't run on modern
Linux systems. The latest version on their website is from 1996, so
it would appear that development of mtscript has been abandoned.
Solution: Download NJSTAR Japanese and Chinese word processors
and run them on a (gag) Windows machine.