diff --git a/README.markdown b/README.markdown
index 98c2ca3..b0a4732 100644
--- a/README.markdown
+++ b/README.markdown
@@ -26,7 +26,7 @@
 expected to be a certain, predicted thing, and if it's not that thing, the
 implementation is probably considered incorrect.
 
-So why not write those examples in a format that can be tested?
+So why not write those examples in a format that can be run and tested?
 
 You could write a bunch of standalone test sources, and store the output you
 expect from them in a bunch of other files, and write a shell script that runs
@@ -37,27 +37,29 @@
 comment syntax of your programming language (if your programming language
 supports comments) and is also detached from all the other test descriptions.
 
-You could write doctests, but if your language isn't implemented in Python
-it's awkward, and there can be awkward quoting issues with how you embed your
-test sources inside that big Python string.
-
 You could write unit tests in the unit test framework of your choice, but
 if your programming language has more than one implementation one day (and
 you should really consider that possibility) then you might not be able to
 re-use it so easily for other implementations in other languages.
 
+In a language like Python, you could write doctests, but those also tie your
+tests to one implementation of your language.  And there can be awkward
+quoting issues with how you embed your test sources inside the docstrings
+that make up your doctests.
+
 Or... you could write a Markdown document with beautiful yet precise prose
 describing your wonderful language, alternating with example code (in the
 form of embedded Falderal tests) clarifying each of the points you are
-making; then you could use a Falderal-speaking tool to run each of these tests
-against any implementation of your language which exists or will exist in
-the future.
+making; then you could use a Falderal-comprehending tool to run each of these
+tests against any implementation of your language which exists or will exist
+in the future.
 
 *And* you could write this document *before* you even start implementing
 your language; then when it is all clear "on paper", you have a target at
 which you can aim while writing your language.  As you implement more and more
-of it, more and more tests in your test suite will pass.  This is the essence
-of Test-Driven Language Design (TDLD).
+of it, more and more tests in your test suite will pass.  This is simply the
+idea behind Test-Driven Development (TDD) applied to language design, which we
+will call Test-Driven Language Design (TDLD).
 
 Features of the Format
 ----------------------
@@ -69,6 +71,9 @@
 *   Run tests from one or more documents.
 *   Report the results, with some given level of detail.
 
+There is, of course, a reference implementation which does both of these
+things.  It is called py-falderal and it is written in Python 2.7.
+
 Each Falderal test is for some abstract _functionality_, and each
 functionality may have multiple concrete _implementations_.  Thus the same
 tests can be run multiple times, once for each implementation of the
@@ -76,9 +81,9 @@
 
 Directives in the Falderal document may assign functionalities to tests,
 and may define implementations for given functionalities.  Implementations
-may be defined outside of any document, as well.  Falderal defines one kind
-of implementation, implementation by Bourne shell command, but is not
-inherently restricted from supporting other kinds of implementations.
+may be defined outside of any document, as well.  Falderal defines one
+general kind of implementation, implementation by Bourne shell command, but
+nothing inherently prevents it from supporting other kinds.
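+
+For example, a document might use directives to bind a functionality to a
+Bourne shell command, and to assign that functionality to the tests that
+follow.  (The functionality name and command below are invented purely for
+illustration; see the format specification for the exact directive syntax.)
+
+    -> Functionality "Reverse each line" is implemented by shell command "rev %(test-body-file)"
+
+    -> Tests for functionality "Reverse each line"
+
+    | hello
+    = olleh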
 
 Inherent Limitations
 --------------------
@@ -99,7 +104,8 @@
 This distribution contains:
 
 *   `doc` — contains documents about Falderal.  For the specification of
-    the file format, see `doc/Falderal_Literate_Test_Format.markdown`.
+    the file format, see
+    [`doc/Falderal_Literate_Test_Format.markdown`](doc/Falderal_Literate_Test_Format.markdown).
     (Note that this specification should not be expected to remain stable
     through the 0.x version series.)  There are other documents in there too.
 *   `bin/falderal` — the reference implementation of Falderal, written in
@@ -107,6 +113,7 @@
     sources in `src/falderal`.  You don't need to install it; just add
     the `bin` directory of this distribution to your `$PATH`.  This
     implementation is (somewhat) documented in `doc/py-falderal.markdown`.
+*   `script` — miscellaneous small tools intended to be used in tests.
 *   `src` — source code for py-falderal.
 *   `tests` — a set of tests for Falderal itself.  (Note that these are not
     written in Falderal, as that would just be too confusing.)
@@ -128,18 +135,24 @@
 Projects using Falderal
 -----------------------
 
-(NOTE Actually, I'm sure this information can be extracted from Chrysoberyl
-somehow, so in the future, just link to that here.)
-
-Exanoke, Flobnar, Hev, Iphigeneia, Madison, Pail, Pixley, PL-{GOTO}.NET, Robin,
-Quylthulg, Velo, and Xoomonk.
+*   [Exanoke](http://catseye.tc/node/Exanoke)
+*   [Flobnar](http://catseye.tc/node/Flobnar)
+*   [Hev](http://catseye.tc/node/Hev)
+*   [Iphigeneia](http://catseye.tc/node/Iphigeneia)
+*   [Madison](http://catseye.tc/node/Madison)
+*   [Pail](http://catseye.tc/node/Pail)
+*   [Pixley](http://catseye.tc/node/Pixley)
+*   [PL-{GOTO}.NET](http://catseye.tc/node/PL-{GOTO}.NET)
+*   [Quylthulg](http://catseye.tc/node/Quylthulg)
+*   [Robin](http://catseye.tc/node/Robin)
+*   [Velo](http://catseye.tc/node/Velo)
+*   [Xoomonk](http://catseye.tc/node/Xoomonk)
+*   [Yolk](http://catseye.tc/node/Yolk)
 
 Xoomonk, Madison, Velo, and Exanoke are good examples of how a literate
 test suite can be useful in both describing a programming language through
 examples and testing that an implementation of the language does not violate
-the language specification.
-
-Xoomonk, Madison, Velo, and Exanoke are, in fact, exercises in Test-Driven
+the language specification.  They are, in fact, exercises in Test-Driven
 Language Design (TDLD), where the tests were written as part of designing the
 language, before any attempt at implementation; the others are more like
 traditional test suites, written after-the-fact.