# Programming Language Feature Desiderata
Draft, very draft
This is a list of programming language features that I like (sometimes contrasted with features I don't like) with the hopeful goal of informing a future language design.
For the sake of background: almost all of the code I write in my day job is database- or network-bound; in my industry it's more important to be able to write code quickly, and to understand the code that's been written, than it is for the code itself to be fast. It's also good for the code to be correct -- but requirements change frequently, so the definition of "correct" does too.
## Referentially transparent (a.k.a. "purely functional")
Because referentially transparent code is vastly easier to reason about.
A function is referentially transparent if the only effect it has is to produce its return value, and the only things it is affected by are its direct inputs. It is the code equivalent of a mathematical function; its output is literally a function of its inputs.
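A small Haskell illustration (the names here are my own):

```haskell
import Data.IORef

-- Referentially transparent: the result is a function of the inputs alone.
area :: Double -> Double -> Double
area w h = w * h

-- Not referentially transparent: the result depends on hidden mutable
-- state, so two calls with the same argument can return different values.
-- (Haskell fences this off inside IO; many languages don't.)
nextId :: IORef Int -> IO Int
nextId ref = do
  n <- readIORef ref
  writeIORef ref (n + 1)
  return n
```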
Comparatively few functional languages actually enforce referential transparency: Haskell is the main one, followed by a handful of more obscure ones (Curry, Futhark, ...).
More usual is for a functional language to allow mutation of shared state, but discourage it to one degree or another (Scheme, ML, Racket, Clojure).
Sometimes this discouragement is so toothless it makes it hard to call the language "functional"; "functional-imperative" would seem to be closer to the truth (Lisp, Julia, F#).
The usual rationale for including shared mutable values in a functional language is that "occasionally you just need them"; the implication is that, in some instances, you just won't be able to achieve acceptable time or space performance without them.
I don't really buy this argument; if I'm pushing the envelope, I'm pushing the envelope, language be damned -- I'll write it in C++ with inline assembly if I have to, to meet the performance requirement. And if I'm not pushing the envelope, I want code that's not awful to reason about.
I don't think it's enough to simply discourage mutation of shared state. Discouragement is only a social measure, a best-practices thing; if you add a feature, you have to assume that someone, somewhere, will use it, and that they'll use it unwisely, to boot. You can only hope that it doesn't get buried somewhere, built on top of, and made virtually impossible to remove.
Forbidding it entirely is not the only alternative, but it is the simplest alternative.
But note that if you forbid mutation of shared state, you probably want to also enforce proper tail-recursion, a la Scheme, for the sake of space efficiency.
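For example, here's a sketch of the accumulator-passing style that proper tail calls make space-safe, written in Haskell (the bang pattern keeps the accumulator from building up thunks):

```haskell
{-# LANGUAGE BangPatterns #-}

-- With proper tail calls, the accumulator version runs in constant
-- stack space; without them, deep recursion would exhaust the stack.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)
```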
But also note that forbidding side-effects doesn't necessarily mean forbidding destructive updates; destructive updates that aren't shared don't violate referential transparency (see e.g. Haskell's Control.Monad.ST, or uniqueness types in Clean or Futhark). But I'm not sure if I would actually want to allow that in the language. While it would appeal to programmers who (for whatever reason) want to approach a problem imperatively and iteratively, it would also complicate writing program analyzers, by adding more language constructs (AST node types) that have to be handled.
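For reference, here's the classic Control.Monad.ST example of a local destructive update that stays referentially transparent:

```haskell
import Control.Monad.ST
import Data.STRef

-- A destructive update that never escapes: runST guarantees that the
-- mutable reference stays local, so the function as a whole remains
-- referentially transparent.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
```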
An alternative to outright forbidding mutation of shared state would be to allow it in some restricted, controlled manner which does not make the program significantly more difficult to reason about; but this is certainly more difficult to do, as well. This option is too complex for this list; it'd be more of a research project.
## Homoiconic
Because homoiconicity is a conceptually coherent way to construct in-language static analyzers (and other "hygienic macros" -- but see below).
You could say that homoiconicity isn't a very well-defined concept, and I wouldn't argue with you on that; so allow me to break down what I mean:
- The language exposes a "quote" operator that takes some chunk of program syntax and evaluates it as a data structure instead of evaluating it with the in-language rules for that syntax.
- The language lets you manipulate that data structure by some means.
- The language exposes an "eval" operator that takes that data structure and interprets it using the in-language rules for the syntax it represents.
Note that this does not mean the language must have a disturbingly regular syntax (like Scheme); Julia's homoiconicity fits this description too.
Note also that that last "interpret" is not to be taken literally: if the implementation can determine that the data passed to "eval" is known statically, it can certainly compile it (or whatever) just as if it were part of the original program. (You could think of this as a kind of constant folding.)
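Haskell isn't homoiconic, but a sketch with an explicit data type can stand in for the idea: "quote" would produce a value like the one below, and "eval" would interpret it. All the names here are invented for illustration.

```haskell
-- "Program syntax as a data structure": what "quote" would hand you.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show)

-- A quoted expression -- quote(1 + 2 * 3), say.
quoted :: Expr
quoted = Add (Lit 1) (Mul (Lit 2) (Lit 3))

-- We can manipulate it as ordinary data...
swapAdd :: Expr -> Expr
swapAdd (Add a b) = Add b a
swapAdd e         = e

-- ...and "eval" interprets it using the language's own rules.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b
```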
Given the above homoiconicity features, you can build macros. And since the language is referentially transparent, they can't leave bindings in the environment, so they're hygienic. (I think? I need to brush up on exactly what they mean when they say "hygienic".)
However, I should note that macros themselves are not something I'm particularly keen on. In fact, overuse of macros is a bad thing, and it would be great if some measures could be included to counter that. But those could only be on a social level, e.g. in an official style guide.
(Unless, perhaps, "eval" can only take data structures you haven't changed. This would allow static analyzers, but rule out actual macros. Not sure what I think of this possibility yet.)
## Latently typed (a.k.a. "dynamically typed")
Because, while static analysis is a good thing, tying a language to a single static type system is restrictive.
For any given language, there may be multiple static type systems that make sense for that language.
Latent typing keeps some type information around at runtime. But even implementations of statically typed languages have to keep some type information around at runtime (to handle "discriminated unions", a.k.a. "sum types").
Given homoiconicity (above), static checkers can be written as macros. You could import a set of macros to enforce one static type system, swap them out for a different static type system, or forego static typing entirely, at your option.
When checking types (or other properties) statically with macros, flagging violations as errors early on and refusing to proceed is straightforward; but the case where the checking succeeds, and you want to communicate that information to the compiler (or whatever) for optimization purposes, is less straightforward: it's not clear how to collect, store, and communicate that information. This is less of a concern for me, and again, more of a research-project-level thing.
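To make the first, easier case concrete: a macro-based checker might boil down to a function like this sketch, run over the quoted program, which either rejects it with an error or reports its type. The AST and type system here are invented for illustration.

```haskell
data Expr
  = LitI Int
  | LitB Bool
  | Add Expr Expr
  | If Expr Expr Expr
  deriving (Show)

data Ty = TInt | TBool deriving (Eq, Show)

-- Either reject the quoted program with an error, or report its type.
check :: Expr -> Either String Ty
check (LitI _)   = Right TInt
check (LitB _)   = Right TBool
check (Add a b)  = do
  ta <- check a
  tb <- check b
  if (ta, tb) == (TInt, TInt)
    then Right TInt
    else Left "Add expects two Ints"
check (If c t e) = do
  tc <- check c
  tt <- check t
  te <- check e
  if tc == TBool && tt == te
    then Right tt
    else Left "If expects a Bool condition and branches of the same type"
```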
## (Ideally) Evaluation Order-Agnostic
Because programmers should have better things to think about than evaluation order.
(For example, execution order, which is a different thing.)
In the absence of side-effects, if a function always terminates, then (computationally speaking) it doesn't matter what order it was evaluated in -- you'll always get the same answer. (But perhaps with different space/time complexity.)
But it's probably not practical to insist that all functions are written in a way where it's determinable that they always terminate.
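A small Haskell example of why termination is the crux: under lazy evaluation this program prints 1, while an eager evaluator would get stuck evaluating the unused argument.

```haskell
-- A value whose evaluation never terminates.
diverges :: Int
diverges = diverges

-- Ignores its argument entirely.
constOne :: Int -> Int
constOne _ = 1

-- Lazy evaluation never touches `diverges`, so this prints 1.
-- An eager evaluator would evaluate the argument first and hang.
main :: IO ()
main = print (constOne diverges)
```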
We have a few options, though:
- assume functions always terminate ("fast-and-loose reasoning")
- if a function can be shown to terminate, then the implementation is allowed to pick an evaluation strategy; otherwise, a particular fixed evaluation order is assumed.
For the fixed evaluation order I would lean towards eager evaluation: while lazy evaluation lends itself to some useful tricks, and is perhaps "mathematically nicer" in some ways, the number of times lazy evaluation has helped me say what I wanted to say is small, while the number of times it has bitten me is large.
Either way, when the implementation does conclude (or assume) the function always terminates, we allow the implementation to choose the evaluation order it deems best, for that function.
I actually think this is far more complex than it's worth. However, I still think it's valuable as an abstract ideal: the programmer really shouldn't be thinking too hard about evaluation order, and if they find themselves thinking hard about evaluation order, something's wrong.
Often when I'm writing Haskell code, I don't think about evaluation order. I'm not even really aware that it's lazy evaluation; and if I come across a case where I do have to change gears and think "OK, this part should be strict now, for $reasons", it annoys me.
And when I'm writing Scheme code, I don't really think about evaluation order either. All I do think about is that I want to carry temporary results in an accumulator, instead of just making a recursive call in non-tail position.
## Well-Specified
Because in order to show that a program is correct, you have to have a clear definition of the semantics of the language it's written in.
Some languages don't have a specification (Perl, Rust, Julia).
A reference implementation makes a poor specification document, because it contains all kinds of implementation details that are (we assume!) not relevant to the definition of the language proper.
Even a specification document makes a poor specification document.
The problem is that there could always be an error in the document -- so how do you know that what you're following is correct?
Coding theory tells us that one of the simplest error-correcting codes is simply to repeat your message three times, and the majority wins.
We can apply that idea here and have three specification documents.
Different languages for writing specifications have different tradeoffs. So each of these documents can be written in a different specification language. A possible mix would be:
- English (easy to read but often ambiguous)
- Haskell ("executable denotational semantics")
- Maude ("executable algebraic semantics")
One could of course use plain old denotational semantics and/or algebraic semantics, but experience shows that being able to "run" the definitions is very useful.
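As a sketch of what "executable denotational semantics" can look like: a meaning function that maps each phrase of a toy language (invented here) to a Haskell function from environments to values.

```haskell
import qualified Data.Map as M

type Env = M.Map String Int

data Exp = Num Int | Var String | Plus Exp Exp

-- The meaning of an expression is a function from environments to
-- values. (Unbound variables default to 0 in this toy semantics.)
denote :: Exp -> (Env -> Int)
denote (Num n)    = \_   -> n
denote (Var x)    = \env -> M.findWithDefault 0 x env
denote (Plus a b) = \env -> denote a env + denote b env
```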
Experience also shows that having a large body of test cases, while not sufficient as a specification on its own, is also very useful; it makes it easier to check whether (and where) an implementation deviates from the language description. And it's more useful still for these test cases to be presented as example programs in a "literate" document (see Falderal).
## Event-oriented
Because much, if not most, of the software we write nowadays is reactive, not batch-oriented.
Batch-oriented tools still exist, of course, but a lot of users will never see one.
But they are still, in some sense, the default, because of the ways our languages are structured conceptually ("when the program is started from the command-line, it executes the `main()` function, which is passed the command-line arguments", etc.).
In the language described by this list of features, there is also the following reason: we (a) don't have side-effects and (b) don't want the programmer to have to think about evaluation order.
The programmer still needs to think about execution order sometimes ("call this API, write this to the database"), but this should not be piggybacked onto evaluation order.
Even using an IO monad for this seems a bit backwards: encode a particular evaluation order so that it matches the execution order we expect? Ideally these are two separate concepts that should be kept as far apart as possible.
So, how do we address this?
Even though we do not want the programmer to have to think about order of evaluation, we do need the programmer to think about the order of events in the lifecycle of the program. (This is what I'm referring to by "execution order".)
One of the better tools for thinking about the order of events -- in particular the effect that any given ordering of events will have on the state of the system -- is a state machine.
So, we allow the programmer to write their program in the form of a state machine (or something with the general character of a state machine).
The usual way for writing an event-oriented program is to provide a set of functions where each function is an event handler.
To add "state machine" on top of this, each of these functions takes a state, and returns a new state.
Or we could have a single function which takes a state and a data structure representing the event, and returns a new state.
In fact, this is similar to the "reducer" pattern popular in JavaScript programming.
If we allow the function to return, along with the new state, a list of data structures representing "commands" (effects to enact after handling the event), we get "The Elm Architecture", which is generally a really good pattern for reactive programming of user interfaces and so forth.
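Here is the shape of that pattern as Haskell types; Event, Command, and State are invented placeholders.

```haskell
data Event   = Clicked | Tick Int
data Command = WriteLog String | CallApi String

data State = State { clicks :: Int, lastTick :: Int }

-- The whole program, from the programmer's point of view, is this one
-- referentially transparent function; a runtime feeds it events and
-- enacts the commands it returns.
step :: State -> Event -> (State, [Command])
step s Clicked  = (s { clicks = clicks s + 1 }, [WriteLog "clicked"])
step s (Tick t) = (s { lastTick = t }, [])
```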
## Accommodating arbitrary strings
Because quoting is a pain, and in the worst cases, leads to things like SQL injection.
Some languages make overtures at allowing arbitrary string literals -- Python's `"""` and Lua's `[[` -- but ultimately those literals can't contain truly arbitrary text. To do that, you need a "heredoc"-like construct. More languages should have one of those. This language should have one of those.
Along with this: identifiers should likewise be able to be arbitrary strings. A simple way to do this would be to allow a form of identifier like (for example) `$` followed by a string literal (which may be of the "heredoc" variety described above).
## Other features
There are definitely a lot of features this hasn't covered at all (what does the syntax look like? what are the basic types in this latent type system? etc.) because I've wanted to restrict my attention to the most important and overarching design choices.