Functional programming and the use of artificial neural networks are two approaches to computation that might seem unrelated at first, but can be seen as two opposite solutions to the same problem.
Functional programming
Software engineers praise purely functional software because it's easy to read: you can understand a pure function just by reading it, without worrying about the context in which it was declared or about its environment. You don't need to have three other files open to understand what a pure function `f` is doing, because you can be sure that `f` doesn't use global variables (there are no global variables in functional programming, only global constants) and that all the functions `f` depends on are pure.
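To make the contrast concrete, here is a minimal sketch in Python (the names are invented for illustration; the point is the same in any language):

```python
# A pure function: its output depends only on its arguments,
# and it touches nothing outside its own scope.
def area(width: float, height: float) -> float:
    return width * height


# An impure counterpart: it reads and mutates state that lives
# outside the function, so a call to it cannot be understood
# without knowing the history of `_scale`.
_scale = 1.0

def scaled_area(width: float, height: float) -> float:
    global _scale
    _scale += 0.1  # hidden side effect: every call changes future calls
    return width * height * _scale
```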
Besides being easy to read, purely functional programs are also easier to parallelize than their imperative counterparts, and they are easier to optimize: you can cache the output of `f(3, 4)` for all future invocations of `f` that use those exact same inputs, because `f` is deterministic and will give the same output anyway.
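This caching trick is just memoization. A sketch in Python, using the standard library's `functools.lru_cache` (the `f` below is a toy stand-in for any expensive pure function):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember the result for every distinct input pair
def f(a: int, b: int) -> int:
    print("computing...")  # prints only on a cache miss
    return a ** b

f(3, 4)  # computes and caches 81
f(3, 4)  # served from the cache; safe, because f is deterministic
```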
Lastly, a purely functional program is easy to unit-test: all of its units are pure functions, and as such they can be tested without having to set up any state before calling the function under test.
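For example, testing such a pure `f` needs no fixtures, mocks, or database; a sketch with plain `assert`s (nothing framework-specific is assumed):

```python
def f(a: int, b: int) -> int:
    return a ** b

def test_f():
    # No state to prepare and nothing to tear down:
    # just inputs and expected outputs.
    assert f(3, 4) == 81
    assert f(2, 0) == 1

test_f()
```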
Neural networks
The weights of an artificial neural network (ANN) can be seen as its state.
While functional programming is stateless, ANNs are dependent on state: here, in a certain sense, computation is state.
ML engineers often have to spend considerable effort understanding what is happening inside a neural network and why it gives one output rather than another. This is partly because ANNs are trained in a non-deterministic way, but also because ANNs depend so heavily on state: two networks with the exact same architecture but different state (different weights) can perform vastly different tasks. A three-layer feed-forward neural network can be trained to steer a small robot or to distinguish cats from dogs.
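To see how much the weights matter, here is a sketch (Python with NumPy; the weight values are random and purely illustrative) of a single tiny architecture whose behavior is decided entirely by its state:

```python
import numpy as np

def forward(x, w1, w2):
    """A tiny two-layer feed-forward network. The code is identical
    for every network of this shape; only the weights differ."""
    hidden = np.tanh(x @ w1)
    return np.tanh(hidden @ w2)

x = np.array([0.5, -1.0])

# Two different "states" for the same architecture:
rng = np.random.default_rng(0)
weights_a = (rng.normal(size=(2, 3)), rng.normal(size=(3, 1)))
weights_b = (rng.normal(size=(2, 3)), rng.normal(size=(3, 1)))

print(forward(x, *weights_a))  # one behavior...
print(forward(x, *weights_b))  # ...and, with different state, another
```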
So at least from an engineering point of view, a neural network is a black box. It is meant more to be used than to be understood. (Of course the engineer's point of view is different from the researcher's point of view—as much as the intentions of a Haskell programmer are different from those of a person who *maintains* Haskell and its compilers.)
We were saying that neural networks are black boxes. Engineers praise these black boxes because they basically write algorithms for you: if you want to calculate something for which you have tons of examples but no clear algorithm in mind, then just give these examples to a suitable ANN and store its state somewhere. This state (the weights of the network after it has been trained on the examples) is the computation needed to calculate what you want.
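A hedged sketch of that workflow (Python with NumPy; the "examples" here are just the logical OR function, and the model is a single sigmoid unit for brevity):

```python
import numpy as np

# Training examples: inputs and desired outputs (logical OR),
# standing in for the "tons of examples" you would have in practice.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The state: weights and a bias, adjusted by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(10_000):
    pred = sigmoid(X @ w + b)
    err = pred - y                    # cross-entropy gradient w.r.t. the pre-activation
    w -= (X.T @ err) / len(X)
    b -= err.mean()

# The learned state *is* the computation: persist it, and you have
# stored the "algorithm" without ever writing it by hand.
np.save("or_weights.npy", np.append(w, b))
print(np.round(sigmoid(X @ w + b)))   # ≈ [0. 1. 1. 1.]
```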
So ANNs are the opposite of functional programming and yet they solve the same problem, the problem of computation: calculating an output, given some inputs.
Read-write
I would be tempted to say that an ANN is easier to write than to read, while a purely functional programming language is easier to read than to write. I'm not sure about this last statement (which is also not measurable anyway), but I'm gonna leave it here with the hope that someone can give me their thoughts on this.
Summary
To sum it up:
- The problem of calculating an output given some inputs is known as the problem of computation.
- The problem of computation can be solved with different models of computation.
- Modern computer programs are better represented by purely functional programs than by ANNs. Functional programming prescribes first coming up with a set of stateless computations, then applying the inputs to those computations to get an output.
- On the other hand, the approach of artificial neural networks suggests first applying the inputs to a set of stateful computations, then adjusting the state of the network so that its behavior comes closer to a certain set of examples.
- There is some kind of symmetry in these two approaches.