Warning: this post may contain as much rhetoric as it does rationale. Apply intelligence.
1) Algorithms cannot operate without human expertise
We’ve already suggested that the No Free Lunch theorem does not impose a practical limitation on what learning/optimization algorithms can achieve, and that in the space of interesting problems there is enormous scope for improvement yet. One argument we didn’t completely dispense with in the process is that algorithms nonetheless require guidance from a human expert in order to perform well on domain-specific tasks, i.e. even within the already-restricted domain of interesting problems.
Fortunately for those of us enamored with the challenge of algorithm development, the argument is so embarrassingly self-defeating that we scarcely ought to have to give it the time of day. But let’s give it 30 seconds…
The key observation to be made here is that the human expert in question is, of course, a very nice example of a fairly general purpose problem solving machine. And at the risk of belittling said expert further, evolution – which devised the biological brain and imbued it with much of its hard-wired domain expertise – can itself be considered a general purpose optimization algorithm.
Of course then, what the opponents of general purpose algorithms are really trying to say is…
“Hey guys, meta learning (using an algorithm to improve some other algorithm, or indeed itself) and ensembles (having individual algorithms work together to solve problems which none can solve alone) seem to be really cool lines of research”.
They just stuffed up the words is all. (Incidentally, the result of following both of these suggestions would be an algorithm, nothing more, nothing less).
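To make those two ideas a little more concrete, here is a minimal sketch, with everything in it (the toy task, the three weak classifiers, the accuracy-based weighting) invented purely for illustration: an ensemble of base classifiers votes on a problem none solves well alone, and a tiny "meta" step uses one algorithm (accuracy measurement) to configure another (the ensemble's voting weights).

```python
import random

# Toy task (invented for illustration): label a point (x, y) in the
# square [-1, 1]^2 by whether it lies above the line y = x.
random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [((x, y), int(y > x)) for x, y in points]
train, valid = data[:150], data[150:]

# Three deliberately weak base classifiers, each seeing the problem
# from a different, imperfect angle.
def clf_a(p): return int(p[1] > 0)            # looks only at y
def clf_b(p): return int(p[0] < 0)            # looks only at x
def clf_c(p): return int(p[1] - p[0] > 0.2)   # right idea, biased threshold

base = [clf_a, clf_b, clf_c]

def accuracy(clf, rows):
    return sum(clf(p) == label for p, label in rows) / len(rows)

# Meta step in miniature: measured training accuracy of each base
# algorithm sets its weight in the ensemble's vote.
weights = [accuracy(c, train) for c in base]

# Ensemble: a weighted vote over the base classifiers.
def ensemble(p):
    score = sum(w * (1 if c(p) else -1) for w, c in zip(weights, base))
    return int(score > 0)
```

Nothing here is tuned, and on held-out points the weighted vote merely tends to hold its own against its stronger member while fixing up its weaker ones; it is a sketch of the shape of the idea, not of any particular method. (And note that the result of combining the two suggestions is, as claimed, just another algorithm.)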
Given this fresh new interpretation of the argument, Algorithm1 would like to be one of the first to say…
“Thanks – those are some really neat ideas!” … “Say, have you thought about getting involved in algorithm research? I can recommend a nice blog.”
Marginally coherent banter aside, if we actually look at the (admittedly, presently anecdotal) evidence, human “guidance” would seem as likely to lead a good algorithm astray as to inform it. Article 1 m’lud: this video presentation by Kaggle President and Lead Scientist Jeremy Howard.
2) Human intelligence is the benchmark against which all algorithms will be measured
Speaking of evolution and the biological brains it has spawned: with a few exceptions, evolution has not given animals wheels, for at least one very good reason — it is very difficult to build a biological wheel; the requisite cabling and protective membranes would get in a terrible tangle. In human-devised locomotion, on the other hand, the wheel is considered essential study for nursery-school-level engineering students.
And so it is with the algorithms spawned by human intelligence.
Although the human brain (and the biological brain more generally) is an eminently impressive quite-general-purpose problem solving machine, it was devised by trial and error under somewhat arbitrary restrictions to solve a specific family of interesting problems conducive to survival and procreation (er, somehow giving us the wheel).
Now consider that the basis of most chess-playing computers – a collection of incredibly fast serial searches of the game space – is nothing like what goes on in the human brain. Serial processing is just not something the hugely parallel human brain is configured to do. The former is certainly not the better design for every problem – and in many settings both approaches may even lead to the same abstract solutions – but it is interesting that for doing arithmetic, deductive reasoning, and playing chess, humble twentieth-century silicon rules supreme.
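That exhaustive serial search can be sketched in a few lines. Chess itself is far too big for a blog snippet, so here, as a stand-in of my own choosing, is the same idea applied to a toy subtraction game: a pile of n stones, each player removes 1–3 per turn, and whoever takes the last stone wins. The program simply tries every line of play to the end — precisely the kind of thing the brain does not do.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    """True if the player to move can force a win with n stones left."""
    # Serially try every legal move; the position is winning if any
    # move leaves the opponent in a losing position.
    return any(not wins(n - take) for take in (1, 2, 3) if take <= n)

def best_move(n):
    """A winning move if one exists, else any legal move."""
    for take in (1, 2, 3):
        if take <= n and not wins(n - take):
            return take
    return 1  # losing position: every move is equally hopeless

print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
```

The search rediscovers, by brute enumeration, what game theory tells us directly (multiples of 4 are lost for the player to move) — a dumb, serial, utterly un-brain-like procedure that is nonetheless perfect on this family of problems, which is rather the point of the paragraph above.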
And just considering this dichotomy suggests potential approaches that are even more sophisticated than either (it helps if your career depends upon it).
In light of this, by the way, the earlier suggestion that algorithms will always require human guidance to work optimally seems a bit like the suggestion that no engine can function optimally without an expert mechanic to hand (marginally true), or that no plant can flourish without a gardener to tend it (untrue). Whether our algorithms require human chaperones to keep them doing what we would like them to be doing is another question entirely (and one probably warranting investigation).
3) There is an upper bound on the capabilities of algorithms, and we are close to it
We have already proposed that human intelligence is of limited use in predicting the capabilities and behaviours of “artificial intelligence”. It’s probably no less controversial to propose that the various hardware platforms used for the latter (traditional CPU, distributed CPU, GPU, quantum, biological, neuron-on-a-chip…), the organizing paradigms of the algorithms that run upon them (connectionist, geometric, decision-tree, heterogeneous ensemble…), and the formalisms that they may employ (bayesian, information theoretic, heuristic…) are as different from one another as silicon is from carbon (pedants may observe that these elements are in fact rather similar, but then so are CPUs and quantum computers when compared to ice-cream).
Some have considered what the optimal general-purpose problem solving machine might look like. It is worth pointing out 1) that they haven’t actually built it yet (something to do with something being “uncomputable”), and 2) that, having cunningly sidestepped the No Free Lunch theorem in the process, the scope for creativity in any specific approximate implementation (and, inescapably, in its precise behaviour) remains vast.
In short, everything is up for grabs!