On “The Thoughts of a Spiderweb”

This is an excellent article on the idea of extended cognition.  There is one area which I think needs clarification.  Joshua Sokol outlines a distinction between extended cognition, which Kevin Laland and others see as the equivalent of niche construction, on the one hand, and extended phenotype on the other.  But it seems to me that while extended cognition and extended phenotype are compatible, indeed interdependent, the notion of niche construction, with its unavoidable teleology, is something quite different.  So the disagreement is between niche construction on the one hand, and extended cognition and extended phenotype on the other.

Implicit in the article is also a suggested distinction between knowledge (conceived as “abstract” representations in the “mind”) and information, as when information is what a spider’s neuro-muscular system receives at an intersection in the web.  Thus information is categorically differentiated from knowledge, the spider’s having a total representation of the web in its brain.  But that in turn suggests the existence of an exclusive and inclusive, that is discrete, representation in a brain, as it might be in a human brain, of “breakfast” or “Roman Catholicism” or an internal combustion engine.

That does not seem to be how the brain works.  My visual cortex will produce many images for an internal combustion engine (ICE): a black-and-white illustration of a certain shape and surface detail, nearly all of which is made up of Dennett’s figment; or what I see when I lift the bonnet of our car; or, in a sequence of visual images, the throughput of a turbofan jet engine; but I don’t have a single, discrete, holistic totality of an ICE in my head, merely an interacting community of reomes.

This suggests to me that the distinction between information and knowledge is fuzzy.  And as to information exchange between us humans and our extended phenotype, it is clearly continual.  Without it, we would be merely apes with a big, expensive brain and nothing to do with it.

Symbollocks Significance: A challenge to all anthropologists

The phrase “assign symbolic significance to” demonstrates the imprisonment of academics by the academic phrase.  Some academic’s intellectual apparatus must at some point have elided three concepts: symbol [as in a ping that stands for a non-contingent concept, like fish for Jesus], mind [as in the fatuous quasi-Cartesian concept “fully modern mind”] and significance [as in, in semiotic terms, the complex referent of a sign], and without further analysis the phrase “assign symbolic significance to” became not only a fixture in academic output but a putative scientific concept, a proxy for what happened when Homo sapiens “achieved full modernity”.

However, “assign symbolic significance to” suggests that there was something, a concept, symbolic significance, already at hand which early humans, including Neanderthals, might assign to things.  This is clearly bollocks.  Just to get things straight: symbolic significance is a relationship dependent on both the sign, a fish, and the signified, Jesus.  Both components are necessary.  If a Neanderthal woman hung a pair of eagle claws round her neck, that, I guess, is assumed by academics to be the sign.  But where’s the signified?  By what process can the existence of a signified even be deduced?  When a girl today goes out on the town, does she put a necklace round her neck as an act of attribution of symbolic significance?  No, to put it politely.  Likewise, the Neanderthal girl put a couple of eagle claws round her neck because she thought they looked good and that made her feel good.  As with her modern counterpart, it is fatuous to assume that she did it in order to attribute symbolic significance to something unknown.

However, it is possible that the process to which this vacuous trope refers is one not of adornment or decoration but the act of recognition that occurred when an Australopithecus saw and picked up the Makapansgat Pebble and carried it to where it was found by archaeologists three million years later [if this purported manuport proves not to have been one at all, the following also applies to when you yourself see a face in a damp patch on a wall].  That proto-person did not have a category symbolic significance in its head which it “assigned” to this pebble.  That is patently absurd.  What it did have is a category “face” within the wider category “my conspecific”.  Face was and is a ping constantly on the alert; my digital camera has got it, which suggests how widely distributed it is across the metaverse.

The logical, direction-of-time-sensitive point is that if the Australopithecus did that (“assigned symbolic significance to” the stone), it must have had the ability to do it before it did it; otherwise it couldn’t have done it.  It already had that ability not because it had previously somehow thought out in its australopithecine brain what it would do if it ever found a stone with a human face on it [it would “assign symbolic significance” to it].  It had that ability before the action because all the components of that ability were already distributed about the zone of the metaverse of which its ideoverse was a node, ready to contribute to the act of recognition.

All animals are alert to things they do not immediately recognise, because each might be a danger.  Is that a shadow of a bush or the shadow of a lion?  We are all, all mammals, particularly alert to eyes fixed on us, eyes of a conspecific, eyes of a jaguar, eyes of a spider; eyes of a pebble, so what?  This early hominin saw this pebble and recognised in it the ping, human face, as rendered by the visual cortex and fusiform gyrus.  “But”, retorts the academic, “a pebble cannot have eyes.  All academics know that.  Therefore this early hominin must have known that too, and reasoned, ‘this pebble cannot actually have eyes, but it looks as if it has.  It is significant to me, not because I saw and still see a human face in its surface, but because in doing so, while in fact I know it is just a pebble, it in some way represents a human face, it is a symbol with “the human face” as its referent, and on this basis, the attribution of symbolic significance, and this basis alone, I am going to pick it up and take it home.’”

What actually happened, I suggest, was more along these lines.  Since by this time the extended proto-human phenotype was already emergent and diversified well beyond that of any ape [the putative first worked stone tools are of a similar date], and dependent on its symbiosis with an alert, organism-centred, emerging intellectual apparatus, we can assume that this intellectual apparatus would go through the same information exchange with the object as happened when it was searching for stones that might be good tools, worked or merely found.  It saw a pebble and recognised within its surface a human face.  It would, did, then pick it up and take it home with the rest of its found stones, as a recognised object, therefore worth further attention.

From there one can improvise; this was not a unique experience.  Over the millions of years people and animals were seen in the surfaces of rocks, and in time amplified with pigment.  The interest was shared, and huge questions arose: how did the face get into the stone?  Is the rest of the person in there somehow?  Is that all of them, and if not, in what form do they extend from the stone (how is their existence distributed?)?  There might even have been a sceptic: no, it’s just a stone, it has marks on it, a lot of stones have marks on them; if you look at it one way it looks like a human face, if you look at it the other way up it looks like a small megalith being rolled on two logs.  It’s called representation, when one thing looks like another.  Happens all the time, mate.  Tell you one thing, mate: if ever, thousands and thousands of years hence, some idiot calls it “attribution of symbolic significance”, they’ll be as half-arsed as you are.  Person inside a stone indeed!

The unreluctant fundamentalist

On page 100 of Not by genes alone (Richerson and Boyd, 2005), Richerson and Boyd say “one of Charles Darwin’s rare blunders was his conviction that the ability to imitate was a common animal adaptation”.  It is today less clear where the blunder lies.  Many species of animal seem to learn by observation.  It is possible that Richerson’s and Boyd’s assertion is based on the fact that for many species there is not a lot to learn.  Eating grass, where there is grass, is not a complex operation, and lambs and calves do not have to be disciplined into watching adults at work for hours before they take their first nibble.  With lions and wolves the case is different.

If this is an error, it is a small one.  But on page 83 Richerson and Boyd make a far more sweeping and dogmatic assertion under the heading “Cultural variants are not replicators”.  Here they present Richard Dawkins’s throwaway suggestion (he says so himself) of memes as replicators, and then quite rightly dismiss it.  But while memes might be the playground of people incapable of the kind of reductionism that science requires, Richerson and Boyd, instead of testing their assertion, accept it as a fundamental truth and build the rest of their thesis on it.  Which I think is a pity, because Not by genes alone is a great trailblazing book, and contains a huge amount of valuable explanation and reasoning.  While I don’t agree with all of it, it’s a must-read for anybody interested in cultural evolution.

On the other hand, I think the unexamined assumption that cultural variants are not replicators has been damaging to the pursuit of what is the case, and has allowed a whole slew of what, to put it politely, you could call holistic thought, often wrapped in abstruse mathematics whose basic functions are chimeras, to undermine what aspires to be accepted as science.

And so I propose, in a bid to prevent the whole concept of cultural evolution falling into extinction when the funding has wrung it dry, a thought experiment.  Let us suppose that there is a cultural variant at a certain scale, that it does replicate with fidelity, and that it is as numerous as genes.  And let us suppose that cultural evolution is indeed the process of the emergence of human culture, and this process is strictly Darwinian.

The question of what the Darwinian “factor” (certainly not memes or genes) actually is can be answered easily, without resort to metaphysics, mumbo jumbo or the summoning of strange and inexplicable forces beyond our ken.  It’s not the principle that’s difficult, but the number of deeply engrained pre-Enlightenment thought processes that have to be disrupted in order for the principle to be accepted.  If cultural evolution is Darwinian, then like all evolution the rate of change is at the lithospheric-plate level, very slow, centimetres per year.  Or maybe acceptance depends on the extinction of weak variants.  As Max Planck said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

I have already developed this thought experiment in a horribly discursive hundred and forty thousand words which my wife is fairly brutally helping me to fine down to a more acceptable hundred thousand, and I will weary you with that no more just now.

For the moment, just a few indicators.  I start with a tweet, the irreducible (well, reduced to 140 characters) nucleus of Darwinian theory:

observed phylogenetic heritability, incessant replication with fidelity, an envelope of fractional variation, selection by external factors.
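Purely as an illustration of those four clauses, here is a minimal sketch (my own, not part of the hypothesis: the “cultural variant” is reduced to a single number, and the external selective factors to closeness to an arbitrary target value; both are assumptions made only so that the loop will run):

```python
import random

# A minimal sketch of the four clauses of the tweet.  The "cultural
# variant" is reduced to a single number, and the external selective
# factors to closeness to an arbitrary target; both are illustrative
# assumptions, not claims about what the real variant is.

POP_SIZE = 200        # co-existing copies of the variant
GENERATIONS = 100
VARIATION = 0.01      # envelope of fractional variation: +/- 1% per copy

def replicate(variant):
    """Replication with fidelity, inside a small envelope of variation."""
    return variant * (1 + random.uniform(-VARIATION, VARIATION))

def fitness(variant, target=10.0):
    """Selection by external factors: here, closeness to a target value."""
    return -abs(variant - target)

# Observed phylogenetic heritability: every copy descends from one ancestor.
population = [1.0] * POP_SIZE

for _ in range(GENERATIONS):
    # Incessant replication with fidelity (plus fractional variation).
    offspring = [replicate(random.choice(population)) for _ in range(POP_SIZE)]
    # Selection by external factors: the better-fitting half persists.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:POP_SIZE // 2] * 2

print(f"mean variant after {GENERATIONS} generations: "
      f"{sum(population) / len(population):.3f}")
```

Even with selection acting every generation, the mean creeps from 1.0 towards the target at well under one per cent a generation: fidelity plus a tight envelope of variation makes change cumulative but slow, which is the point.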

The indigestible bare bones of the hypothesis are here:

that human culture, which is to say the extended human phenotype, is a subset of the mass of the universe that has evolved, in the manner described by Darwin, step by step, alongside and in obligate symbiosis with the hominin organism.

That, it suggests, is how human culture emerged, in obligate symbiosis with the hominin line from about 3.2 million years ago, step by minute step.

This would be consistent with the findings of the Discussion meeting issue ‘Major transitions in human evolution’ (Philosophical Transactions of the Royal Society B, 2016), organized and edited by Robert A. Foley, Lawrence Martin, Marta Mirazón Lahr and Chris Stringer.

The conclusion of this conference is that the evolution of the human organism and the available coincident archaeology point to a gradual, very gradual development of material cultural forms down and across a vertical and horizontal matrix of hominin species.

And what is such a supposition worth?  Well, clearly it needs detailed explication.  Most intractably opposed to it is the belief that the Darwinian model is dependent on “life” and genes.  Clearly it is not dependent on genes, of which Darwin knew nothing.  It is dependent on, his word, “factors”.  As for life, that life should be crucial to natural selection seems to me an arbitrary assumption.  The material universe is continuous, in all dimensions, and I argue, I hope at least testably, that the Darwinian model applies to anything that conforms to it, and not to anything that doesn’t.  Thermodynamics, chemistry and physics are just as germane as life to a Darwinian theory of evolution.  I have read Nick Lane as well as “On the Origin of Species”.

At the moment the extended human phenotype (in the sense in which Richard Dawkins uses the term) weighs in at around thirty trillion tonnes (Jan Zalasiewicz et al., “Scale and diversity of the physical technosphere: A geological perspective”, The Anthropocene Review, 2017).  This has increased by orders of magnitude from the few kilogrammes of flakes struck by the first australopithecine technologists maybe 3.2 million years ago.  By around 195kya at the latest, the Homo sapiens brain had reached its present size and shape, and therefore its present energy consumption.  Meanwhile, the evolution of lithic technology was proceeding at its usual smooth and unhurried rate.  How, in the final analysis, can the existence of this big brain be accounted for in energy input and output terms?  Chris Stringer has suggested “social complexity”, but social complexity at its widest might mean the totality of the relationship of human beings with each other and their collective extended phenotype, in which case it elides the explanation with the thing it tries to explain; or social complexity could just mean various gradations of immediate, physically mediated social relationships, which a) are in themselves an energy cost, not a source, and b) may not do much to distinguish us, let alone Homo sapiens circa 200,000 years ago, from wolves, killer whales or elephants.
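To put “orders of magnitude” in rough figures, here is a back-of-envelope calculation (my own illustration; the one-kilogramme starting mass and the smooth exponential growth are both crude assumptions of mine, not claims from Zalasiewicz et al.):

```latex
% Back-of-envelope: growth of the extended human phenotype's mass.
% Assumptions (mine, for illustration): starting mass ~1 kg of struck
% flakes; smooth exponential growth over 3.2 million years.
\[
  \frac{M_{\text{now}}}{M_{\text{start}}}
  \approx \frac{3\times 10^{16}\ \text{kg}}{1\ \text{kg}}
  = 3\times 10^{16}
  \quad\text{(about sixteen orders of magnitude)}
\]
\[
  r = \frac{\ln\!\left(3\times 10^{16}\right)}{3.2\times 10^{6}\ \text{yr}}
  \approx \frac{38}{3.2\times 10^{6}}\ \text{yr}^{-1}
  \approx 1.2\times 10^{-5}\ \text{yr}^{-1},
  \qquad
  t_{\text{double}} = \frac{\ln 2}{r} \approx 58{,}000\ \text{yr}
\]
```

On those assumptions the average growth rate is of the order of a thousandth of one per cent a year, a doubling time of tens of millennia: lithospheric-plate slow, which is at least consistent with the gradualism argued for above.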

Of course energy sources, that is to say fuel for respiration, can emerge in the context of social complexity, at whatever scale you choose to apply it, but this emergence will have been in evolutionary time, not rapid enough to prevent the immediate extinction that the energy consumption of the increasing brain size might incur if in competition with, say, the socially coherent groups, probably of Homo erectus, that were producing Fauresmith stone blades half a million years ago.  The evolution of all life forms suggests that if a species can successfully do what it does with less energy, the less energy-consuming form will be selected.  There is some evidence that the Homo sapiens brain has decreased slightly in size since it reached its maximum around 200kya.  It certainly hasn’t got any bigger.  This suggests that our present brain is the minimum size for doing whatever it was doing 200kya, while any further development would incur costs with inadequate nutritional return.  That has the converse, somewhat condign, implication that whatever the H. sap. brain was doing 200kya is the same kind of thing, obviously with different content, that it is doing today; not the same thing, but the same processes, the same highly complex operations; certainly more complex than being able to follow the relationships of family and friends in soap operas.

I am suggesting a fundamental re-exploration of what the Homo sapiens brain was and is doing, not in neurological terms but in out-in-the-world, material-culture terms.