Magdalen Tower, as seen from the Botanic Garden, Oxford, with colour editing, 2006.
DRAFT OF PART OF BOOK: please do not copy without my permission – thanks!
The Philosopher at the Gates of Dawn
1 – Overture
1.1 A DAY IN THE LIFE
1.1.1 Preamble
Well, there was I – feeling thoroughly sorry for myself – in a taxi, having checked out early and left behind the remains of the much-looked-forward-to weekend’s conference. Very early on a Sunday morning in mid-July, moreover. I knew full well that there were yet more potentially interesting papers to be listened to before close of play later that day, but I simply could not face seeing any more of my philosopher friends that morning – let alone after the final, leisurely lunch.
I should add, for those who do not know, that this conference – it takes place every year – is called the Joint Session of the Aristotelian Society and the Mind Association (to give it its full title), and is the largest and the most important of its kind in the UK. There are many other, more specialized conferences, some of which take place in the same town (different each year) either just before or just after my conference, and arguably the papers delivered there are of a higher standard. But if you want to see and be seen, then the Joint Session is a must. Strangely enough, not many professional philosophers actually bother to go to it. They are simply too exhausted, having just finished the academic year. (Examination Boards can be wearying, and tend to eat into valuable research time.)
I should qualify this. Younger would-be professionals – who tend to be desperate to get into the system (and there aren’t many decent jobs) – make more of an effort. There are reduced rates for such people, but not much else.
So, why the Angst? It is not that I dislike my colleague-friends from other universities: quite the reverse. It is possible, of course, that I envy the fact that they nearly all have a much higher rank than I do (senior lectureship, very often full professorship, and so forth) despite being more than ten years younger than me (and they are still oldies, statistically speaking – I was born in 1954), but I do not begrudge them their honours. I confess that I sometimes resent my own seeming unimportance, though I do not let other people’s judgment of me (or lack of it) affect my thoughts and feelings too much.
My own personal friends (and I have many) still like me, and declare most sincerely that they admire my work (or probably would do if they could only find time to read it). Since I don’t read much of what they write, and for the same reason, I don’t suppose that I can complain.
No, the problem concerned one particular individual who shall, for obvious reasons, remain nameless. It is not that I fear the British libel laws, ferocious though they are, or so I am told; it is just that descriptive content is often so much more evocative and to the point than a mere proper name, especially since it (the proper name) is unlikely to mean anything to most of my readers, who are unlikely to be professionals in the business (and will probably not remember names anyway, even if reminded).
He, the man in question, was not himself a professional philosopher in the standard sense (i.e., someone whose official duties consist primarily in teaching undergraduates, supervising doctoral candidates, engaging in endless and ultimately pointless administrative work, research in philosophy, occasionally publishing unread and unreadable material, and so forth), but rather a publisher: more accurately, the commissioning editor of a very well-known and prestigious university publishing house. It is not that I have anything against him personally, but I remember all too vividly the kind of purple prose (as it is sometimes called) that I had repeatedly submitted to him for publication in the past, and (when sanity returned) withdrawn from his consideration; and, in consequence, I could not bear to look him in the eye, however sympathetic and supportive he might be in reality.
In hindsight, I guess I need not have bothered to be so wary. Such people have seen it all before, and might even welcome attempts to push back the boundaries of what can be published in influential, but inevitably rather conservative outlets; but I was in a grumpy mood, as I mentioned. The previous evening, for example, whenever I saw one of my friends and wished to talk to her (if only to reminisce) I found that she was always with a group of others who invariably (and most prominently of all) included this particular gentleman. So, conversation was impossible except for the briefest exchanges of glances and remarks – and not necessarily in that order.
Still, I was getting old; and there comes a time when one must simply accept that one’s professional career is coming to a close. I had published a few interesting articles on a variety of topics (not, for me, narrow specialization), and even a full book which the gentleman I have been talking about had, after some considerable deliberation, declined to publish (though he was not really to blame for that). But I was not as well-known as my earlier very successful student career would reasonably have predicted. That hurt. Okay, at a certain time of life, with retirement inevitably approaching, one tends to look back and ask: is that really all? Was that really me? I dare say that that is right and inevitable; but, to repeat, it still hurts. A lot.
You see, I still harboured the thought that there was one final outstanding work left for me to publish, and to massive acclaim: the work that everything I had previously written had been leading up to. And yet there was also another part of me, more cynical and yet probably wiser, that knew this to be mere wishful thinking.
After all, alas, we all think that we are really special in a way that others are not: but obviously we cannot all be right here; and for elementary logical reasons. (I am a professional logician, by the way; trained at Oxford in mathematics as well as philosophy. Check me out on Facebook, if you want.)
What consolations are there? People kept on asking me what I was planning to do in my retirement, but the formulaic answers I kept giving fooled nobody, least of all myself. I am also an amateur musician (keyboards) and a quite competent songwriter, for example, and I could see no reason why I should not reinvent myself here; though it is painful to notice that most successful musicians of that kind tend to be considerably younger and more energetic than I am.
I suppose that music has always been with me, and I find that my stream of consciousness consists not only of a rushed wordy narrative, but also of an identifiable sequence of musical accompaniments. However, I also find that, although I can convey the words of my thoughts (in the obvious way), the music is less easily conveyed. Some people try to write it all down by adding parenthetical remarks here and there to the effect that a certain musical phrase is to be keyed in, but there is a limit to how far that can be done. One big problem is that musical tastes differ in ways in which the interpretation of words does not. Instructions from me (the author) to you (the reader) to think of a specific song at this moment are more likely to obfuscate than to clarify the message that I am trying to convey. So, with one notable exception (coming up shortly), I shall not tell you precisely what I heard, how I heard it, or why it should be so significant when I try to convey some of the ideas in my mind – so that they enter into your mind for you to appreciate or otherwise evaluate (as the case may be).
When you are tired and old, essential defeat can be more easily accepted than a glorious but distant victory to come; and I had exhausted all attempts to understand why the occasional advertisements for funding for Early Career Researchers and, alternatively, for Mid-Career Researchers (as we call them) were not punctuated with something positive for the Late Career Researchers, even though there logically have to be many people in that category. As the autumn of my life approached, along with the lengthy, pension-guaranteed period of endless sherry and slippers (I come from a genetically well-endowed, long-lived family), I reflected with a sadness that bordered on a kind of extreme serenity that one cannot really escape one’s destiny however much a certain conception of free will commands otherwise. So be it, I said.
Then, as I approached the city’s only railway station, I heard the song I mentioned above on the taxi’s radio. And everything just changed.
1.1.2 Sixpence none the richer
Sometimes, a change in one’s whole attitude towards life can be explained and even predicted; but sometimes one is confronted by something more akin to a religious conversion, the effect of a transcendentally caused and therefore inexplicable force. I was, on hearing what I heard, wildly dazzled to the point of extreme enlightenment, and yet was sufficiently composed and respectful of my environment that I did not exactly fall off a Damascene horse (there was, in any event, no room for one in the modestly sized taxi); and I could not even, at the time, name either the song or the band that performed it, though I had certainly heard it many times before, albeit not recently.
It was, as I later discovered, first performed in the very last years of the previous century and reflected a kind of innocence that can only be achieved by a group of musicians whose members self-identify as alternative Christians, despite being born and raised in America’s Bible Belt (as it is rather oddly called). The song’s title consisted simply of the instruction to kiss the singer, who was (and still is) a very beautiful lady – as the song’s official music video reveals most clearly. In keeping with my general unwillingness to name names, I shall not give you directly the name of the band, but merely inform you that it is a phrase taken from a fairly well-known English author who converted to Roman Catholicism whilst writing his famous (and televised) short stories about a fictional priest who was also an amateur detective of considerable talent. I need say no more; since you, dear reader, are probably reading these words online, so you can easily find out whatever you need to know by a simple internet search: which obviously includes a rendition of what this song sounds like. Listen and love.
It is, to repeat, well worth listening to, being both very memorable yet wholly inoffensive (though I have been told that the lyrics contain many double meanings that American teenagers tend to understand better than do their elders).
Anyway, I made it home (about a hundred miles or so to the south), with this tune still lodged in my head – doing its work gradually, but very, very effectively. And what happened next, I hear you ask with bated breath? Well, to begin with, not a lot, if truth be told. There were a few more literary false starts, a Covid pandemic which seemed to turn everything upside down, and the repeated feeling that my parade was always destined to be rained upon. Then, at last, came my eventual and long-postponed retirement – at the glorious, first-class age of 70! With the need to please others gone, I was finally able to get around to the most enviable task of pleasing myself. Having previously been too busy to do any work, I found that I could now fill every hour of every day with activity that would have been impossible before deciding to stop drawing my salary, and instead draw from my (actually, quite generous) pension.
1.1.3 A plethora of ideas
But what was I to write on? I felt then, as now, that not a lot of literature can actually be produced until this question receives a satisfactory answer; and yet a problem emerged. Where previously I had suffered from what is sometimes called writer’s block, I found that I now suffered from the opposite problem: a veritable cornucopia of interlinked ideas all trembling to be put on paper (I still harboured an urge to contribute to literature in the traditional format, with one page followed by another).
Yet, we live in an intellectual environment that demands increasing specialization. As the old joke about the man with earache illustrates, it may be hard to get a competent doctor to attend to the problem, since the pain is bound to be in the right ear and yet he (the doctor) is most probably a left-ear specialist. Important and interesting connections between different intellectual fields seem to get lost; and unless (like the brother of another famous English fictional detective) one’s speciality is omniscience, it seems hard to grasp them without spreading oneself so thinly that only a superficial understanding can be guaranteed.
So, what’s to do, as they say? A long book about virtually everything is the clear answer; yet solo authorship of an encyclopædia is a rather lengthy and unrewarding task – and who really wants to compete with the many authors of Wikipedia? Or perhaps a simple autobiography that includes all of the interesting content of my near-to-overflowing mind? (But without the lewd bits, added the voice of wisdom – but we shall worry about that later.)
Naturally, I consulted my soon-to-be-former colleagues in the department at which I worked, and they were all eager to give their advice. In particular, my estimable Head of Department (and line manager, as we now call such people) – who had already taken a considerable interest in my evident reluctance to retire at a normal age – expressed uncharacteristically forceful views about what I could usefully write (and not write) about. So here goes, as they say.
1.1.4 Welcome week
It was a famous Russian author, whose name I shall not mention, who observed that the meaning of life consists primarily in the day-to-day living of it. This is what the hero of his much celebrated, though extremely lengthy, novel apparently learnt from an unreflective Russian peasant of the time (i.e., the era of the Napoleonic wars). My professional life also revolves around the seasons (I use the present tense, because the UK university academic year still revolves like a law of nature, even though my own part in this play belongs to the past tense), so I shall, at least to start with, talk about what used to be called the Student Induction Week.
We are talking about the first week of October, when the fruit-pickers from more southerly climes carefully pick the grapes that make their delicious wine: the wine that, along with love and laughter, makes the world go round. It is also the week when the new women and men arrive, fresh from school and with their eighteenth birthdays legally in place: full of anxiety about their new-found adulthood, but full of hope that their mothers will still wash their clothes when they return home at weekends, penniless, from their weeks of learning this and that (mostly that and not this, it should, in all honesty, be stated) at what they all now call a uni. (It used, in my day, to be called a varsity, but word-endings, along with posh accents, are so yesterday. Or so I was told when I made the relevant inquiry.)
Terminology can be so confusing, especially to the troubled adolescent mind, but I was told, by a distinguished member of what we are now encouraged to call the Professional Services (or PS) Staff (when I first started my working life, they were referred to as secretaries, but no matter), that the induction-word is no longer used. After all, it (the word) could refer to a sort of hard-to-explain electromagnetic attraction; a dodgy scientific method of reasoning from the fixed past to an uncertain future; a much more reliable mathematical sort of reasoning (look it up); as well as a sort of welcome. Understandably, to avoid further adolescent confusion, it (Week 0) is now called Welcome Week, and the philosophical induction-word has been relegated to the Critical Thinking component of the PHIL 100 module to which I used to contribute. At least, that was once the case, but my replacements (we really do have some) in the Department do not include a trained mathematical logician (which is what I am); so I am unsure of what is now actually taught to first-year philosophy undergraduates, only about half of whom have studied the subject at A-level (in my day, such an A-level did not exist, and we had to make do with what was called the General Paper). Still, I could bore for England talking about the past, as our excellent PS staff have occasionally told me (they just love a good laugh: which is just as well).
Anyway, the past is no more, but the writing must continue; so I shall carry on by reminiscing about how the Induction/Welcome Weeks used to unfold when I was fully in the saddle, as they used to say.
1.1.5 Academic dishonesty
Now, I may have mentioned that the role of Philosophy lecturer includes more than just teaching teenage (or near-teenaged) students in classes (either large lectures, or smaller seminar-groups). We also do what is sometimes known as administrative work, or just admin, some of which involves student affairs of various kinds. For example, for several years until my semi-retirement last year, I was what is now known as the Academic Integrity Officer for the Department. This sounds ominous, and it is. It primarily concerns what is called plagiarism, a word that many have come to dread. Essentially, the problem is this. At a university, unlike a school or sixth form college, students are required to write longer-than-usual essays (well over a thousand words, typically, though it is the maximum length that we police – we strive to teach concise writing habits), and – and this is the point – the work submitted for assessment must be the student’s own. Each student is expected to read, and to learn from, other writers – people who may well be tutor-recommended experts in their field, but who may also be obscure writers that only the student essay writer herself has heard of – and their contribution must, just must, be very carefully referenced. This means putting other people’s words into quotation marks, and saying precisely where the quotation comes from. Ensure that your work submitted for assessment (whose grade or mark contributes to your final degree class) is very, very properly referenced, so that you do not, despite your desperation to get your essay submitted in time (we are brutal about deadlines, unless you are sufficiently articulate to come and tell us of a good reason why you can’t meet one), get into trouble. Specifically, you don’t want to get ‘done for plagiarism’, as we typically put it. This happens when our software tells us that part of your work appears elsewhere online, and therefore cannot be your own work.
Okay, so what is originality, we plead? And can students not just make mistakes? Well, I have met a few villains, contemptuous to the last, who challenge us to do something about their gross theft of others’ material submitted as their own work – for which they expect the first-class degrees that their Daddies paid the University so handsomely for. University funding (more accurately, non-funding) is a subject worthy of a tragic three-volume novel in its own right, however, so I shall not go down that particular rabbit-hole (at least, not just yet). I just mention here, as I routinely used to mention to our newly arrived students, that the ultimate penalty for gross, unrepentant and repeated plagiarism is to be exploded into outer space: in other words, expulsion from the University without any chance of a decent reference. Daddy, beware.
1.1.6 The academic integrity officer? C’est moi!
Of course, especially with new students, we get cases where students simply have not yet grasped how to do things properly; and here we are much kinder. Technically, it is the University’s own very senior body, known in my establishment as the Standing Academic Committee, which oversees all cases of alleged plagiarism; but the initial investigation is typically handed over to the departmental Academic Integrity Officers. Cases where something is visibly wrong – we have software, called Turnitin, which detects whether student essays and exams (everything, since the days of Covid-19, now has to be submitted online) contain material to be found elsewhere – are dealt with by holding an informal meeting which the student under investigation is invited to attend. She or he is allowed to bring a friend (a member of the University, typically someone from the Students’ Union), but the meeting will proceed, and a verdict be delivered, even in the absence of the student accused. Also present is the tutor in question, or at least the convenor of the module in question (hourly paid tutors don’t get paid to do this, but are cordially invited to attend if they want to); a member of the PS staff, who takes the minutes; and, finally, myself, who chairs the meeting, having first established that there is a prima facie case to answer, as we say in Latin.
Students invited to attend these meetings are occasionally arrogant and boastful about what they can get away with, as mentioned, but the overwhelming majority are just terrified. Terrified, perhaps, that someone will tell their parents (on this, we reassure them: they are adults, if only in law, and the same law will send any tutor to prison if they talk to a student’s parents without that student’s express permission. I exaggerate a little, of course, but students need to be reassured.)
Also, they can be terrified of themselves, and of what they are apparently in the process of turning into: liars and thieves – for how else can plagiarists be described?
This is sometimes exhibited by a hopeless denial of the reality with which I confront them: an unambiguous Turnitin report. I am often sorely tempted to lead them astray here, to let them blunder from one inconsistency to another, but only in kindness: to assure them that I have only their future careers in mind; and that I wish only to dissuade them from a life of crime, since they patently have no, repeat no, talent for dishonesty whatsoever. This, together with the offer of a tissue from a box that I keep discreetly hidden, might help to turn a howling flood of tears into relieved giggling. However, I have always managed to resist this temptation; I fear that the quiet, but ever watchful member of the PS staff might exercise her right to minute the meeting as she observed it, and not as I did. This could prove to be a slight complication.
I should add that these meetings with our alleged miscreants tend to follow a certain pattern: firstly, an interview with the student present, where the facts are established (with myself as judge, the module convenor as counsel for the prosecution, and no counsel for the defence as such – since the facts themselves are nearly always pretty plain); and secondly, our deliberations – where neither the student in question nor her representative (should she have one with her) is present. It is here that I encourage the PS staff member to contribute fully, and for two related reasons. The first is that it is nearly always the question of whether the student is dishonest or merely academically incompetent (and a bit naïve) that is at stake; and PS staff are just as capable of diagnosing dishonesty in a fellow human being as any academic, even though the latter has much higher academic qualifications. The second is that the student in question very often has personal problems which the tutors may not know about, but the PS staff quite probably will. After all, they are paid to be in their offices in the Department during working hours, and are the first people students see when they come to the department (as opposed to their hall of residence, should they live on campus) to seek help. (Tutors are often elsewhere, and have a highly undeserved reputation for being no more interested in student welfare than their narrowly defined teaching and research duties require them to be.)
If a student, female or male, has a bad personal problem, for example with a relationship with another student, then it is often the PS staff who first get to hear about it. You know exactly what I am talking about; and the problems can be very serious, though this is not, of course, peculiar to young people, nor indeed to a university environment. But more of that later.
The third part of the plagiarism meeting consists of relaying our decision to the student, who has now been invited back into the meeting. It is here that the student can make a personal statement and apologise for what they did (assuming a guilty verdict, which is normal; should there be a mistake, the problem will typically have sorted itself out much earlier). There is then the formal disciplinary part of this section of the meeting, where the rules are read out, and the student warned of the horrendous consequence of a repeat offence.
Then, finally and most importantly, there is the counselling part of the meeting, where the student is recommended to seek help from Faculty staff who specialise in certain kinds of learning struggles that can lead to plagiarism. The whole process is meant to be supportive, not condemnatory, and this usually works.
Of course, life is not perfect, and problems with it continue even when you eventually cease to be a student at uni. Staff also have their problems with more senior staff, and what we learn from our students can help us with our own battles with authority. Such things happen. I nevertheless feel some pride in what I can achieve in securing justice within my own place of employment. I can only hope that my students learn something nice from having been under my care for a while.
1.1.7 The queen of the sciences
So much for that. Now, among my other admin duties, I have sometimes been required to give a general welcome to new students, sometimes just the ones studying philosophy, sometimes those who are doing anything that the Department teaches (political science, international relations and religious studies, as well as philosophy; at least it was thus in my day). When doing so, I briefly warn them about plagiarism, explain that there is nobody more terrifying than the departmental Academic Integrity Officer, even if he bears an uncanny resemblance to myself (albeit wearing a different hat), and inform them that there is always a handful of first-year students who don’t appear to hear what I am saying and then get done for plagiarism. Just make sure that you are not one of this handful, I say, and it often works. (It is a fact that it is often the postgraduate students, who really ought to know better, that are the worst offenders.)
Any road, as we say up north, there is (finally) my duty to explain to students just what philosophy is, and to try to sell it as a first-year academic option to those who have not yet made their minds up. This can matter to me for more than one reason, since student numbers affect our finances, and how the internal economics of the department works. Obviously, if a department is multi-disciplinary, as many are these days (large departments are cheaper to run, if only because they apparently require fewer PS staff to function: again, you have not heard the last of this), then there are bound to be internal tensions between the component disciplines (each member of staff tends to identify with no more than one discipline, and thereby gets regularly accused of what is sometimes called ‘silo-thinking’ by a certain kind of university manager – yes, we have them).
Anyway, to cut to the chase: what is philosophy? Obviously, this is what this book is primarily about, and you might think that it is about time that the matter was approached directly. Various snappy definitions have been given, but they are not especially satisfactory, and this is unsettling. Other disciplines do not seem to have this problem. Thus, we have the following definitions, not especially accurate or comprehensive, but quite sufficient to explain to your beloved aunt or uncle just what it is that you are studying at uni. As you can see, philosophy is something of an outlier here.
Accountancy: the study of how to count beans in a convincing way
Archaeology: the study of remote human remains
Architecture: the study of how to build things nicely
Art: the study of art (who would have thought it?)
Biology: the study of living things
Business studies: the study of how to run – and understand – a business
Chemistry: the study of what things are made of
Computing: how to program computers and understand how they work
Economics: the study of how money works
Education: the study of how to teach well
Engineering: the study of how to make things actually work
English: the study of the English language and its literature
Geography: the study of where things are and what goes on there
History: the study of the human past
Law: the study of law and legal systems; and how to be a lawyer
Linguistics: the study of language(s) as such
Management: another name for Business Studies (I think)
Mathematics: the study of numbers and related abstractions
Medicine: the study of how to heal the sick
Modern languages: the study of (one or two) modern foreign languages and associated literature(s)
Physics: the study of the ultimate nature of the world around us
Politics: the study of government and other political arrangements
Psychology: the study of how minds work
Sociology: the study of social arrangements
Theology: the study of God and religious beliefs; and of how to be a priest
Philosophy: the love of wisdom (that, by the way, is what philo + sophy literally means)
Such a definition (of philosophy) is a little vague, even compared to the other definitions given, and hardly does justice to the fact that, even though there is massive disagreement about conclusions, there is a broad consensus about what constitutes philosophical speech and writing. Why is this?
1.1.8 Disagreements and other difficulties
As far as the disagreement is concerned, one popular – and very good – answer is that disagreement about important questions should be the norm. It sometimes happens that we make sufficient progress in a given area of thought that systematic, agreed answers become available. However, we do not then say that philosophy has now made progress; rather, we say that we are no longer doing philosophy. A new discipline has been born, usually referred to by a Greek word followed by the suffix –ology (itself a Greek term that means something like: ‘the study of’). Hence, psychology, sociology, biology, criminology and so forth. Until comparatively recently, there were no such ologies, or special sciences: just philosophy. And the word science itself comes from the Latin word scientia, and simply means an organized body of knowledge. Physics, chemistry and biology are uncontroversially labelled sciences, sometimes as the natural sciences or hard sciences. Other, more questionable candidates include psychology, sociology and economics: their practitioners point to their experimental nature (they do not, as do philosophers and pure mathematicians, rely solely on pure thought). However, most historians and philosophers reject being labelled as scientists, even as social scientists (let alone soft scientists). Nevertheless, lawyers insist that the law certainly ought to be (and arguably already is) a science, even though law is something made, not discovered.
Theology certainly used to be thought of as a science (the systematic study of God); and so, infamously, was witchcraft. We now think of the latter simply as malign nonsense, but it meets many of the criteria that we tend to set for being a science, and this intrigues – and sometimes alarms – philosophers of science.
What tends to mark out the sciences is that they all work within a shared group of basic assumptions which are usually unquestioned. This sounds like fighting talk: is it being said that scientists are all dogmatists, and that they succeed in commanding widespread agreement only by refusing to think critically about their initial axioms? Well, not quite. What I mean is instead best explained by a few examples.
Thus, suppose that you are interested in prime numbers (i.e., positive whole numbers greater than 1 which cannot be obtained by multiplying together two smaller such numbers), and wonder whether there are any between two given huge numbers. (You would be amazed at how important that sort of question is, if only to guarantee banking security. Yes, really.) A mathematician is the person you need to see: he will be able to tell you, and demonstrate conclusively that he is right (should he be required to do so).
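(For the curious, here is a minimal sketch in Python – my illustration, not anything a working mathematician would use on genuinely huge numbers – of how such a question can be answered by brute force. Banking-grade cryptography relies on much faster probabilistic tests, such as Miller–Rabin, but the question being asked is the same.)

    # A toy sketch: list the prime numbers between two given bounds.
    # Trial division is far too slow for the enormous numbers used in
    # cryptography; it is shown here only to make the question concrete.

    def is_prime(n):
        """n is prime iff n >= 2 and no d with d*d <= n divides it."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def primes_between(lo, hi):
        """All primes p with lo <= p <= hi."""
        return [n for n in range(lo, hi + 1) if is_prime(n)]

    print(primes_between(90, 120))  # prints [97, 101, 103, 107, 109, 113]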
But suppose that you persist and say that you are not convinced. I just don’t believe that there are any genuine prime numbers at all, ha, ha (you say)! In fact, I suspect that numbers in general do not really exist. These Arabic numerals are just marks on paper, signs which can usefully be manipulated (to be sure), and they yield a lot of useful results, but there are nevertheless no such things as numbers. Not really. If you doubt this (you might continue, flushed with battle), ask the simple question: where are they? How come we can’t see them, weigh and measure them, kick them around when they annoy us (as they frequently do), and so forth?
Now, if you do this, and the mathematician is sufficiently patient not simply to kick you yourself down the very hard flight of stairs to the basement, what he may do is refer you to a different specialist. Arguably, a psychiatrist; but failing that, a philosopher, should his university still have any on campus. It is the philosophers, and not (for the most part) the mathematicians themselves, who examine such highly theoretical assumptions about mathematics. True, it was a mathematician, Leopold Kronecker (1823–1891), who famously declared that God created the integers (1, 2, 3, etc.), and that all else (e.g., fractions and decimals, and so forth) is the work of Man; but the fact is that even questions such as whether the square root of minus one (known, in the trade, as an imaginary number) really exists, or whether it is just an occasionally useful fiction, tend to get ignored by working mathematicians. Such investigations tend only to hold things up. Hence the impatience with the philosophy of mathematics, as it is called.
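(For readers who have never met these ‘imaginary’ numbers, a thumbnail sketch – my own gloss, nothing more – of what is at issue:)

    % The imaginary unit i is *defined* to be a square root of minus one:
    \[ i^2 = -1 . \]
    % Complex numbers are then expressions a + bi, with a and b real; and
    % equations with no real solutions acquire complex ones, for example
    \[ x^2 + 1 = 0 \quad\Longrightarrow\quad x = \pm i . \]

Whether that definition conjures a genuine object into existence, or merely a useful fiction, is precisely the sort of question the working mathematician declines to answer.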
Likewise, if you were to ask a physicist what sorts of things exist in the physical universe, she will be able to give you a useful answer. She will talk about electrons, gluons, and other elementary particles; and how they combine to produce the more familiar protons, neutrons and so forth that make up ordinary atoms; which in turn make up the molecules studied by chemists, and so on and on. However, you might get a more nervous reaction if you were to ask a more basic question, such as: what in general exists? After all, her friend, the mathematician, together with Campus Security, will probably have already warned her that you (yes, you) are still wandering about the place asking insane questions. She might, of course, give a professionally confident answer to the effect that the only things that really exist are particles, waves, and related such things in space and time – and that that is it. However, she is unlikely to remember to mention the wretched numbers mentioned earlier, which she herself uses a lot; and will probably not mention at all such things as love and laughter, pain and pleasure – even though she may experience all of them herself, and to quite a large extent (given the uncertainties of academic life, nowadays).
You might draw her attention to these omissions, of course, by way of making polite conversation; but most likely you will get a similar reaction to that of the mathematician, and just get kicked downstairs. Again. Such is life, and we academics are busy people with lives of our own to lead.
As mentioned earlier, human knowledge has a tendency to compartmentalize, and generalists tend to get replaced by specialists who, as the phrase goes, know more and more about less and less. Unfortunately, philosophers are often no exception to this rule, and the subject is divided into subdisciplines (such as logic, metaphysics, ethics, epistemology, aesthetics and so forth – about which more later) which require specialization and hence a narrowing of attention. Nevertheless, philosophers compartmentalize less than other academics do, and this is clearly beneficial, because we are often confronted with deep questions that traverse disciplinary boundaries.
To take one example: is there life after (bodily) death? Unless you are completely mad, you will have wondered about this at some time in your life, and yet be very unsure how to proceed to answer it decisively. You might be a committed physicalist, of course (and yes, I said ‘physicalist’, not ‘physicist’), and think that there is nothing but particles in space and time, in which case you will answer this question with a confident negative. You die when your component particles disperse: and that is that. But then you might wonder about where human consciousness is to be located; and whether we are just soulless things, or rather something else beyond an organized lump of biomatter. Pursue these matters further, and you will start to ask how the conscious soul (which is supposed to be immortal) interacts with the unconscious biomatter (which is not). You may even learn that thinkers from the 17th century were thinking these thoughts long before you came on the scene. In accordance with my self-imposed rule not to mention any names, I shall not say who it was who thought these thoughts then, and why he should be widely supposed to be the founder of modern Western philosophy; still less why he should be the same individual as the enormously famous mathematician (he united algebra and geometry, as it happens) who did more than anyone else to put the above-mentioned imaginary numbers (such as the square root of minus one) on the map. At least, not yet.
My point, rather, is that if you want to investigate whether there is life after death, you need to look at many ostensibly very different branches of knowledge: physics, biology, psychology and theology, to name a few. If their respective experts refuse to talk to each other, except out of hours, then not a lot will happen; and the deep questions that really worry us will be neglected. But philosophers tend to be different.
One way in which we are different is that we tend to meddle. I listed earlier some twenty or so other disciplines that find their way onto the university curriculum, and it is a remarkable truth that if you prefix their names with the phrase ‘philosophy of’, you nearly always get the name of a branch of my discipline. Thus, we have philosophy of mathematics, philosophy of science (where the status of witchcraft might be debated), the more specific philosophy of physics (where the weirder aspects of quantum theory are examined, for example), and so forth.
We also have the philosophy of history: where we discuss the difference between facts as such and an historical interpretation or evaluation of a set of facts; what historians tend to try to explain and how; whether historical explanations are similar to those in the natural sciences – and, indeed, whether history can usefully be thought of as a science at all (a point I touched on earlier).
It is usually here that I intersperse what can be a rather dry summary of information with a few jokes. One joke – which usually goes down well – is that there is all the difference in the world between the philosophy of history and the history of philosophy. You just need to put your thinking cap on to see the difference. The first is a branch of philosophy and it is about history; the second, equally obviously, is a branch of history and it is about philosophy. True, the second tends to be studied as part of a philosophy degree course and is largely ignored by historians, though they do talk about intellectual history and used to talk about the history of ideas, a new discipline invented by a philosopher. It is perfectly obvious, I repeat – albeit with the twinkle in the eye that the students tend, eventually, to notice. It is, to continue the joke, rather like explaining the difference between the policy of administration and the administration of policy, as a well-known television sitcom from the 1980s used to satirize, and to great effect.
1.1.9 A vision in white
I often use these pauses for jokes to look more attentively at my audience than is normally feasible – especially when there are lecture notes, PowerPoint slides and other learning materials to contend with (there inevitably are). I find that the 18-year-old newcomers tend to look younger and younger each year, but they are still, thankfully, missing that air of world-weary cynicism that they acquire when they leave their teens and discover that there is not really all that much that we can do to force them actually to attend classes. The last year that I was invited by my Head of Department to give this introductory talk (I did not know then that it was to be my last time, of course), I noticed that there was, oddly enough, something not quite right about the audience, though I could not put my finger on exactly what. Then it struck me. Normally, I recognize the faces of those students who regularly attend my classes, even though I find (I blush to confess) that over the years, student faces tend to blur into each other in such a way as to make it impossible to remember their names. An exception to this vague familiarity, of course, is the beginning-of-the-year induction talk, when the students are completely new.
And yet, and yet … there was one young woman (do not, whatever you do, refer to them as ‘girls’: all hell breaks loose if you do) sitting near the back of the class who looked oddly familiar – as well as just slightly older than the others.
I had originally thought nothing of it when I first noticed it; but the fact had stuck in my mind, for some reason.
Anyway, I was about to continue with my talk about the philosophy of this versus the philosophy of that, when I distinctly heard this person clear her throat in a meaningful manner. Other students also noticed, and started looking away from their laptops (an unusual phenomenon in itself, it might be added).
I won’t say that this lady had a natural air of authority about her; she was still a bit too young for that sort of thing. And yet I felt (a) that she had something she wanted to say to me, and (b) that it would be wise simply to let her say it. I followed my own advice (I often do), and heard her say the following.
You say almost the same things every year, but possibly you don’t notice. You may think that it doesn’t matter since your audience is always different each year, but actually it does.
It does what exactly, I said (trying to play for time: this interruption was most unusual, and I feared that I was about to be knocked off my perch, as the phrase goes).
It matters because, in case it escaped your notice, I have been attending this introductory talk of yours for the last three years (continued my interlocutor). In my first year, I did indeed feel that I was hearing something both new and inspiring. But by the time I reached my third year, and was thereby able to compare and contrast what you say we are going to get and what we actually get, I found myself becoming increasingly disillusioned.
Can you give examples? I said this with some confidence, as the question usually tends to stop the clever-clogs type of student fairly swiftly. However, I merely received the retort: examples of what, exactly?
With the ball placed expertly back in my court, I had no choice but to respond in kind. Examples of things I said in my annual introductory talk to new potential philosophy undergraduate students which came to disillusion you in some relevant way, I said. It was a bit of a mouthful, but I felt that my sparring partner probably deserved it.
I mean, said my partner (by which I mean my opponent, for this was definitely a singles match, not a mixed-doubles, as they call such things at Wimbledon), for example the fact that philosophy combines with virtually every other discipline (as is evidenced by feeble attempts to offer joint honours degrees with every conceivable subject offered at this University). These combinations are useless, artificial and attract virtually no students, as you of all people ought to know, she added vehemently. She then paused for breath, and I noticed that she had indeed been breathing in and out rather heavily (I do occasionally notice these things, as it happens). You may remember that it is around about now that you introduce your pièce de résistance of jokes, she went on, having temporarily recovered her breath and her composure.
And which joke might that be? I lobbed her a crafty shot that just skimmed the net.
The one where you mention (in your own words) that, in addition to there being a philosophy of engineering (which is something to do with what is often called continental philosophy – the continent in question is Europe, by the way, not Antarctica) and a philosophy of education (just don’t ask), there is also a philosophy of philosophy. Yes, you heard me right the first time, a philosophy of philosophy: indeed, according to some authors, more than one. Hence, the indefinite article. She recited all this without a pause for breath, but without a lot of enthusiasm.
Well, I confess that I always was rather fond of that particular comic gem, though I don’t think that this vision in white’s particular method of delivery did it justice. But my feeling of gross humiliation (through having been made to sprawl headlong in a desperate attempt to return the ball, figuratively speaking) was somewhat assuaged by my sudden recollection of just who my opponent might be. I do not remember her real name (alas); but I recalled a student from my final-undergraduate-year module on ‘Logic and Language’ who showed a precocious interest in what mathematical logicians call the self-referential paradoxes. These, roughly speaking, are paradoxes which emerge when you realize that you, the commentator, are actually a part of the group that you are studying and are consequently open to unexpected attack. From yourself, of all people.
A famous example of this is what is called Gödel’s First Incompleteness Theorem (or just Gödel’s Theorem). This (first proved in 1931) says, roughly speaking, that there is more to mathematics than there appears to be, and that some things in even elementary arithmetic can be clearly seen to be true but cannot be proved to be true according to any sane definition of a proof. Philosophers have managed to read all manner of fascinating consequences into this result, notably that we humans must have free will after all, or (at any rate) cannot be the simple machines that many thinkers have supposed us to be (more on that, later).
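(For the more formally minded, here is a slightly more careful statement – my own gloss, in modern notation, rather than Gödel’s 1931 formulation:)

    % If T is a consistent, effectively axiomatized theory containing
    % elementary arithmetic, then there is a sentence G_T of T's language
    % such that neither it nor its negation is provable in T:
    \[ T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T , \]
    % even though, on the standard reading, G_T is true of the natural
    % numbers. (Strictly, the second unprovability claim needs either
    % Rosser's 1936 refinement or the stronger assumption of
    % omega-consistency.)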
Fascinating stuff, to repeat, but also very difficult to understand properly for those without a solid mathematical background. An A-level in the subject shows willing, but is in no way sufficient. When I discussed it in class, I never knew which (most of) my students found the more intriguing: the fact that it was these highly abstract investigations that led to the invention of the programmable computer (the most famous member of the code-breaking team in England during the Second World War was a gawky mathematical logician and gay icon, as you probably already know); or the fact that the name of the man who first proved this theorem sounds just like /girdle/. At least, this is so on the eastern side of the Atlantic. They are more prudish on the other side, which became the man’s adopted home (he was Austrian, and did not get on too well with the local politicians), and have no desire to associate such a titanic figure of formal logic with a female undergarment.
Anyway, to return to the matter in hand, this nameless nemesis, who kept puncturing my dreams of being able to get through this introductory lecture without provoking a riot, was still glaring at me; only now she had risen to her feet to do so more effectively. She then said, in a voice whose pitch and volume had both risen a couple of notches, that I was probably wondering what she was doing in the introductory session in the first place, given that she was clearly bored out of her mind. Well, well, I thought to myself, she is psychic as well as entertaining to look at (a vision in an ordinary sort of white, though without the athleticism of a tennis professional, to the best of my recollection).
I guess my own telepathic abilities do not include the ability to keep my thoughts to myself too well, because the upshot was that her cheeks, hitherto rather pale and wan, developed a rosier hue about which I had somewhat mixed feelings. Possibly cause and effect, though possibly not.
Well (she said), the answer is that thanks to student debt I have to keep body and soul together somehow. If not through the pineal gland (as you call it), then through earning what little I can get by agreeing to mentor new students arriving into your system. This means listening to this freaking lecture! Again!
1.1.10 Reading the riot act
I am not all that well up on modern slang, and I may have misheard a crucial word of hers (her diction was not helped by the fact that her voice had now risen to a scream). However, it was now clear that classroom discipline, not normally a problem when one’s pupils are over 18 years of age, was becoming a serious issue. There was, after all, the welfare of the other students in the lecture hall to be considered, and they were beginning to show definite signs of alarm. Yes.
You see, whereas they (the other students) had originally been rather British about the whole thing and had affected not to notice anything particularly untoward going on in the room, they were now fully engaged in the proceedings – with half of them looking pointedly at me, and the other half looking with some amazement at her.
The restoration of classroom discipline turned out to be easier than it appeared; even though She Who Must Be Feared had somehow managed to glide effortlessly from the middle of a row near the back to a standing position in the front, just a few paces in front of me. (Exactly how she managed it, I shall never know.) Anyway, the upshot is that the others could thereby look firstly at her, then at me, then at her again (and so forth), without having to turn their heads too much. Just like Wimbledon, in other words. Easy, peasy pie (as we used to say in the school playground).
Naturally, I thought it best to allow her to continue her attack, if only to gauge her limits more carefully. She duly obliged me, and asked whether I was now about to tell the class something about what is called the ‘theory of knowledge’ (or ‘epistemology’, if you want to show off your knowledge of the ancient Greek language). Just what useful new knowledge is it that a degree in philosophy is going to give the (literally) poor new graduate? she asked the class rhetorically. Well, I’ll tell you (she continued). It is to learn that you know even less than you originally thought you did. For all you know, you might not be listening to me at all. You might still be asleep in bed, having dreamt at length about me. Well, I hope you got your money’s worth!
Yes, indeed, I said. Let’s hope they all did, I added, in case my meaning had not been made entirely clear. This, I think, rather knocked my little lady off her perch, for she went on to add (helplessly) that she was not impressed by the way in which her own dreams had so sadly failed to come true. Exactly how could she know whether they had come true or not, if she had yet to finish the dream, I asked. (I think, rather cleverly.)
The effect of my saying this, however, was to alter the pitch of her voice to a husky, but still very feminine bass, not unlike the gentle rumblings of Vesuvius giving the good burghers of Pompeii their second warning that a rapid exodus might be a wise career move. This was captivating stuff, though a little ominous, so I hastily informed her that I would do anything – anything – in my power to make her dreams come true. Anything.
I see, she said. Well, it may interest you to know that a dream that I often have is that I arrive at my exam completely unprepared and horribly confused. The exam from your Epistemology module, as it happens. Is that a dream of mine that you intend to ensure comes true?
Unprepared and horribly confused is exactly how I too felt when confronted with this rather new line of questioning. I told her that.
I haven’t finished yet, she said with the cool professionalism of the experienced cross-examiner. As I was saying before I was so rudely interrupted (she continued), I was not only confused and completely unprepared, having not had time to do any revision; but I had also not had time to dress properly, having slept through my alarm. All I could find was my great-grandmother’s rather floppy old hat adorned with pink flowers. Yes, an early 20th century floral hat. Exactly how do you think I felt about having to wear only that in the Examination Halls? Well? (She added this last word rather sharply when I merely gaped at her.)
Well, indeed, I eventually replied, having given the matter some thought. Hats can be problematic. In my day, you were required to wear what they still call a mortar board, something worn even now by students on both sides of the Atlantic on their graduation day. They (the hats) have a weirdly rigid square design, and are hopelessly impractical, especially when it comes to keeping them on your head; but they went well with what was called ‘sub-fusc’ at my own alma mater: dark suit, white shirt with a white bow tie, black shoes and socks (and underwear of any colour, of course – prescriptiveness only goes so far). And, of course, the gown.
I vaguely seem to remember that the ladies wore something similar, only more feminine.
The rumbling sound grew more and more threatening, however, and I was then told, with a hint of impatience, that when she said that she was only wearing the hat in question, she did not mean that the only hat she was wearing was the one in question; rather, she meant that the only thing that she was wearing at all was the hat in question. Now, she continued. Take a deep breath, and think carefully about what you are imagining when you say that you would most sincerely like this dream to come true.
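(For the logically inclined – the formalization is mine, not hers – the correction turned on nothing more than the scope of ‘only’. Writing s for the student, h for the hat, and Wears and Hat for the obvious predicates:)

    % Reading 1 (what I had lazily heard): the only *hat* she wore was h.
    \[ \forall x \, \big( ( \mathrm{Hat}(x) \wedge \mathrm{Wears}(s,x) ) \rightarrow x = h \big) \]
    % Reading 2 (what she meant): the only *thing* she wore at all was h.
    \[ \forall x \, \big( \mathrm{Wears}(s,x) \rightarrow x = h \big) \]
    % Reading 2 entails Reading 1; the converse, most emphatically, fails.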
Well, to be quite honest, I did not find this task to be unduly demanding, although she evidently thought it would be. I am not uninterested in female apparel – far from it – and have even been known to study it closely and comment approvingly on it. However, if there is little or no apparel there to begin with, there is little to study and, in consequence, not a lot to say. I told her that.
I, at last, felt that I was winning the argument; for, in response to my observations, there was a sigh of relief from the other students in the room, and even a tinkle of laughter. My nemesis (I now thought of her as Fidgety Flora in my own private language, though I dare say that that is not her real name), however, was clearly of the modern, so-called feminist persuasion. For she went on at me at some length about male arrogance and the degradation of women – and so on, and so forth. At fortissimo, I might add.
I had to admit a sneaking admiration for this majestic level of assertiveness, and was reminded of those Teutonic ladies on horseback with their long golden hair only partially occluded by the curvy winged helmets (more Saxon than Viking, I believe) that seem to suit them rather well. That, and the rousing music (plus energetic contralto voices full of passionate vibrato and wild upward-swooping phrases) that goes with it. I don’t get to see a lot of opera these days, but I keep meaning to.
Fidgety Flora, however, was clearly in no mood for compromise. She screamed that her valuable three undergraduate years had been completely ruined by having studied a freaking rubbish subject (that word again); and that, despite being flat broke, she had been duped into taking a Masters in the very same subject with a view to becoming a doctoral student, and then a lecturer (and even a professor).
If you’re so smart, how come you ain’t rich? was the terrible mantra that she bellowed at me, in an accent that I had thought only Californians could manage. I realized that she was talking about me, not her; and I confess that, conscious as I was that my own retirement was only a matter of time, this rather jolted me out of my newly-won complacency. I hastened to add that we had a new module on the philosophy of economics under consideration, and that it should not be assumed that money is everything (it isn’t, I might add – trust me, I’m a metaphysician).
However, all I got in reply was a very sad look, an abrupt about-turn (as they say in military circles) and a majestic march towards the glass-fronted door that was the only exit from our rather dilapidated but cosy philosophy lecture building. I watched her go, with a great many feelings all jostling for my primary attention.
When she slammed the door behind her, the chandeliers all shook (or would have done, had we not pawned them all to find money to help fund our poorer part-time postgraduate students – who, needless to say, are ineligible for government loans); but the dust soon started to settle again, as dust typically does. The last I saw of Fidgety Flora (through the glass-fronted door) was her striding purposefully towards the newly constructed (and, I believe, highly expensive) set of buildings where they teach what, I gather, are nowadays called STEM subjects.
1.1.11 Burgundy versus claret: a critical reappraisal [back]
Well, life must go on; and I still had a few more minutes to go to explain to our remaining new first-year intake what the advantages of studying philosophy (at my department, and at my university) might be. I duly did so.
That is to say, I explained about our first-year year-long modules, known (not very informatively) as PHIL 100 and PHIL 101. The former is about Knowledge and Reality and is (or was) compulsory for all Philosophy undergraduates (including those who study what is universally known as PPE). The latter is about Moral and Political Philosophy and therefore at the less rigorous, more touchy-feely end of the subject. It is not available to PPE students, as far as I can remember, even though PPE stands for Politics, Philosophy and Economics. The point is that only three 120-credit (i.e., year-long) modules can be studied at Part I (i.e., the first year, assuming you have a normal pattern of study), and they (the students) have to do their Politics and Economics introductory modules as well.
All this is fairly routine, elementary stuff, of course, and a far cry from the more profound ruminations that I had given them earlier about the nature of philosophy itself; and I could see this reflected in the increasingly glazed looks that appeared on those faces in the audience that I could still actually see. Still, it was Friday – and the hour of luncheon was fast approaching – and the poor kids (as we sometimes affectionately call them behind their backs) had been overloaded with information all week. For most of them, it was the first time that they had lived away from home, and their parents had had to have it explained to them (by their children, along with a few University employees that we pay primarily for this purpose) that, although there were indeed many bars on campus – transatlantic readers may be interested to know that the legal age for alcohol consumption in the UK is 18, not 21 – there were also a few laundromats as well. And communal kitchen areas in the student flats, which are what Americans refer to as dormitories, a term with a rather different meaning to the shared sleeping arrangements to be found at British independent schools, as we call them. I mention that in passing.
My audience was now getting very restive, however – although still relatively obedient to my occasional schoolmasterly glares – so I was able to end the session with a few well-timed jokes about young and inexperienced Wimbledon players. But only briefly: for my door was then rudely opened from the outside by a colleague from another department (with whom I had never really got on) who informed me, in case I had forgotten, that these sessions were meant to end at 10 minutes before the hour, not 10 minutes after. Well, fancy that, I murmured, and watched in amazement as my students (going out through the door) and hers (coming in through the door) seemed to lack any kind of disciplined coordination. Anyway, I gave my colleague a winning, though not entirely sincere, smile as I squeezed past her, and she replied in kind. I think.
Now out in the fresh air again (and, for once, it was not raining), I was able to contemplate seriously how I was going to feed myself.
It is often said that the people who do the actual teaching at what we call Oxbridge (not to be confused with Uxbridge – there is no university there) are so grateful for being able to do what they do where they do it, that they do not accept anything so vulgar as money as remuneration. Instead, they are fed on 5-star restaurant-quality food at what is called High Table (you do not need what old-fashioned nannies call High Chairs to reach it, by the way), and may – crucially – consume large amounts of claret that would retail at a small fortune should it ever be sold on the open market. Together with the privilege of working in ancient and therefore beautiful (if draughty) buildings, and alongside brilliant scholars with a similar background to the very best and the brightest (whoever they are), and who represent the crème de la crème, the ultra-elites that keep the world turning around on its invisible axis.
Now, I have always felt that claret is a bit heavy for serious consumption in the middle of the day, but I knew of a rather good, if (understandably) somewhat discreet, eatery on campus where senior managers (vice chancellors, and the like) were rumoured to satisfy their voracious appetites. As occasionally happens, when I feel in the mood, I go there and order from their small but excellent menu, and inquire whether I could have a glass (or two) of their admirable lunchtime burgundy.
Just how does a pint of burgundy differ from a pint of claret, I hear you ask (in a rather coarse tone of voice)? Is it primarily a difference in colour? Well, since you ask, I have decided that the pinot noir grape has an edgy quality most suitable for daytime consumption: when trodden on by the bare feet of pied noir grape-pickers, it is especially delectable – as I confidently informed a young and newly appointed Algerian colleague of mine. I am not entirely sure that she was impressed by this newly acquired knowledge, however; and she may even have blushed slightly, but her dusky skin colour made this difficult to determine accurately.
I suppose that I should have guessed that Fridays might be a problem and, sure enough, such was, indeed, the case.
Why? I was, fortunately, not all that late for lunch, but I had had to walk almost all the way up (what is rather creepily called) the Spine (i.e., that lengthy pedestrian-and-wheelchair-only path that links the south end of the campus to the north). This took some time, and my face was rather red as a consequence; and my gait, likewise, was a little unsteady (the result of a skiing accident some years earlier). In short, I was really looking forward to my lunch. And at the eatery of my choice.
Annoyingly, however, I found that my favourite waitress was not on duty when I finally arrived there. Instead, I was received by a gentleman of impressive gravitas (and bulk) who informed me, through clenched teeth, that the rules had not changed; and that lunch could only be purchased here by senior members of staff who also book appointments well in advance. He went on to add that there was a perfectly good eatery next door, albeit one where alcohol could neither be purchased nor consumed.
He also added, as an aside, that the kitchens of both eateries were now closed; but that cold snacks and even colder drinks could still be purchased at the other eatery – albeit only by debit or credit card (the use of cash on campus disappeared with the Covid-19 pandemic, never to return, it would seem).
Well, one must be grateful for small mercies, and I have to admit that a prawn mayonnaise sandwich together with a plastic bottle of diet cola actually does me perfectly well. Indeed, a short lunch-break (eaten at my desk) is often my preferred option – if only because I then do not have to actually be on campus as often as some of my more heavily unionised colleagues tell the boss is normal practice. I thought of adding a double espresso to my tray, but noticed (out of the corner of my eye) a rather disturbing spectacle.
1.1.12 California dreaming [back]
I may, of course, have simply been imagining things; but I could have sworn that I saw, at a nearby table, the Californian I alluded to earlier (the one with a rather vulgar attitude towards material wealth, you may recall) having an intimate tête-à-tête with what looked like the dreaded Fidgety Flora. I could not be sure of her identity (as I mentioned, students do tend to look alike), but the young lady in question glanced in my direction, gave a sudden shriek that reverberated around the whole of the north end of the campus, and then ran hysterically through the (luckily) still open glass door of the eatery towards the nearest taxi rank. The Californian (at least, I think he said he was from America’s west coast, though it is possible that he meant the Gulf coast of Florida: Americans can be very imprecise about their enormous geography, and Floridians rather parochial) then looked meaningfully at me, and drawled that I perhaps would need something a little stronger than a double espresso to see me through the rest of the day.
I was later to discover that what he said was more a declaration of war than a polite suggestion; but I was in no mood then for detailed analysis of other people’s speech-acts (as we call them), and I simply edged towards the door – though more calmly and with considerably more finesse than whomever it was I saw exit the building a few seconds earlier. Alas, the degree of finesse was not sufficient, and I found that he (the Californian) had somehow manoeuvred himself between me and the door. I thus found myself in further conversation.
Now, I am not easily intimidated, not even by my line manager (for it was he), and I simply asked him whether the stronger option to which he was referring was what subjects of His Majesty from north of the border call a large whisky, and what Americans call ‘a Scotch’, lest they confuse the liquid in question with their own peculiar beverages. Apparently, that is not what he meant, however, and he suggested instead that I see him in his office first thing on Monday morning. I therefore had the whole weekend in which to consider my position, as he put it.
1.1.13 My last ever office hours? [back]
Well, well, well, as the song goes: but that was not the only song I had in my heart when I finally made it upstairs to the extremely small personal office in my departmental corridor that I sometimes call home. I should add that I now had, for the next two hours, what we call Office Hours. As the phrase suggests, this is the time that, each week during term time, we can be guaranteed to be both visible (we have glass-fronted doors for Health & Safety reasons) and also available, particularly for any student that had not already booked an appointment.
The sad reality is that hardly any students take advantage of this highly standardized feature of academic life. This is silly, for it is much easier to give useful feedback on essays (and even exams) if there is a real two-way conversation between student and tutor of the sort that I remember from my own intense (but extremely hard-to-finance) undergraduate days. I am nevertheless realistic about these things, and resigned myself to being alone in my room until I departed for home at around 4.30 to 5.00 p.m. I therefore studied my computer to see what was new, the way everybody does these days.
Well, what was new was yet another email from a member of the PS staff (remember them?) asking me when she could finally expect to receive the grades from the September re-sit examinations. I gave a sigh of exasperation; but had to admit, ruefully, that I had, indeed, promised to attend to the matter quite some little while ago. So, I logged into Moodle and searched for the relevant material.
For the benefit of the ignorant, I should explain that Moodle is not something you eat with chicken broth, though many of us wish it were. It is, rather, the name we give to what is called a Virtual Learning Environment. It is, in short, a sort of digital place where everything we need to know about is organized online (as we say): module handbooks, lecture recordings and whatever. And also (thanks, once again, to the baleful influence of the Covid-19 pandemic) all of our students’ assessments, both coursework and exam answers. Gone are the days of illegible (and poorly spelt) paper copies of exams, easily lost in transit between office and study – or wherever your partner lets you work at home – so perhaps we should be grateful for small mercies here. Gone, likewise, are the hysterical crossings out with red biro (remember biros?) and equally illegible tutors’ comments.
Instead, we now have clean and efficient online essays – and exam answers – which cannot be lost or otherwise manipulated. They are just there; and the question of how many of them have actually been marked (or graded, as we say: more on the difference later) is information of considerable importance to the ever-watchful PS staff. Some talk of panopticons and Big Brother (though not of Big Sister, as far as I am aware) in this context, but I shall reserve talk about that sort of thing for much later in this book.
So, to work. I wonder sometimes, on reading what students actually write, whether they can really have attended the classes I teach. Could it be that they were taught something quite different by a quite different tutor (arguably, from a different species altogether) from a parallel universe that somehow got entangled with this one? As an inference to the best explanation (to use a term of art from the philosophy of science – remember that?), it has a lot going for it. I also find that my mind tends to wander a bit, particularly when considering the relative virtues of proper names and definite descriptions, which (as it happened) was what the student had attempted to write about (it was a ‘Logic and Language’ exam). I have already mentioned my aversion to the use of proper names, and the alert reader will have noted that there are hardly any to be found in my prose – at least, so far.
Why? Descriptions at least have content, though there is always the risk that more than one entity satisfies the given description (or cluster of descriptions) used in the (often, heroic) attempt to steer the reader’s or listener’s mind in the right direction. I have already mentioned the name of Moodle, itself not very descriptive, it must be said. I preferred my university’s earlier name, LUVLE: a more evocative term, which does indeed sound lovely, and is an acronym (the UVLE part just stands for University Virtual Learning Environment). I guess that somebody used their imagination here, for once.
But we are stuck with Moodle. The reason is that every other university and college (or, at least, most of them) likes to use this same system. Apparently, or so others repeatedly tell me, there are considerable advantages to doing things in this way – i.e., the samey same way; and, likewise, to using only words and phrases to which everyone in the relevant society gives the same meaning. Private meanings, like private ways of living our own lives, are just so yesterday.
I notice that I am still stuck on the second paragraph of the first (of my many) examination answers. I was particularly struck by what appeared to be a word spelled K-R-U-P-K-E. Now, I just happen to know (because I have an excellent memory) that this word spells the name of a fictional character, an officer of the law in that most engaging musical called West Side Story, and there is even a most amusing song about him (you are probably still reading me online, so go to YouTube if you want to hear it: you should).
I see that I have inadvertently used the name of a musical show, albeit one inspired by the Bard himself, no less, instead of simply giving a description that it uniquely satisfies. Now, panic sets in, for I do not know how to eliminate absolutely all proper names, particularly if they are names not merely of people but of entities of any logical category that can be named. There now follows a brief digression while I explore this issue.
What about the word that sounds like /green/, for example? It is certainly a common English surname, but also (it seems) the name of a colour. Must it too be committed to the flames, lest our philosophy of language go haywire? Some think, without having thought too much about it, that every meaningful word is a name. That is just what it is for a word to be meaningful – it simply refers to whatever it is that the word is about. (Their dogs all answer to the name of ‘Fido’, by the way.)
If so, the elimination of all proper names would be very easy: just say (and write) absolutely nothing at all. For ever more – for once a linguistic habit is lost, it is not easily recovered. Ideas will just float about unharnessed to anything visibly outside the mind. And do we want that? At the risk of saying what you, dear reader, already know (for you will have read the blurb on the back of this book, as well as the outrageously complimentary reviews from various impressive people), I actually want it a great deal, and I suspect that you do too. But neither you nor I can really know this – yet – for the book is still in the process of being written. (OMG, as they say in the social media.)
I see that incoherence has set in, a natural consequence of trying to get into the mind of the author of most students’ examination answers. One tries – that is what one is paid to do – but it is an uphill battle. Publishers (and, indeed, their commissioning editors) should be more sympathetic to the plight of would-be communicators of something so original that it can only barely be thought about, let alone said (or written about). Or so – I say. And, so also, they (the students) might say.
Eurghh! Sometimes, I feel that I have finally cracked the psychology of student authors. They have all been influenced by the Californian to the extent that they feel that their destiny is to write, not the perfect examination answer, but the Great American Novel, albeit a shortened version. The literary style is certainly required to be very different from what is conventionally expected. An alternative theory is that we have – instead – the dreaded Irish influence, and the feeling that the best way to translate our immediately visible flow of ideas into printed form is by means of that most elusive of genres, the Stream of Consciousness Novel. I said that I would not mention names, so I shall not mention James Joyce’s final outrage, known universally as Finnegans Wake. There, I failed to say it, as you probably noticed.
Their tutors, however, retain mixed feelings about such ambitious get-up-and-go. It is they, after all, who have to read the wretched stuff, their parents having given up some time ago.
1.1.14 Homeward bound [back]
It was clear that I was getting nowhere fast, and I do, after all, have the ability to access Moodle from home; so, I decided that I had had enough excitement for one day, and resolved to make my way home. I thus put on my practical (though not especially smart) raincoat (I can shove my woolly hat into one pocket and my plastic supermarket bag into the other), looked briefly around my office, smiled a smile of relief, and turned off the light (unnecessarily, as it happened, since the lights are all mysteriously heat-sensitive).
Now that I was definitely en route away from the office, I perked up quite a bit and made my way confidently down the corridor towards the rather obscure staircase that led to an exit that few knew about, and which in turn led to where I usually park. I looked instinctively left and right as I did so, simply to see which of my academic colleagues were still in on a late Friday afternoon. A few were, but not many.
I finally reached the equally small-sized offices at the centre of the main corridor, namely those inhabited by our estimable PS staff, and moved rather more cautiously, I blush to confess. Some offices I moved past very quickly and silently indeed; others were treated by me more as alternative homes from home, and I greeted their inhabitants fulsomely and most affectionately. They too smiled sweetly at me, and expressed their surprise (and even shock) at my ability to stay at the office for as long as I did. Yes, indeed, I replied; now have a really, really nice weekend. They would, they assured me.
Along to the very end of the side corridor, and down the stairs, having finally remembered which of the doors at the top I needed to pull, and then out into the fresh air again, though I noticed that it had now started to rain. It happens in this part of the country. Walking over the grass towards my vehicle, I was pleased to notice that nobody had seen fit to issue me with a parking ticket, despite my parking in an area reserved either for guests or for staff who share their vehicles with other colleagues (they pay slightly more than the singletons, and get given a differently coloured sticker, or so I gather).
Our perimeter road is quite attractive, with lots of autumnal trees, and has a sensible 20 mph limit with plenty of speed bumps. Indeed, the whole campus is not only a geographically very pleasant place to live and work, it was also ranked as the safest campus in the UK for many years. I used to comment on this when talking to applicants’ parents when they came to our Open Days. This was back in the day when they still gave me the responsibility to do such things. Then along the surreally winding route down the hill, past the duck-pond, to its terminus at the main north-south A6 road.
Now, I usually turn left here to go away from town towards the nearest motorway junction southbound, but occasionally I turn right and go towards town. The city itself is actually quite attractive, though with a castle rather than the large church (or Minster) to which its historical rival a few dozen miles to the east is attached. We don’t talk much about it (the rival), and mention of what is called the Russell group of UK universities does not impress us. After all, we are the ones who excel in the league tables, and the name ‘Russell’, in this rather feeble context, merely denotes a hotel in the nation’s capital rather than the author of a rather obscure paper about the difference between proper names and definite descriptions (yes, them again).
As I say, I go towards the city centre, but do not arrive. Instead, I reach a rather nice supermarket which conveniently sells me many of the provisions I need to eat for dinner (I do most of the cooking in my household, if only because I rather enjoy it). Once stocked up, I make my way southwards down the A6 again, past the entrance to the University, and on through a rather eccentric village (with a fearful set of traffic lights), down to the junction with the M6, the main motorway which ends up in Glasgow if you go north, and in London if you go south (though the name of the motorway changes every now and then according to unfathomable rules).
I then turn left and go on around a weird slip-road with a full 270-degree turn, and finally accelerate confidently onto the motorway. Hooray!
The journey home lasts about 40 to 45 minutes, and tends to be uneventful. True, I have been known to fall asleep at the wheel for a moment or two, a somewhat hazardous activity. I have also been known to miss the crucial turn-off when the M61 (to Manchester and Leeds) diverges from the main M6 (to Birmingham and Liverpool). As a Londoner by origin, I regard these great English cities as essentially northern, and am regularly amazed by road signs that say that we go towards the (capitalized) SOUTH in order to get to them from work. In consequence, I rarely go wrong when I attempt simply to go home, as I really need to concentrate on where I am.
Eventually, I reach my bijou little cottage, my real home, unpack my shopping and then retire to the rather elegant pub next door where I consume a well-earned glass of Chardonnay. Or two, if I am in the mood. And then reflect, gently, on what I have managed to achieve that day.
***
The day had certainly been busy. There had been, as I have already related, those rather interesting plagiarism cases first thing, followed by the student induction meeting. Lunch, a rather odd meeting with a Californian, and then some rather fruitless attempts to grade papers during an otherwise uneventful set of office hours.
All rather routine stuff. Except, of course, for her. That woman.
I thought back on what she had said about the subject I love and try to teach. It seemed to me then, and even more so now that I could reflect on it from a distance, that her strictures were unfounded and her personal criticisms of my style of teaching quite brutally unfair. I should add that I only gave a brief summary of her tirade, and that she had actually said a great deal more than I related earlier, though you have the gist. Her main point seemed to be that my very traditional way of doing philosophy was no longer of any relevance, and that radical re-thinking was needed if the subject were to survive into the 21st century – and pay its own way.
She had, for example, trashed my conception of epistemology (that is, you will remember, the study of the nature and limits of knowledge – in particular, the human knowledge that we allegedly have here and now). We should (she insisted) not be interested in whether we are constantly being deceived by evil bodiless demons who are adept at hiding their tracks, but rather whether our entire minds are being controlled and manipulated by evil embodied billionaires who literally own most of the media – and in plain sight. Everything, she averred, is political; and everything is gendered and/or sexualized in the sense that subjugation by men of women distorts and corrupts everything we try to say about the world around us. Likewise, the oppression by wealthy white people of virtually everyone else on the planet is so all-pervasive as to be virtually unnoticeable, alas, even to the critical thinker (I think she meant me).
Contemporary moral and political philosophy is also pretty useless (I was informed), as it concerns only dry abstractions and not the very real ethical and political dilemmas that regularly confront us. And why (to repeat) is it always about dead, white European males? Are they really the only people who ever had ideas worth thinking about? We may, in our snooty sort of way, decry what used to be called obscurantism, and support critical thinking, but if our opponents are self-declared chuckle-heads (I don’t think she actually used that phrase, by the way), how can we convince them of the error of their ways without simply begging the question (as you logicians phrase it)? Or by just beating them hard with their own big sticks?
So, what is the alternative to what we ourselves feel to be rational argument, I remember having asked her, having finally plucked up the courage to get a word in edgeways? What, indeed, she said: but gave no further reply. She just looked at me as if to say that it was my job, not hers, to look for such alternatives.
Well, mine host also knew how to get a word in edgeways; and he asked me, very politely, whether I by any chance wanted another drink – or anything else, for that matter. I got the hint, and departed for my home next door, having said my equally polite goodbyes to the local barmaid.
1.1.15 Dulce domum [back]
At last, I could concentrate on preparing dinner – as always, rather later than originally intended. I find that the practical business of turning (some of my) shopping into something that can safely be ingested acts as a useful corrective to the rather excited flow of ideas that a hard day’s work can stimulate. I don’t necessarily say much out loud when I am in the kitchen, I should add, but that does not mean that my mind is inactive. I could and did, for example, reflect on the student exam answer with which I had so hopelessly failed to engage effectively. I did not, of course, remember the details, but I did wonder once again exactly what literary genre its author had been attempting to emulate.
The art of essay-writing is not a skill that most of us are born with. Exactly how you teach a skill to someone unless they already have the (itself, untaught) aptitude to learn it is a very deep and ancient problem, and some have declared that it is just impossible. If they can write essays at all, then they must have first acquired the ability in a previous incarnation, one where the teachers were probably better remunerated and did not have to work in caves. Or so I sometimes feel. I am more cheerful and optimistic than this for the most part, however, and often argue that if you can beat a sword into a ploughshare without its simply ceasing to exist as a consequence, then you can probably transfigure a sow’s ear into a silken purse, whatever the general public may think to the contrary.
I often camouflage what I am really getting at when I talk to my metaphysics students about this, by phrasing it in terms of the traditional distinction between a qualitative and a substantial change; but I shall not discuss that now as it is a somewhat abstract topic which requires, for its proper understanding, a grasp of what is technically known as modal logic (look it up, if you must know what that is). Moreover, I see (or, rather, smell) that I have nearly allowed the oven chips to burn – yet again – so great was my concentration on otherworldly matters, so I shall desist (for the moment).
My dining room is a bit cramped, so I hardly ever use it to entertain guests. It also opens directly into the world outside (which is a bit odd), and is separated from the kitchen by the main living room (i.e., the room with the television – what, in days gone by, used to be called the parlour), so is not especially practical, especially since it (the dining room) contains an upright, and seldom played, piano. I have become used to it, however, and am pleased that my humble abode has what estate agents refer to as ‘character’.
However, I don’t suppose that you, dear reader, are unduly interested in these fine details; and even less in precisely how a humble ribeye steak manages to transfigure itself into a part of me, fascinating though that is to talk about (as many a dinner hostess has kindly explained to me). I could say more, however, about the wine, which I typically purchase from a society of which I am a distinguished member, and which perfectly accompanies the transfiguration in question. Neither expensive burgundy nor claret, as a rule – nor even the rather nice Chianti that I keep in my wine cellar that might one day be consumed with fava beans and a line manager or two (if the zeugma may be excused) – it usually comes from one of various countries in the southern hemisphere. Exactly why, I don’t know; but not even professional philosophers are required to have an explanation for everything. It tastes good, however, and that is all that matters.
One course is usually enough for me and, admirably refreshed, I now return to my sofa and watch the box, as they used to call it. Anything in particular, I hear you ask wearily (for it is indeed getting quite late)? Well, I confess that I am a bit sad in this respect, and find the 24-hour news channels rather addictive. Seeing the same horrors over and over again is not to everybody’s taste, to be sure, and there comes a point in the night when finding the TV remote and pressing a button or two becomes inordinately tiring. Still, it is useful, when doing moral philosophy, to find actual examples of outrageous behaviour that are even more improbable than those suggested by the thought-experiments that we philosophers so much rely on in order to develop and justify our intuitions (as we like to call them).
But one must maintain a good work/life balance, must one not? So, let us leave the Middle East to those in my department who actually know about such things (if only because that is where they actually come from).
1.1.16 Night, all! [back]
Anyway, the other things I like to watch, on my Freeview TV, particularly late at night, are the non-stop dance music channels which would be a complete delight were it not for the fact that most of their time is devoted to ads – and always the same ads over and over again (and they are synchronized, in the sense that you cannot escape them by channel-hopping, a practice which I have always felt to be a bit below the belt). Still, the artistes are often worth looking at, and I find that I can decompress quite effectively by studying their contribution to contemporary popular culture.
What happens at the very, very end of my conscious day is, however, not fit for the public gaze, so my many various experiences, not least of all those concerning the antics of a few fidgety floras and their dancing companions, shall be passed over in tasteful silence. Instead, I shall merely inform you that I closed the deal (as they say), and shall now climb the wooden hill to Bedfordshire, as people from an earlier generation used to put it; and then sleep the sleep of the just. Which I duly did. [back]
1.2 Saturday Morning
1.2.1 The Ting Tings
I don’t get out much these days, and I have to admit that many of the events that I alluded to in the previous chapter did not happen here at all, but rather in a nearby possible world. You will now naturally want to know what I mean by this. What worlds could be possible other than the actual one? And where are they?
Well, we are often assured, for example by a man whose name sounds a bit like that of the police sergeant from New York’s West Side (whom I mentioned in 1.1.13), that a possible world is not like a distant star system, a place out there (as they say) that we could, in principle, go and visit – and then investigate. It is, rather, just a way that things might have been. For example, my coin landed heads when I tossed it just now (never mind why I did it). But things might have been different: it could, instead, have landed tails (it was, after all, a fair coin). So, there is a possible world which is very similar to ours except that the coin landed tails. Since the coin cannot land both heads and tails (at the same time), it follows that that world and our world are different. Since everything else is very much the same, we regard this world as being nearby (and suppose that the whole panoply of different possible worlds forms what mathematicians call a multi-dimensional metrizable space).
However, to say more at this stage would be to engage in the rather hard discipline called modal metaphysics (look it up), or even in the even harder and yet-to-be-invented discipline of modal epistemology (just don’t ask). I am still slightly in shock at the limitations set by the Californian (remember him?) on what I can safely write about: autobiographical facts, perhaps, but original research, definitely no. Just no.
As for names, well I have already indicated that we might have a problem here; and a musical interlude might help to illustrate some of the difficulties. The song I have in mind for you all is distinctive if only because it appears to have no name – at least, that is what its name says. The lead singer is a rather sassy blonde who appeared on a television programme a few years back which was hosted by someone whose name sounds like ‘jewels’ and whose show is called a ‘hootenanny’. I myself had always imagined that the latter was a sort of nurse-for-the-nose that you might need if you pass too many adverse comments about other people’s girl-friends; but I am assured otherwise. Anyway, YouTube again.
1.2.2 The passing of the possible
Modal logic (which I have already mentioned, though only briefly) is that branch of logic which concerns what is necessary and what is possible. Ordinary, nonmodal logic, by contrast, merely concerns itself with the actual – what is either actually true, or else what is actually false. I shall now break one of my cardinal rules, and mention the name of one of the great philosophers of the past, namely one Gottfried Leibniz (1646–1716). A Hanoverian diplomat, he was also one of the greatest mathematicians and philosophers who ever lived (as well as being an expert in just about everything else that late 17th century thinkers could offer). It is he who first coined the phrase, ‘the best of all possible worlds’, and he was hammered by a French near-contemporary of his for supposing that that is what God has created for us.
Possible worlds are relevant to modal logic because we define a necessary truth to be one which is true in all possible worlds, and a possible truth to be one which is true in some possible worlds.
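For those who like to see such things written down, here is the usual textbook shorthand (the box and the diamond are the standard symbols for ‘necessarily’ and ‘possibly’; the glosses are my own, and I quietly suppress the refinement on which only worlds ‘accessible’ from a given world are counted):

\[
\Box\varphi \text{ is true at a world } w \iff \varphi \text{ is true at every possible world};
\]
\[
\Diamond\varphi \text{ is true at a world } w \iff \varphi \text{ is true at at least one possible world}.
\]

Note the pleasing duality: \(\Diamond\varphi\) comes to the same thing as \(\neg\Box\neg\varphi\) – what is possible is just what is not necessarily false.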
So where are they, these possible worlds? Everywhere, it seems, for modality (that is to say, necessity and possibility) infuses everything. Merely to describe a stick as hard, for example, is to talk not just about what it does, but also about what it would have done if things had been slightly different (which is what goes on in certain nearby possible worlds). For example, if you were to have hit someone with it (luckily for you, you didn’t).
To put it brutally, without an understanding of modality, you cannot even grasp the handle of a stick in your hand safely, let alone beat the other guy with the hard end of it – and understand what is going on when you are doing it. Even non-human animals – your cat, for example – must have some grasp of what would happen if it were to do something other than what it did: fail to run away from the approaching dog, for example.
Note that the italicized modal verbs (as grammarians call them) in the preceding sentence represent the subjunctive in English (a somewhat obscure mood, as it is called, compared to what we find in Latin and the Romance languages): and the subjunctive concerns what is doubtful, whereas the normal indicative concerns what is definite; with what might be, or might have been, versus what actually is.
Now, you might think that the subjunctive mood is something of a luxury, a latecomer to ordinary human thought. You would be wrong. Without the subjunctive, we cannot even begin to grasp a causal connection; for to say that X caused Y is to say (roughly speaking) that if X had not happened, then Y would not have. Sentences beginning with an IF are called conditionals, and they are often subjunctive in character. If you see the world as just a causally unregulated mass of events or states of affairs, with no more organic unity connecting the parts than may be found in a pile of bricks, then you are not going to go far. Indeed, some would say that any kind of coherent thought would itself be strangled at birth. You could just not function unless you can see subjunctively – and the world that you see cooperates.
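In symbols – and this is only a rough schema, not a finished analysis (it notoriously stumbles over cases of pre-emption, among other things) – the thought is this, using the ‘box-arrow’ that has become the standard notation for the subjunctive conditional:

\[
X \text{ caused } Y \;\approx\; \neg X \mathrel{\Box\!\!\rightarrow} \neg Y,
\]

read: had X not occurred, Y would not have occurred either.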
Now, experts will notice that a great many subtle distinctions are being rammed together here, distinctions that need to be carefully disentangled if we are to avoid making some huge and serious mistakes. However, this part of the book is still called the ‘Overture’, and with reason. I am painting with a broad brush, and merely introducing themes that will be developed at greater length within the main body of the work. The general picture needs to be seen holistically before we put its components under the microscope. What we need to learn here is simply that modality is crucial to a general understanding of the world, and therefore demands our very close attention.
The phrase, ‘the passing of the possible’, is also the title of a work on counterfactuality, as it is called, that is not as influential as it once was. Its author, an American philosopher called Nelson Goodman (1906–1998), was deeply suspicious of the concept of similarity, and would not have been unduly impressed by the later (and more influential) analysis by another American philosopher by the name of David Lewis (1941–2001), which defines the crucial notion of a nearby possible world in terms of worlds which are most similar to ours (given certain specific, fixed differences). For example, Lewis considers the famous counterfactual conditional: if kangaroos had had no tails, they would have fallen over. He claims that it means that in any situation (i.e., possible world) which is as similar to ours as is consistent with kangaroos’ having no tails, the kangaroos fall over.
Okay, so there are possible worlds where kangaroos have no tails but still do not topple over: where they have wings and fly around, for example. But such worlds are too dissimilar to ours to count.
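For the record, and simplifying in a way that Lewis himself flags (I help myself to what he calls the Limit Assumption: that there always is a set of closest antecedent-worlds to be had), the truth condition comes to this:

\[
A \mathrel{\Box\!\!\rightarrow} C \text{ is true at } w \iff C \text{ is true at all the } A\text{-worlds closest to } w
\]

(or there are no \(A\)-worlds at all, in which case the conditional holds vacuously). In the marsupial case: the closest worlds where kangaroos lack tails are ones where they keep their actual bulk and their actual lack of wings – and there they topple.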
Goodman, by contrast, prefers to understand counterfactuals in terms of partial deductions, in the following sense. We may deduce that kangaroos will fall over from the (as it happens, false) premise that they have no tails, provided that we add as extra premises the laws of nature (the law of gravity, for example), AND some assumptions about what are sometimes called the ‘attendant circumstances’. In this case, the fact that they do not have wings would count as an attendant circumstance. Now, for reasons that I shall make plainer later on, it is hard to know what is going to count as attendant circumstances, since some of them must change if things were to happen differently (and the whole point about counterfactuals is that things which are – definitely – true would cease to be so in the alternative possible worlds). What we keep fixed in our imaginative reconstruction of what would happen if (such-and-such) depends not only on the such-and-such, but on the surroundings; and whether something counts as a part of the surroundings seems to depend on context: and people will disagree about what the context might be. (It will not be set in stone.)
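Schematically, then, the Goodman proposal is that the counterfactual ‘if A had been true, C would have been true’ is sustained just when the consequent is deducible from the antecedent together with the laws and the circumstances:

\[
A,\; L,\; S \;\vdash\; C,
\]

where \(L\) collects the relevant laws of nature and \(S\) the attendant circumstances. In the kangaroo case, \(A\) says that kangaroos have no tails, \(L\) includes the relevant mechanics, \(S\) includes (among much else) the fact that kangaroos have no wings, and \(C\) says that they fall over. The trouble, as I have just indicated, lies wholly in the innocent-looking \(S\).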
We shall see, in fact, that the real mystery here is not why it is sometimes hard to decide whether a statement of the form, if X had been true then Y would also have been, is true; but, rather, why it is not always hard, given the flexibility of context.
1.2.3 Epistemic and metaphysical possibility
One more distinction. There is a difference between saying that something might be true and that it might have been true. You might think that the difference is one simply of tense, but in fact it is one of what grammarians call aspect. More precisely, we have the distinction between what is called epistemic possibility and what is called metaphysical possibility. Thus, consider an example. Suppose you give a talk and you find that fewer people than you hoped actually turn up to listen. You might later say, in sorrow, that there might have been 20 people present. What do you mean?
Well, it could mean something like: for all you know (you did not bother to count), there might have been only a measly 20 people present (you had hoped for at least 30). This is an epistemic possibility; it concerns a limitation in what you know.
But it could instead mean that 20 people could have come, indeed promised to come, and there were ample seats available for them to sit on – and yet they did not come. You know this, because you know, having carefully counted, that there were only 12 people present (including yourself). This is a metaphysical possibility: it has nothing to do with what you know, but rather with what goes on in nearby possible worlds.
Usually, it is clear from the context which sense is intended. Thus, if I inform you that you might have been killed, when you narrowly miss getting hit by a speeding car, I am not expressing doubts as to whether you are still alive. The possibility in question is metaphysical. On the other hand, if I say that the table at which I am working might have been made of plastic (with a wooden veneer), I am simply expressing my ignorance. The possibility is epistemic. That actual solid wood could not have been plastic in any metaphysical sense: that would be a chemical impossibility.
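If a formula helps to fix the contrast, here is a rough one (the subscripts are merely my own labels, nothing more):

\[
\Diamond_{\mathrm{e}}\, p \;\approx\; p \text{ is consistent with everything the speaker knows};
\]
\[
\Diamond_{\mathrm{m}}\, p \;\approx\; p \text{ is true at some possible world}.
\]

Read with the first diamond, ‘the table might have been plastic’ is true (I have never checked); read with the second, it is false (the table being, in fact, solid wood).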
This distinction can be made to seem obvious, but it is not. Until Saul Kripke (1940–2022) – yet another American philosopher, and, at last, to give him his correct name! – drew it forcefully to our attention in the early 1970s, epistemology and metaphysics were not carefully distinguished.
A central question of this book is whether the prevailing Kripkean orthodoxy is sustainable. It can be made to look obvious nowadays, but for centuries past it was thought otherwise. The most important figure on the other side of the debate is another great German philosopher of the past, one Immanuel Kant (1724–1804), about whom a lot more is to come. Not a Hanoverian diplomat like Leibniz, but rather a Prussian academic, he was strongly influenced by the German metaphysical tradition that Leibniz and his disciples had formed. (He also considered himself to be influenced, in addition, by a highly significant Scottish anti-metaphysical thinker, but I shall defer discussion of that possibility until later.) At the moment, it is enough to note that Kant’s main metaphysical task was to see to what extent we could prove to be true one of Leibniz’s most famous theses, known as the Principle of Sufficient Reason. This states, to put it baldly, that there must always be a reason why a thing is the way it is. Nothing can be, or happen, for no reason at all. We may not know what the reason is (and probably have no urge to find out), but reason there must be.
A related principle is that called determinism: roughly, that every event must have a cause. Causation again, you notice.
The critical word used here is reason. And before proceeding further, we need to know more about what that word is supposed to mean, especially if we talk about what we can know from reason alone. We need to talk that way, if only to do justice to the whole epistemological tradition of the West from the 17th century onwards, a tradition which some 20th and 21st century philosophers regard as an abomination, but which should not, in my opinion, be ignored lest we obliterate too much of our own history. However, although Kant uses the term reason a lot, he more often talks of the understanding (which is subtly different); but, most importantly of all, we have the adjective (and adverb) a priori. This term has become standard in Western philosophy, both in the English-speaking world and also on the European continent. What, exactly, does it mean?
1.2.4 Necessity, universality and the a priori
I shall defer a more precise analysis until later; but the gist of it is that we say that we know something a priori just in case we know it independently of experience – by which I mean any kind of sensory observation. Kant himself considers the case of a man who thinks that he knows a priori that his house will collapse (since he himself has undermined its foundations). It is a priori in the sense that the man does not actually have to look at the house in order to know that a collapse is imminent. However, it does not count as a priori knowledge in the relevant sense. He also has to know, among other things, that unsupported objects tend to fall; and we only know that through observation.
Some equate the a priori with the innate, with what we are programmed from birth to believe, but that is also contentious. We are not programmed with advanced mathematical knowledge in place, yet the latter is usually thought of as paradigmatically a priori: departments of mathematics do not have laboratories where they perform necessary experiments. Likewise, and conversely, even if we did have innate beliefs about the dates of the kings and queens of England, for example (perhaps due to some weird sort of genetic engineering), we should still need to consult archives and such like to determine if these beliefs are actually true or not.
The antonym of a priori is a posteriori, incidentally; or, more often, the English word empirical: from which we get the word empiricism, which is the thesis that all non-trivial knowledge comes from sensory experience. The latter is contrasted with rationalism, which declares that the most reliable knowledge comes instead from reason.
How do you tell if a given item of knowledge is a priori? Kant famously declared that necessity and universality are marks of the a priori, but he was rather swift about saying precisely why. He said, rather abruptly, that although experience may teach us that the world is like this or that, it was powerless to tell us why it should just have to be this way rather than that. He did not talk about possible worlds (it should be noted that Leibniz’s most important works were not published in his lifetime, and were simply not known to any of Kant’s teachers), but if he had done, he might have said (with Kripke) that we cannot just go and visit possible worlds to see what goes on there. They are just not there to be found, and even David Lewis (who took the view that possible worlds genuinely exist as independent realities, and not just ways of talking) would (I think) balk at supposing that possible worlds, like distant galaxies, could be viewed through something like a modal telescope.
This would make sense if Kant and Kripke (together with Lewis) were allies on this matter. What is eerie is that they are not. So, let us approach the matter from a slightly different direction.
Just suppose I were to inform you (as I would each year inform my ‘Metaphysics’ students) that, just to be one step ahead of the Joneses, I went for my last holiday to Alpha Centauri. The place is certainly exotic: a sky that is red-and-green-all-over, a geometry almost as strange as that of the centre of a black hole, and a truly unmemorable local cocktail, consisting of 7 cl. of pure alcohol and 5 cl. of pure water. The sad thing is that, when mixed, we do not get 12 cl. of diluted alcohol, but a bit less. Now, I say, in fighting mode, you just don’t believe me, do you? Well, I say in triumph: you have never been there! So, how do you know that my stories are untrue?
There is something to be said about this sort of challenge. Famously, the king of Siam, in John Locke’s day (the late 17th century), declared that the Dutch ambassador was obviously mad when the latter reported that in his country (the Netherlands), water in winter became so cold that it went hard and could be walked upon. (This trope was made famous with the 1951 musical, The King and I.) Locke’s point is that you cannot really know about foreign climes unless and until you actually go there (or have a reliable travel literature to consult). Still, there are some tales that even the King of Siam (a rather simple soul, one gathers) could have seen through, notably the one about the local cocktail. That 7 + 5 = 12 is a fact of arithmetic, and cannot be false anywhere. It is a universal and necessary fact, and therefore cannot be anything except an a priori certainty.
There is a catch, of course (there usually is in my lectures), and that is that the cocktail story is actually true – here on Earth! The point is that, when mixed, the water and alcohol molecules interlock, giving rise to around 11.95 cl. of mixture. But what does that prove? Does it show that actually 7 + 5 = 11.95, and that mathematicians have been getting it wrong all these years? Well, no. The result is certainly odd, but it is a phenomenon of chemistry, not arithmetic. What it shows is that it is empirically false to suppose (as we did) that adding x cl. of alcohol to y cl. of water yields x + y cl. of mixture. It is there that the anomaly is to be found. By contrast, the arithmetical fact remains true and unfalsifiable by empirical testing, even if the tests yield highly unexpected results.
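To separate the two claims explicitly: what the laboratory tests is not the arithmetic but the mixing hypothesis. Writing \(V(x, y)\) – my own ad hoc label, nothing more – for the measured volume of a mixture of \(x\) cl. of alcohol and \(y\) cl. of water, we have:

\[
7 + 5 = 12 \quad \text{(arithmetic: necessary, and knowable a priori)};
\]
\[
V(x, y) = x + y \quad \text{(empirical hypothesis: false, as it turns out)}.
\]

The observation that \(V(7, 5) \approx 11.95\) refutes the second claim, and leaves the first serenely untouched.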
The idea is that this strategy can be extended to cover all other examples of a priori knowledge. Such knowledge is, in a sense, inside us and projected onto the world of which we have such knowledge. What we observe is thus conditioned by the mind and the way in which we know things. Another example of non-trivial a priori knowledge is the principle of sufficient reason (mentioned earlier), or at least a watered-down version of it: that every observable alteration of a continuing object has a cause.
Experts will wince a little at some of what I have said here, implying, as it does, that Kant is some kind of conventionalist (he wasn’t), but I shall leave a more refined account for later. The general idea is right, however, and it looks plausible enough. So, what is Kripke on about?
Let us look at one of Kripke’s examples, namely the fact that water has a molecular structure consisting of two hydrogen atoms and one oxygen atom. Briefly, that water = H2O. This identity statement is plainly not a priori. We need to perform laboratory experiments on stuff to tell what the chemical composition of a given substance is. Is it therefore contingent (i.e., the opposite of necessary, a truth that just happens to be so and did not have to be)? Kripke famously says NO. There is no possible world in which we have water which is not H2O. If we have something other than H2O, then we simply do not have genuine water, however much it may appear to the contrary: only, what you might call ‘fool’s water’, an impostor substance.
We thus have a necessary empirical truth: and centuries of Kant-inspired orthodoxy collapse into ruins. Other examples of such truths include: gold has atomic number 79; the morning star is the evening star; this (ostensively located) table is made of wood; I am not a direct descendant of William the Conqueror; and so on.
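The engine behind such examples, stated baldly (and with all subtleties deferred), is the necessity of identity. Where \(a\) and \(b\) are what Kripke calls rigid designators – terms that pick out the same thing in every possible world in which they pick out anything at all – we have:

\[
a = b \;\rightarrow\; \Box\,(a = b).
\]

Discovering that the identity holds may take centuries of laboratory work; but once it holds, it holds of necessity.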
Is that a problem, you ask? Well, yes. With Kant, one might reasonably ask how such knowledge is possible. How can empirical observation reach over into other possible worlds? Come to think of it, how could our a priori knowledge extend that far? Whichever way you look at it, possible worlds become an epistemological nightmare.
1.2.5 Modal epistemology
A simple Google search reveals that the phrase that forms the title of this section has yet to make itself known, so we are presumably dealing with a genuinely new discipline (or, at least, subdiscipline of philosophy). Given the all-pervasive reach of modality, and the fundamental importance of epistemology, it is, perhaps, surprising that this new area of research has not made itself visible a bit sooner. Yet there are a few points that might go some way towards explaining (if not justifying) this lacuna. To see this, we need to change gears and say some more about another branch of philosophy that never really got going until the 20th century, but has been dominant ever since. This is the philosophy of language.
You may wonder (and many do) why contemporary philosophers (both in the English-speaking world and on the European continent) are so obsessed with words. What matters, you may insist, are the things that they refer to, or, at least, the ideas that words attempt to convey. It is our minds (and their contents) that we should be studying; and not their verbal cloaking that tends to conceal more than it reveals. Some talk of a Language of Thought (or LOT): however, it may be insisted, this is to miss the point. We may talk in Spanish (or Japanese), for example, but we do not think in any particular language. Such insistences may seem obviously right, and I shall largely agree. However, it is worth noting that nearly all professional philosophers disagree – and very strongly. But that is for later, and we need to start with an earlier historical epoch.
Let us begin gradually. Although Kant is essentially an 18th century thinker, there is another distinction that is generally regarded as being very linguistic which partners his a priori/empirical distinction, and that is that between what he calls the analytic and the synthetic. A judgment is said to be analytic just in case its predicate is contained within its subject (and synthetic just in case it is not). And what is that meant to mean, you ask? Well, to begin with, it is usually nowadays thought, indeed, to point to a semantic contrast. Analytic statements are those that are true by definition. Kant’s own example is ‘All bodies are extended’. By a body, he means any physical object, whether it be a small particle, a planet or anything in between. By extended, he simply means occupies space. This seems reasonable. For example, at my former place of employment, they would often advertise the Physics department, by claiming that they had managed to create a brand-new universe in a test tube. No doubt, they had in some sense or other. However, if they (the University’s external publicity people) had informed anyone looking at the banners flying along the entrance road that they had managed to discover an amazing new elementary particle that occupied no space whatsoever, we might have smelt a rat. A non-spatial particle is simply a contradiction in terms. We can all see that; and we do not need very expensive laboratory equipment before we can be quite sure about it.
Here is another example, due to the 20th century philosopher, Bertrand Russell (1872–1970): all bachelors are unmarried. Now, if, in keeping with a similar obsession with self-publicity, the University were to advertise the Sociology department by announcing their ground-breaking conclusion vis-à-vis marriage, to the effect that all (yes, absolutely all) bachelors have turned out to be unmarried and that no amount of searching for married bachelors will ever succeed, the response from the public at large would also be somewhat tepid. Given that the word ‘bachelor’ just means ‘man who has never married’, this analytic conclusion is rather less than breathtaking.
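For readers who find code congenial, the emptiness of the ‘discovery’ can be made vivid in a few lines. This is a toy sketch of my own (the names and the little population are invented, and nothing here belongs to Kant or Russell): ‘bachelor’ is defined as a man who has never married, so the grand sociological generalisation holds however the data happen to vary.

```python
# A minimal sketch of analyticity as truth-by-definition (illustrative only).
from dataclasses import dataclass

@dataclass
class Man:
    name: str
    married: bool

def is_bachelor(m: Man) -> bool:
    # True by definition, not by survey: a bachelor is an unmarried man.
    return not m.married

population = [Man("Tom", True), Man("Dick", False), Man("Harry", False)]

# 'All bachelors are unmarried' cannot fail, whatever goes into the list.
assert all(not m.married for m in population if is_bachelor(m))
```

No empirical input could falsify the assertion; that is analyticity in miniature.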
So far, so good. If we want decent a priori knowledge, we had better focus solely on the synthetic candidates, and let the analytic ones fade decently out of sight. Kant’s own mission statement for his magnum opus, the Critique of Pure Reason (1781), is now to answer the question: how is synthetic a priori knowledge possible? And we can see why. Only when the syntheticity is made explicit can his answer to Leibniz be considered on-message.
But exactly how do we define analyticity? This is often regarded, even by Kant scholars, as something of a side issue, but the general idea is plain enough. By the subject, we mean the person (or whatever) that a given statement is about; by the predicate, we mean what is being said about this subject. Grammarians have used this distinction for centuries, and have even introduced a third item, known suggestively as the copula, whose function is simply to link subject to predicate. (Do not now ask what links the subject to the copula, or the copula to the predicate, or you will incur my extreme displeasure: I shall deal with that problem later.)
Simple? Well, actually no. We still have to explain what is meant by saying that the predicate is contained within the subject. Words do not contain other words, save in a quite irrelevant sense. And if I say (as Leibniz once did, when trying to define truth of any kind, including contingent truth) that to say that the tree is green is to say that the predicate (greenness) is in the subject (the tree), then I am clearly asking for trouble. That the tree is green is as clear an example of a synthetic, boringly empirical truth as you are likely to get. Trees are not green by definition; a non-green tree is not a contradiction in terms (they often go orange in autumn); so the predicate is not contained in the subject of ‘The tree is green’ in the relevant sense. So, what is the relevant sense, then? The answer is by no means clear. That is one problem.
Another problem concerns the sorts of thing that can be true or false, analytic or synthetic. I have used the term ‘statement’, but neither Leibniz nor Kant does. Leibniz talks of necessary or contingent truths; Kant talks of analytic or synthetic judgments, and you might think that this is just a detail; but it isn’t. The term ‘judgment’ is seriously ambiguous, in that it could refer to the actual psychological act of judging; or to the content of the judgment, what is actually being judged to be the case.
1.2.6 The same subject continued
The difference is massive, and the philosopher with whom we most associate making a sharp distinction here is Gottlob Frege (1848–1925). Another German, he revolutionized logic and the philosophy of mathematics, declaring that Kant was just wrong in thinking that ‘7 + 5 = 12’ was synthetic a priori. Instead, arithmetic (said Frege) was just a branch of advanced logic and therefore analytic. But he insisted that it was Thoughts that were analytic or synthetic, and a Thought (with a capital T) was not something psychological at all, but (like a number itself) an abstract entity, neither mental nor physical. Phew, we say. But, so what?
Well, the problem, again, lies with Kant. What did he mean, in this context, by a judgment? A Thought; or a psychological act of judging? The problem is that we can argue the toss either way. The prevailing view (I think) is that Kant agreed broadly with Frege, or at any rate, that nothing much hinges on the precise definition of analyticity; but I shall argue the reverse.
As the cognoscenti know, Kant famously (or infamously) thought that the world was made, or half-made, by the knowing mind; and this doctrine of transcendental idealism (as it is called) is vital to his overall philosophy (which encompasses pretty well the whole discipline). The concepts of (the faculty of) judgment itself (Urteilskraft), and of (an) individual judgment (Urteil), are crucial, and the ambiguities here are insufficiently explored, even by experts. Or so I say.
To my dying shame, I allowed my module (‘Idealism, Empiricism and Criticism’) on 18th century philosophy to be taught, not as a final year option, but as a 2nd year option (available also to 3rd years) in the Michaelmas term (as we call it), i.e., October to December; and (thus) a mere month or so after the students had finished their first-year exams. This, despite my clear recollection (from one of my own tutors) that his own tutor had originally insisted, ex cathedra, that nobody (repeat, nobody) should be allowed to read the Critique of Pure Reason (even in translation) before they reached the age of 21. Americans call such student guinea-pigs sophomores; and my sophomores were made up mostly of people well under the age of 21. I cringe and screw up my eyes in response to these damning facts; but, amazingly, not a lot seems to happen to the world outside as a consequence. Oh, dear!
The distinction between what is inside me and what is outside me, as understood by philosophers, is not quite what it seems to the ordinary speaker of the English language. Ordinarily, we would say, for example, that my heart is inside me (located on the left-hand side of my upper body, a foot or so below my head) and that my laptop is outside of me, a foot or so in front of my head. It is just a matter of physical geography.
However, as far as philosophers are concerned, both my heart and my laptop have a similar ontological status (‘ontological’ just means ‘pertaining to existence’): they are physical objects that inhabit space and time. By contrast, whereas the famous tree in the quad (short for quadrangle, by the way) is a part of what they (the philosophers) call the external world (i.e., outside me), my/our perceptions and ensuing thoughts about the tree are part of what they call the mind (which, with a few exceptions, is thought of as something inside me). The mind–world distinction (related to what some of Kant’s successors call the ‘subject–object distinction’) is something this author will have a lot more to say about in the following pages (books are also part of the external world, but the things and people that they are about have to be internal to us – well, where else can they go?).
By the way, by ‘some of Kant’s successors’, I mean, in particular, an infamous personage that philosophers from the English-speaking world dare not even name, lest Satan himself be summoned. His name sounds like /hey-girl/, as it happens: at least, it does to speakers east of the Atlantic, where the letter R is less often sounded, and we shall talk more about him and the subject–object distinction soon enough.
By contrast, a 20th century British philosopher (and D-Day intelligence officer, as it happens), whose recent biography has attracted a good deal of attention, is the very opposite of an obscure German philosopher, especially the afore-mentioned author of the Phenomenology of Spirit (1807) – it is his book, and not his self, that is obscure, by the way. J.L. Austin (1911–1960) wrote little himself (a fact which troubled him deeply), but his lecture notes on human sensory perception were published posthumously under the title, Sense and Sensibilia (with the literary allusion to Jane Austen made all too clear: you can see what sort of a sense of humour he had). It is here that the importance of paying keen attention to how we actually use words in ordinary English was emphasized, and the influence on 20th century Anglophone philosophy has been incalculable.
Relevance? Well, let us return to the tree in the quad. I mention, in passing, that, when doing philosophy, trees are, indeed, usually to be found in a quad; unless, that is, you hail from the unfashionable end of the Oxbridge duplex, in which case you will usually find them instead in a court. Another ordinary-language philosopher, originally from Vienna but who ended up in this unfashionable end of England, was one Ludwig Wittgenstein (1889–1951) – another Austrian Jew who did not get on with the local politicians of the time, rather like his near-contemporary, Kurt Gödel (1906–1978) (remember him?). His name, by the way, is pronounced /loodvig vitt-gun-shtine/, with the emphasis on the first syllable in each word, and with the third syllable of the surname rhyming with ‘shine’. Austin and Wittgenstein did not get on very well with each other personally, though they did know each other professionally, and their supporters typically deny any intellectual resemblance between their philosophical positions: which is a bit silly, really, as there is an obvious overlap – if also a difference in attitude.
What is often known as ‘Oxford ordinary language philosophy’ is basically Austin’s philosophy. Wittgenstein’s teacher and general mentor (whom he was later to disown) was one Bertrand Russell, whom we have already mentioned, an English aristocrat based for most of his professional career at Cambridge, and who was probably more instrumental than anyone in shaping 20th century Anglophone philosophy. Both Russell and Wittgenstein were heavily influenced by (and indebted to) Frege (pronounced /fray-guh/), who himself was a professor of mathematics at the University of Jena (pronounced /yay-nuh/), the same university where, a few decades earlier, /hey-girl/ had been professor of philosophy. The fair city of Jena (in the now eastern German province of Thuringia) was also the place where Napoleon defeated the Prussians in 1806, a couple of years after Kant’s death.
I mention all this in passing. If you are still concerned about strict relevance, I merely reply that the author has considerable control over the direction in which his text develops, and therefore can make things as relevant as he feels. Such are writers; and the phrase ‘the death of the author’ can be somewhat misunderstood. If you, dear reader, feel trapped in a text (in some weird French sense), that is probably because you are. Having said that, it must be admitted, the image of an irate reviewer sandwiched within the ends of a giant book may be a little misleading.
It is still entertaining from the reviewee’s point of view, however.
1.2.7 An identity crisis
A famous distinction made by Frege was between Sinn and Bedeutung (/zinn/ and /bedoy-tung/), usually translated as sense and reference. The reference of a name is the entity to which it refers; by contrast, the sense of a name is the manner in which its reference presents itself to the language-user’s mind. Now, this may sound as clear as mud, but an example or two will make everything blindingly obvious.
Thus, consider the following identities:
(1) Hesperus = Phosphorus
(2) Hesperus = Hesperus
The names ‘Hesperus’ and ‘Phosphorus’ are just shorthand for ‘the evening star’ and ‘the morning star’, respectively, by the way. They are, in fact, two names for the same celestial body, the planet Venus – a fact first discovered (most likely) by some observant Babylonian a very long time ago. They are not obviously identical, however, inasmuch as the relevant observations have to take place about six months apart. Such are the celestial motions (about which the Babylonians knew little, and which cannot be deduced merely by looking uncritically at the stars).
By the way, for many years, I was under the impression that the morning star was Mars (and the evening star Venus), and would have vociferously denied (1) had the claim been brought to my attention, so the fact embodied in this identity-statement can hardly be dismissed as trivial.
The same cannot be said for (2), which is just an instance of a well-known logical law, the Law of Identity (everything is identical to itself and not to another thing). A slightly different example, due to Russell, is
(3) Scott = the author of Waverley
King George IV, notoriously, was unsure of this novel’s authorship, and Sir Walter Scott was a rather peculiar, private sort of person. He (the then king) was not, however, unsure of the truth of
(4) Scott = Scott
As Russell drily observed, an interest in the Law of Identity can hardly be attributed to the First Gentleman of Europe. So (3) and (4) cannot mean the same thing, any more than can (1) and (2).
The point is that (1) and (3) have considerable informative content, in a way in which (2) and (4) do not; and yet when we look at what these statements state, it is hard to explain why. Frege’s answer is that there is more to a name than simply its bearer. It has, and needs, a sense as well as a reference. Once this is recognized, the mysteries evaporate.
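For those who like their philosophy executable, here is a minimal sketch of the Fregean picture, caricaturing a sense as a descriptive label attached to a name. The little table and its entries are purely illustrative (they are my own, not Frege’s machinery):

```python
# A toy Fregean semantics: each name has a sense (its mode of presentation,
# caricatured as a descriptive label) and a reference (its bearer).
VENUS = "the planet Venus"          # one object...

names = {
    "Hesperus":   {"sense": "the evening star", "reference": VENUS},
    "Phosphorus": {"sense": "the morning star", "reference": VENUS},
}

def true_identity(a: str, b: str) -> bool:
    # (1) and (2) come out true for the same reason: co-reference.
    return names[a]["reference"] == names[b]["reference"]

def informative(a: str, b: str) -> bool:
    # But only (1) is informative: the senses differ, so a discovery is needed.
    return names[a]["sense"] != names[b]["sense"]

assert true_identity("Hesperus", "Phosphorus")    # (1): true
assert true_identity("Hesperus", "Hesperus")      # (2): true
assert informative("Hesperus", "Phosphorus")      # (1): informative
assert not informative("Hesperus", "Hesperus")    # (2): trivial
```

The point of the caricature is only this: truth is settled at the level of reference, informativeness at the level of sense.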
But what, in general, is the sense of a name? I have talked of descriptive content, and it is very tempting to suppose that the content of (1) is simply that one object has two different descriptive features attached to it. The informativeness comes from the difference between the two features, and this is the point. We have only one celestial body, to be sure, concerning which self-identity can uninterestingly be attributed. We have, however, two features, and their connection may well turn out to be highly non-trivial, if not very interesting indeed. We talk sometimes, in this context, of the disguised description theory of proper names. Yet, it is the object, not the feature, that we are concerned with when we try to explain where the informativeness comes from.
Kripke strongly (and famously) rejected this theory: and largely in the name of common sense, it should be added. Why? Well, to see why we might have a problem with disguised descriptions, we must return to Kripke’s own passion, an area of formal logic that we were discussing earlier, namely modal logic.
Note that many identity claims, such as (1) and (3), are not just informative (in the way in which advanced logic, for example, can be informative) but are also empirical. They required observation to be discovered to be true. Are they therefore contingent (i.e., not necessary)? For many years, philosophers naturally supposed so; hardly anyone does nowadays, however, and we need to see why. The point that Kripke makes is that names are what he calls rigid designators, whereas definite descriptions typically are not. What does this mean?
A designator is said to be rigid if, and only if, it designates the same entity in every possible world (in which it designates anything at all). This sounds fine, and can be made to sound very commonsensical, but we should beware. The dreaded word ‘same’ appears in the definition, and we do not want to have to understand what it is for X and Y to be identical this early in the investigation. Still, we need to move ahead, so let us let Kripke proceed a little further before we trip him up.
Why is the proper name, ‘Aristotle’, a rigid designator, and yet the definite description, ‘the teacher of Alexander the Great’, not? The general idea is that someone else might have been the teacher of Alexander the Great, but it is just false to say that someone else might have been Aristotle. When we inspect our nearby possible worlds (the faraway ones need only interest us when we have our pure mathematicians’ hats on), we notice that Aristotle is there, all right, but does not always have all the properties he has in this world. He might, after all, have been a chariot-racer as opposed to a philosopher, had he so chosen. Yet he is still Aristotle. With this in mind, I suggest that we consider a different example, one of my own, and consider the letter ‘C’.
Now, it is a fact – recently made public – that the Chief of MI6 (the British Secret Intelligence Service), whose real identity may be unknown even to most of his own staff, is still known as ‘C’.
(You might naturally suppose that ‘C’ simply stands for ‘Chief’, but you would be quite wrong, as it happens. But, no matter.) Now, consider the following claim; and ask whether it is true or not:
(5) Someone else might have been C
Well, if ‘C’ is thought to be a descriptive title, the answer is surely ‘yes’: the man in question was not guaranteed the job from the very beginnings of time. However, if we think of ‘C’ as a name – a genuine one-lettered proper name – then the answer is surely ‘no’. The alternative is too baffling to be understood, let alone believed.
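The contrast between names and descriptions can itself be caricatured in a few lines of code, for readers who find such things congenial. In this minimal sketch, possible worlds become dictionaries, designators become functions from worlds to entities, and rigidity becomes a one-line test; the nearby world, and its alternative teacher, are my own inventions, not Kripke’s formal apparatus:

```python
# Possible worlds caricatured as dictionaries; a designator is a function
# from worlds to entities (all entries illustrative only).
actual = {"teacher_of_alexander": "Aristotle"}
nearby = {"teacher_of_alexander": "Xenocrates"}  # here Aristotle races chariots
worlds = [actual, nearby]

def aristotle(world: dict) -> str:
    # A proper name: it picks out the same man in every world.
    return "Aristotle"

def teacher_of_alexander(world: dict) -> str:
    # A definite description: it picks out whoever fits the description there.
    return world["teacher_of_alexander"]

def is_rigid(designator) -> bool:
    # Rigid iff it designates one and the same entity in every world.
    return len({designator(w) for w in worlds}) == 1

assert is_rigid(aristotle)                  # 'Aristotle' is rigid
assert not is_rigid(teacher_of_alexander)   # the description is not
```

(A fuller sketch would also have to handle worlds in which Aristotle does not exist; the bracketed clause in the official definition lets us set that aside.)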
The upshot is that identity-claims, where it is genuine proper names which flank the identity sign, are either necessary truths or else necessary falsehoods. There can be no contingent identity, and likewise, no contingent distinctness.
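For the formally minded, the reasoning behind this upshot can be compressed into three lines. This is the standard argument for the necessity of identity (it goes back to Ruth Barcan Marcus, and Kripke rehearses a version of it in ‘Identity and Necessity’); the labelling (i)–(iii) is mine, so as not to clash with our numbered identities, and the box is read as ‘necessarily’:

\[
\begin{aligned}
&\text{(i)} \quad \forall x\,\Box(x = x) &&\text{necessity of self-identity}\\
&\text{(ii)} \quad \forall x\,\forall y\,\bigl(x = y \rightarrow (\Box(x = x) \rightarrow \Box(x = y))\bigr) &&\text{Leibniz's Law}\\
&\text{(iii)} \quad \forall x\,\forall y\,\bigl(x = y \rightarrow \Box(x = y)\bigr) &&\text{from (i) and (ii)}
\end{aligned}
\]

Plug in ‘Hesperus’ and ‘Phosphorus’ (rigid designators both, if Kripke is right), and the mere truth of (1) delivers its necessity.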
1.2.8 Some more identity crises
Kripke’s theory is complex in detail, and he admits disarmingly that he does not know how to handle Frege’s puzzle about Hesperus and Phosphorus. But many feel that Kripke is fundamentally right and that Frege was wrong. Indeed, many go further still, and prefer to Kripke an even more radical thinker, one more often thought about when we do moral and political philosophy (the touchy-feely stuff, you may recall), namely John Stuart Mill (1806–1873), a godfather to Russell (as it happens).
Mill does not talk of ‘sense’ and ‘reference’, but he does talk of ‘connotation’ and ‘denotation’, which are very similar. His claim is that proper names have denotation but not connotation, and he gives the example of ‘Dartmouth’, which is the name of a town in southern England. Now, the name literally suggests that its bearer lies at the mouth of the River Dart: and, so it does. This is no coincidence. But – and this is the point – should the river move in time (through soil erosion, perhaps) so that Dartmouth no longer lies at the mouth of the Dart, this is not a problem. We are not logically forced to change the town’s name. The reference is direct, and unmediated by description. Understandably, we tend to call Mill’s theory (and its close relatives) the direct reference theory.
An intuitively attractive way of explaining the difference between descriptive and direct reference theories is to say that, for the former, reference is akin to casting a net. The mesh consists of various descriptive conditions (in practice, a name would have to be associated with a cluster of descriptions of varying degrees of importance), and the idea is to trawl the net through the relevant universe (known, in logic, as the domain) and see what we catch. Should we end up with precisely one entity flapping around in the net, then we have successfully made a reference. Should we end up with none (or more than one), then reference has failed.
Direct reference theories, by contrast, think of the act of referring as being akin to throwing a harpoon. You just aim directly at your target. You might have a problem if your target is over the horizon, and you are using a heat-seeking missile instead of something more suitable for Moby Dick, but let us leave that conundrum for the moment.
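For those who like their metaphors executable, here is a minimal sketch of the two pictures; the domain, the predicates and the lookup table are all invented for illustration, and neither theory’s official machinery is being reproduced:

```python
# The 'net': descriptive reference trawls a domain with a cluster of
# conditions and succeeds only if exactly one thing is caught.
# The 'harpoon': direct reference is a straight lookup.
domain = ["Venus", "Mars", "Sirius"]

def shines_in_evening(x): return x == "Venus"
def is_a_planet(x):       return x in ("Venus", "Mars")

def refer_by_description(predicates):
    caught = [x for x in domain if all(p(x) for p in predicates)]
    return caught[0] if len(caught) == 1 else None  # None marks failure

def refer_directly(name, bearers):
    return bearers.get(name)                        # aim straight at the target

assert refer_by_description([shines_in_evening, is_a_planet]) == "Venus"
assert refer_by_description([is_a_planet]) is None  # two fish in the net: failure
assert refer_directly("Hesperus", {"Hesperus": "Venus"}) == "Venus"
```

The None returned on an empty (or overfull) net is the formal residue of a failed reference; the harpoon, by contrast, either has a target or it does not.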
But what about informative identity, I hear you ask? Well, Mill was not a particularly hard-working logician, and he might not unreasonably have just forgotten to address the issue; but, oddly enough, he did not. He gives the following example:
(6) Cicero = Tully
This identity is true, by the way: the full name of the Roman lawyer and statesman in question was (and is) ‘Marcus Tullius Cicero’, something that you may not have known at all, let alone a priori. There is clearly informative content here, something which is not to be found in
(7) Cicero = Cicero
So, what is the informative content of (6)? Mill’s answer – a very understandable one – is that it is simply the fact that the two names, ‘Cicero’ and ‘Tully’, have the same bearer. This fact is non-trivial, and so can hardly be known a priori. This account is sometimes known as the metalinguistic theory of proper names, by the way.
It seems obvious, indeed. My problem with it, however, is that it is extraordinary (a) to suppose that (1) and (6) are going to require radically different analyses; and (b) to suppose that what the Babylonians discovered when they discovered the truth of (1) is that two names in a language that did not yet exist have the same reference.
This objection is surely decisive; it is so obviously right. (1) is a fact about astronomy; and (6) is a fact about ancient Rome. They would both still be true even if the Babylonian, Roman and English languages had never existed at all, and we had all communicated directly with each other telepathically.
Indeed, language clearly has nothing to do with our thought at all! It is just a convenient, though often troublesome vehicle of thought: a more-or-less clean window pane through which we attempt to view and understand our ideas, as one of the great masters of the English language, George Orwell (1903–1950), once put it. There are some more paradoxes here, of course, not least of all why a man with a false name (he was really called ‘Eric Blair’), from an upper middle-class background but with working class sympathies, should be thought of as the great champion of Objective Truth, notably the fact that 2 + 2 = 4, a truth that remains true regardless of what The Party successfully forces everyone to believe.
‘Oh, what is he on about now?’, I hear you moan. Well, let me digress a little. You will recall that Orwell’s hero, Winston Smith, is a credible Everyman; and the super-villain, O’Brien, about as scary as you can get. The heroine, and Winston’s love interest, Julia, is a more complex character, but potentially very lovable even though she was the proximate cause of Winston’s psychological disintegration (O’Brien and the sewer rats having had only walk-on parts). You really need to read the book (it is called ‘1984’, by the way, though it was published in 1949) if you do not immediately get my meaning.
And why is this relevant, I hear you all ask, and with some asperity (I dare say)? Well, fictional names have interesting problems of their own attached to them that lead to a number of weird and wonderful possibilities whose existence may not have been obvious before they were first observed. To throw a medium-sized rock into a rather complacently still pool of water, for example, consider the following identity thesis:
(8) Julia = Fidgety Flora
Now, what idea is being conveyed here? The way to find out is simply to let the idea percolate gently through the system (the waves will eventually settle down and will not propagate to infinity in a world full of friction), and not jump about too much. Leave that sort of thing to the Californians.
Still, weekends need to be tranquil, so I shall consider instead another example: specifically, the name ‘Vulcan’. This was given to the intra-Mercurian planet that astronomers confidently felt must exist, that being the most conservative – and therefore most convincing – way of explaining some tiny observed discrepancies in the precession of the perihelion of Mercury. Well, as philosophers of science all know now: Vulcan shows no signs of being there to be discovered; the General Theory of Relativity has superseded Newtonian gravitational theory; and ‘Vulcan’ is an unlikely name for even a small spatial part of the Universe’s mass-energy tensor (what I am getting at here is that the word either denotes a planet or else nothing at all). To put it another – and considerably briefer – way, Vulcan just does not exist.
Now, one thing that we can be certain of is that a planet cannot have an orbit lying inside that of Mercury and also an orbit lying vastly beyond that of Neptune – the outermost known planet, whose existence and basic character were successfully predicted from observing some perturbations in the orbit of the newly observed Uranus: hurrah for Sir Isaac Newton (1642–1727)! So, if there is such a remote planet (too far away to be easily observed), call it ‘Bacchus’. (It cannot be Vulcan.) In short:
(9) Bacchus = Vulcan
is something we can all agree to be false. We thus have, in our ontology, two non-existent planets, an improvement on zero (as, I am sure, we can all agree).
(Incidentally, ontology is simply the study of existence and of what sorts of thing exist. It is a branch of metaphysics but not identical with it, as the latter also concerns the nature of what exists. ‘There are more things in Heaven and Earth than exist in your ontology, Horatio’ is what the Bard should have said – had he wished to do justice to the finer points of the English language.)
So, what is the information-content of (9)? It looks suspiciously as though it just says that 0 = 0, since nothing appears on both sides of the identity sign. However, it would then have to be true, not false – an embarrassment. True and false? Well, some logicians (called dialetheists, by the way) allow for such a thing, but that may seem a bit like overkill in the quirkiness department. At any rate, it seems as though non-existent entities are a breeding ground for disorderly elements, as one rather parsimonious recent philosopher once put it (more on him, later). Something must be done.
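To see the disorder in miniature, consider a naive referential semantics sketched in code, with reference failure marked by None; the bearer table is invented for illustration, and nothing here is anyone’s official theory. On this treatment, (9) receives no truth-value at all, while common sense insists that it is plainly false:

```python
# Empty names break a naive referential semantics: with no referent on one
# side, no truth-value can even be computed for an identity claim.
from typing import Dict, Optional

bearers: Dict[str, Optional[str]] = {
    "Neptune": "the eighth planet",
    "Vulcan":  None,   # nothing intra-Mercurian turned out to be there
    "Bacchus": None,   # our hypothetical trans-Neptunian planet
}

def identity_claim(a: str, b: str) -> Optional[bool]:
    ra, rb = bearers[a], bearers[b]
    if ra is None or rb is None:
        return None    # a referent is missing: no truth-value computable
    return ra == rb

assert identity_claim("Bacchus", "Vulcan") is None   # yet (9) seems plainly false!
```

The None percolates upwards, which is precisely the trouble: a semantics that cannot even evaluate (9) cannot explain our confidence in its falsehood.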
But it gets worse. Suppose we consider the following astronomical hypothesis:
(10) Fidgety Flora = Vulcan
Just what idea is being conveyed here? That the fidgety one is as fat as a small planet? That she orbits the sun very fast and probably gets very hot on one side when she does so? That that is what I very, very much want to happen to the girl-of-my-nightmares? A lot is left to the reader’s imagination and interpretation, but not even the most ardent semiotician (look it up) would treat (10) as an open text, as it is called – it is far too concise to be a text of any description. So, evidently something is going wrong again. Something, to repeat, must be done.
And make it snappy, as they say in California (though not at weekends, it is to be hoped).
Well, what’s to do? We got into this mess by looking at some key theses in what professional philosophers call the theory of reference, an important branch of the philosophy of language. The problem is that we expect language to ‘reach out’ to a world that exists beyond what we can directly and immediately observe – lest we be trapped inside our own consciousness, able to refer properly only to what we are directly acquainted with (to borrow a technical conception of acquaintance that we associate with Russell). We need to refer not only to two different dots in the twilight sky but to a single referent (as it is called), a ferociously hot oblate spheroid of rock, the planet Venus, no less. Even though we have no direct reason to think that the two dots are really the same, in some semantically distant sense they are so, and need to be seen to be so – if our minds are to have closure on the far side.
A Russellian world consisting only of our immediate acquaintances would lack this closure, and could have no guard against back-spillage into the unknowable beyond. A primitive mind, looking out to sea in a world that appears to be both finite and flat, also has this problem, but fortunately lacks the insight to realize this.
No, language must enable us to traverse our immediate limits, and the philosophy of language gets going when we see how difficult that is to achieve. Yet words fail us, and in more than one sense. An inarticulate soul can still think and feel regardless of whether she can put any of her thoughts and feelings into elegant language. A scream of anguish is still that, and not just a weird, high-pitched sound. If trapped in a foreign land whose language is impenetrable to me, and whose monoglot inhabitants cannot fathom anything I attempt to say, I might seem to them to be barely human (or, at least, severely brain-damaged). But my inner nature still exists, even if others do not know it: indeed, even if I myself (through a lack of education) do not know it.
1.2.9 Cigar manufacture and a pyjama drama
Yet, it is not just language that needs reform. It is not just the words that I utter and write that cause problems, but the thoughts that use them. Thus, suppose I ask the question: is Hesperus identical to Phosphorus? What I have is a piece of language, to be sure, but also a piece of my mind (albeit, not in the standard colloquial sense). If my language extends out of the realm of pure consciousness, then so do the thoughts that it expresses. The technical buzz-phrase that philosophers use here is content-externalism. It is associated with Kripke, but even more so with another, equally influential American philosopher, Hilary Putnam (1926–2016). His famous conclusion, that meanings ‘just ain’t in the head’, is partly justified through Kripkean semantic reasoning, but also through a recognition that, when it comes to technical vocabulary, there is what he calls a ‘division of linguistic labour’. Expert knowledge is distributed among many experts, and no one single individual is expected to have the semantic know-how to handle everything.
What Putnam did not fully realize himself at the time is that content-externalism is not just a linguistic phenomenon. Given the parallels between semantic and mental content, an externalist will have to add that thoughts just ain’t in the head either. The mind is not internal to consciousness, if that makes any sense. Since then, the suggestion has grown even more, and we talk of the extended mind, the idea that mental processes themselves can take place outside the body altogether. People’s increasing dependence on their smart-phones to remember everything that they need to remember makes this idea look increasingly obvious, and I shall have more to say on the allegedly unstoppable advance of artificial intelligence later.
The mind-body problem is fundamental to philosophy, and I have so far said little about it, except to ask whether my readers think it possible that there be life after bodily death. This lacuna will soon be repaired. In the meantime, I simply note that the mind (or the soul or the self, if you prefer) looks increasingly unlikely to be the brain, an organ of the body that (it is to be hoped) will remain firmly concealed within the skull. This, by the way, is in flat contrast to the orthodoxy that emerged in the 1960s with the advance of what was sometimes known as Australian materialism, namely the view that mental states and events were literally identical with events and states of the brain (and central nervous system). Note the word ‘identical’, again: it keeps cropping up everywhere in philosophy, does it not?
Now, we all talk of the brain when we mean intellect, and a fictional Belgian detective of some renown habitually refers approvingly to his ‘little grey cells’ (he is also a good Catholic, he informs us, and clearly does believe in life after death, as his last case tells us; best not to ask how to square these beliefs). But we are not talking about an extended brain, or a merging of brains, which is just as well, as the idea sounds rather lurid.
I have also said teasingly little about telepathy, a phenomenon that most responsible thinkers regard as utterly bogus. A direct transference of thought between minds seems to require brain-to-brain connections that are surely contrary to the laws of physics (never mind biology). Unless there is something to this talk of non-locality and quantum entanglement, after all – a possibility that a wise scientist will not want to reject out of hand: we just don’t know enough to be sure that our spatiotemporal intuitions, although honed over many millennia, are actually true.
However, one of the aims of this book (as the blurb on the cover reveals) is to provide a prolegomenon for a new ology, a science of ideas as such, as distinct from their linguistic cloaking. This would mean training my readers in how to become telepaths, able to communicate their thoughts and feelings directly – and without the need for clumsy brain-implants (though there might be a use for the latter). I am being quite serious here.
The literary model (and proto-instruction manual) I have in mind is the fantasy novel, The Chrysalids, by John Wyndham (1903–1969), regarded by cognoscenti as possibly his finest book, despite not being a dystopia (unlike most of his other works). If it is impossible to turn a member of the species Homo sapiens into a chrysalid, it is not clear why. Evolution happens, after all, and the usually agonizingly slow process can be accelerated dramatically if we have the technology to alter ourselves according to an evolving plan – and I think that we do, or soon will. Moreover, Wyndham’s prose is lucid, and hidden contradictions in the story seem to be well hidden. (Admittedly, as with tales of time travel, that might just be a sign of good authorship, rather than a reliable sign of a freedom from contradiction.)
I dare say that this news will be greeted with rather tepid enthusiasm by some of my more conservative and down-to-earth colleagues, those who have managed to read this far without throwing the book out of the window in disgust, that is to say. Well, so be it. In the meantime, here is an answer to an objection to my prose-style that I telepathically know you, dear reader, to possess.
The objection is that I tend to introduce a topic, perhaps make a few amusing remarks about it, but then move on (like a butterfly) to another topic, and so forth. A rolling stone gathers no moss, as we all learnt as children; and a convincing conclusion requires a sustained and highly disciplined argument that this work fails to provide (runs the objection). At least, it has not started to do so yet, despite having exceeded the 25,000-word mark.
Amen. My reply is simple, and is that this part of the book is called ‘The Overture’, not ‘The Introduction’, and we need to remember what overtures are for. They are there to introduce musical themes that are later developed, so as to create an early familiarity. Light operettas, such as those written by Gilbert and Sullivan, make extensive use of them, and they ensure that even the musically inexperienced can recognize and hum the central melodies, in the street as well as in the auditorium.
At a more dignified level, we have, for example, Bizet’s Carmen, a serious work (though not quite on the level of Wagner’s Ring Cycle and – in particular – his depiction of the Valkyries about whom we have already said quite a bit, it may be recalled.) The passion of a Valkyrie, to continue the point beyond the close of parenthesis, is not easily expressed in simple prose; but music is evocative in a way that intellectual debate is not, and the thalamus needs as much tender loving care as does the cortex – and can sometimes be more easily satisfied.
But to return to Carmen. As most of my readers will know, the plot concerns the antics of the eponymous anti-heroine, a very young Spanish lady of questionable virtue who helps to manufacture cylindrical and highly carcinogenic items that gentlemen of distinction insert into their mouths and set fire to, breathing the noxious fumes into their own lungs and those of others (and for no obvious reason). Among her many friends is a bull-fighter, which a recent British sitcom character described as being essentially a hitman for Fray Bentos (a meat-selling company from Latin America, one which sells much tinned steak-and-kidney pie in the United Kingdom).
This is quite an amusing idea, of course, though somewhat at odds with the steely determination with which he (the toreador, as the opera has it; his native Spanish prefers torero) engages the bull in mortal combat. There are a few other strange ideas as well that would not have been allowed any theatrical publicity by England’s Lord Chamberlain (a post from a bygone day), had he had adequate control over the proceedings. However, the presence of music changes all that. We are excited by the opening theme, instantly memorable; and do not mind the suddenness with which it morphs into the extremely famous Toreador melody, which in turn morphs into other exotic leitmotifs. Nor are we particularly annoyed by the Overture’s abrupt and inconclusive ending. It just indicates that there is more to come, when the curtain finally goes up to reveal the dramatis personae on stage ready to play their parts to perfection.
But Bizet couldn’t have done it without his overture. Now, I know that classical music is widely regarded as something that only old fogeys like myself take seriously, and that younger readers and listeners require a more reliable rhythmic structure. Indeed, the passion now seems to be for what is called soft rock (a contradiction in terms, I should have thought). Well, so be it. My own favourite of the genre, the piece that I wish you to think about whenever telepathy starts to take place (to your simultaneous delight and consternation), is by an arty English band first formed around 1970. Being a sort of electronic rock band, its name could be ‘e-rock music’, and it almost is. But my suspicion of proper names continues. The song title is more easily conveyed, since it sounds rather like a pyjama drama, appropriate given the bedtime fantasies with which so much modern music seems obsessed. The opening throbbing sound, with its inconclusive descending bass-line, is particularly apposite, especially if you listen to the live version. The album cover (of the record where the live version appears as the second track from Side 1 – it has the Latin name that means ‘alive’) portrays, alongside the lead singer and band’s mastermind, a sultry brunette who could be called ‘Carmen’, for all I know. At any rate, she makes a welcome contrast to the rather vapid blonde horror that has used up so much of my limited emotional energy – and only yesterday. But enough of her.
The point of this section is to introduce the idea of telepathic communication to a sceptical readership. Computers can communicate directly with each other via the internet and wireless Bluetooth signals, and such communication is far swifter and more reliable than the old methods. Quantum computing will develop this even more effectively. Perhaps minds – and even brains – could do so in the future? A USB port on the side of the neck could soon become a genetically engineered feature of newly born children (or, possibly – like female breasts or male facial hair – of adolescents). At any rate, there does not seem to be any a priori argument that conclusively refutes the suggestion: so, my more critical philosopher friends should exercise some restraint before attempting to consign my thoughts to oblivion.
To proceed further will (eventually) require collectively gathered results, and not just the insights that arise out of individual-led research, however estimable the individuals might be. Now, the philosopher that we most associate with this collective ideal of progress is, surprisingly enough, none other than J.L. Austin, first introduced in 1.2.6, who (after the war) managed to dominate Oxford philosophy until his premature death. His famous (or infamous) weekend meetings with like-minded philosophers – known as his ‘Saturday Mornings’ – were definitely his meetings, and very tightly controlled. The extent to which his approach would be desirable here and now is debatable; and many still think of Austin’s leadership as a reign of terror. I shall say more about this later.
1.2.10 Ideas and the birth of an ology: I
Now, how does all this engage with my telepathic ambitions? Well, when confronted with a radically new phenomenon, a standard method of bringing the new elements under intellectual control is to start naming them. So far, we have the familiar suffix, but to call a discipline an ology (and that is all) is a bit feeble. True, there is a certain appropriateness here inasmuch as ‘Ology’ is the name of an advertisement (viewable on YouTube) for BT (the British telephone company), an ad which features a well-loved English comic actress who plays, to perfection, a sympathetic (if slightly dotty) grandmother: and we need the idea of the science of ideas to be as non-threatening as possible, of course. Possible names include ‘ideology’ and ‘teleology’, but they are unfortunately already in use. ‘Telepathology’ is a bit of a mouthful, and also sounds like something of more interest to a TV repairman than to an analytical philosopher.
Failing the naming test, we can at least start to emulate the biologists and start classifying things (even before we have much notion of what they are). With this in mind, we can then meaningfully ask what sorts of idea there are. There does, at least, seem to be some considerable diversity here.
For a working definition of the term, the best of which I am aware comes from John Locke (1632–1704), about whom much needs to be – and will be – said. A founder member of the school of (what we now call) classical British empiricism, he strongly rejected the possibility of what he called innate notions, and insisted that all our ideas and our knowledge came from experience. Grand rational-looking schemes were viewed with great suspicion – as something that could only be speculated about, not known about. Genuine knowledge is hard to achieve, but is achievable. It is just that it is essentially probabilistic, and not rooted in quasi-mathematical certainties. The modern idea that the true scientist is an essentially cautious individual, reluctant to state categorically that she knows this or that for an absolute fact (despite being an Expert), owes much to Locke and his magnum opus – known generally as the Essay.
Related to this fallibilist epistemology is a cautious political philosophy, conscious of the potential for abuse from rulers, one which emphasized the need for the separation of powers. He was enormously influential on the American Founding Fathers, for example, though his own home-grown revolution was more modest in scope. I am here referring to what English historians call the Glorious Revolution (of 1688) – where William of Orange was invited to invade England from the liberal Netherlands (where Locke was temporarily exiled) to overthrow the Stuarts and become King William III of England (a joint position held with his wife, Mary Stuart, daughter of the deposed James II).
I mention all this because it is hard to understand classical British empiricism in general, and Locke in particular, without knowing a little about its historical context. I should add, whilst in full flow, that these events occurred nearly twenty years before the Act of Union (1707), and that Scotland and England-and-Wales were then separate nations. James II was – and is – known to the Scots as James VII (for the excellent reason that he was the seventh king of Scotland called ‘James’), and not all Scots describe the events of 1688 as ‘glorious’, particularly if they are Roman Catholic. Ireland, by the way, was not united with Britain until 1800, some forty-odd years before the terrible potato famines that spread across Europe, and exactly 201 years before 9/11, when nearly all Americans, including those of Irish ancestry, decided that terrorism should not be sponsored however just the cause may appear to be. It was, and is, an open secret that IRA terrorism was financed not only by local gangsterism, but also by Irish-Americans who were still enraged by the potato famines which caused their ancestors to emigrate across the Atlantic. Credit for the Good Friday Agreement (or GFA, which finally ended the Troubles in Northern Ireland) has sometimes been attributed to Osama bin Laden, the chief architect of 9/11. However, since the GFA occurred three years earlier, in 1998, that is a little unlikely, as there is little evidence that al-Qaeda recruited Time Lords, as they are known in the UK, into their ranks.
That the GFA still holds today, despite Brexit (the disastrous referendum which gave rise to it was held in 2016), has plausibly been attributed – though only in part, obviously – to the rude awakening felt by all Americans on that fateful day of 9/11, which marked the beginning of the new millennium: a people who had had no direct experience of war at home since 1865, when Robert E Lee surrendered his army at Appomattox Court House in Virginia.
Relevance? Two morals are to be drawn from the ramblings of the previous paragraphs. First, always check your dates – unless you really think that effects can precede their causes (some philosophers of physics do so, but few philosophers of history do likewise); and secondly, ponder carefully what is meant by saying that A causes B, particularly if your definition implies the transitivity of causation (i.e., the principle that if A causes B and B causes C, then A must cause C). Since causes tend to be partial, not total, transitivity is far from obvious, as the sketch below illustrates. (See 1.2.2, above, if you need to refresh your memory on this problem.)
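For the sceptical, here is a toy numerical illustration, assuming a simple probability-raising account of causation (one account among several, and all the figures are invented for the purpose): A raises the probability of B, and B raises the probability of C at each level of A, and yet A lowers the probability of C overall.

```python
# A toy counterexample to the transitivity of probability-raising causation.
P_B_given = {1: 0.9, 0: 0.1}                    # P(B=1 | A)
P_C_given = {(1, 1): 0.15, (1, 0): 0.0,          # P(C=1 | A, B)
             (0, 1): 0.9,  (0, 0): 0.1}

def p_C_given_A(a: int) -> float:
    # Marginalize B out: P(C=1 | A) = sum over B of P(C=1 | A,B) * P(B | A).
    pb = P_B_given[a]
    return pb * P_C_given[(a, 1)] + (1 - pb) * P_C_given[(a, 0)]

# A raises P(B): 0.9 > 0.1.
# B raises P(C) at each level of A: 0.15 > 0.0 and 0.9 > 0.1.
# ...and yet A *lowers* P(C):
assert p_C_given_A(1) < p_C_given_A(0)           # 0.135 < 0.18
```

The numbers are contrived, but cases with exactly this shape are what make the transitivity principle doubtful.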
Anyway, an idea, for Locke, is always the first port of call when it comes to an investigation: it is simply something that is immediately present to the mind, what we are directly aware of. He said confidently that he did not think that he needed to say much more here, but his successors have been much less sure of this. True, the English word ‘idea’ is perfectly ordinary, and it has equally ordinary counterparts in other languages. However, there are many different kinds of thing that can be called an idea.
For example, an idea could be a concept or meaning of a complex word, something that may require considerable intellectual effort to grasp. It could also be a perception of some kind, which (in the case of vision) is often thought of as a kind of picture in the head. It could, of course, be a mere hallucination (something that Macbeth saw when he saw a dagger, or that Joan of Arc heard when she heard voices), an important point since it emphasizes that ideas are essentially mental, and can exist in the absence of any external, physical cause. Non-existence is also often cited here: thus, Pegasus (a flying horse from Greek mythology) and the Loch Ness Monster are not creatures of flesh and blood, but are instead ideas in our minds. An idea could even be something as mundane as an after-image.
We certainly have diversity here, and this should signal a warning. Locke argues that meanings of words are not their references but the ideas that we associate with them, a theory that can usefully be compared and contrasted with Kripke’s more recent account. This sounds pretty obvious, but most philosophers nowadays think that it is a disastrous error, for various reasons that will be explored later, and in some detail. For the moment, I just draw to your attention the massive difference between an abstract concept and an after-image. Can I seriously improve your grasp of the former by stimulating your visual cortex in such a way as to produce vivid examples of the latter? We only have to ask the question to see how absurd it is.
Yet language is used to transmit ideas from one person to another, and we need to get a good grasp of just what is being transmitted and how. Only then, can we see how to speed up the process, and allow direct idea-to-idea communication that bypasses language altogether – which is the basic concept behind telepathy.
Thus, imagine (for the sake of argument) that we are machines (as Alan Turing did): and instead of laboriously translating our internal states into public computer languages, we stick with machine code (as it is called) and turn lots of connected little machines into one big machine (with some variegation within it, to be sure). The very idea of the ‘internet’ assumes something like this.
But what does the human analogue of machine code look like? As mentioned (in 1.2.5), some have talked here about the Language of Thought, or LOT, but this is mysterious. Is my LOT the same as yours? How do we find out? Where can I purchase a LOT-English dictionary? Or can they only be found on sale in LOT-land? Why can I not order a copy online? No, we may find that we have no name for this whatever-it-is that we are looking for, and possibly no hope of understanding it if we ever did; for what is more mysterious to us than our Selves and their internal workings?
1.2.11 Ideas and the birth of an ology: II
However, this pessimism is artificial, and is obviously not shared by you, dear reader, or you would not be here reading this right now. Self-understanding and self-improvement are the chief goals of education, and you are clearly an educator (or educatee) of some kind.
Unless, of course, you are the person who is – right now – browsing the pile of new best-seller books on display at the fancy newsagents at Manchester Airport (Terminal 1). If you are, then I suggest you purchase this book (which you have already soiled with your horrid unclean hands) before the rather burly security officer, now approaching you from behind, gets to you, and before you can say ‘hello’ to that rather nice-looking young lady behind the sales counter – a lady who has been trying to attract your attention for some time now, by the way.
But to our tale. Now, it may be thought that ideas are exactly the sort of things that can easily be studied. We do it all the time, and not only when we are at work at the uni. So, what sort of Petri dish do we place them in, in order for our careful analysis to begin? Not one to be found in the Department of Biosciences, to be sure, and for more than one reason. First, there is the obvious point that ideas are mental not physical, and only physical things can be put into Petri dishes. But secondly, and more interestingly, we cannot put ideas into even metaphorical Petri dishes so that we can coolly observe them from a detached viewpoint. This is because ideas are a bit prudish, and do not like being looked at by outsiders, especially if they have been detached from their immediate surroundings and dumped into a metaphorical Petri dish. To see this, try looking at your own ideas, racing past you in your stream of consciousness – and without turning back even to say hello (let alone give you a sly wink of acknowledgement that means that you and they are to be soul-mates). A mere bundle of perceptions, as they say north of the border, not tied together even by metaphorical string, and completely out of control – indeed, a veritable St Trinian’s (look it up) without a headmistress.
Quantum physicists worry about the fact that you cannot study experimentally a small and delicate entity like a particle without interfering with it in some way. Try to measure its position accurately and you knock it off course, thereby disturbing its momentum; and vice versa. I spoke earlier about the subject–object distinction – something to do with /hey-girl/, you will remember – which essentially encapsulates a certain model of reality and our relationship to it.
Let us change the image. When a security man looks at the screens in his control room connected to all the CCTV cameras (it is, mostly, a rather tedious and poorly paid job, as I dare say you already know), he may get a feeling of God-like omniscience. Everything is there to be seen: except himself, lonely and isolated from the reality he surveys.
To change the image again, he is perhaps like a Roman Catholic priest, able to see (via the confessional) into the souls of all his parishioners. At least, he may feel that he has this ability.
Now, quantum physicists, security guards and RC priests do not appear to have much in common, but they do. They are all careful observers, and will not last long in their respective trades if they keep missing things. But like everyone else, they are not just passive observers: they are sometimes required to act on what they see and hear – or, at the very least, find a very good excuse for not doing so. But in acting, they interfere with their environment, something a good environmentalist can only feel guilty about (I mention this in passing, but much more on environmental philosophy later). The quantum physicist worries about the distorting effects of measurement, and whether she can ever produce a reliable model of reality, as tradition requires her to do. The security guard, confronted with a sudden breach of the peace, will need to use only minimum force to restore order, and not contaminate (any more than is strictly necessary) what has now become a crime scene. The RC priest must treat the seal of the confessional as absolute, even if he hears the confession of a serial killer who smirks at his conduit-to-God while announcing cheerfully what he intends to do next.
As I say, we are not just passive observers. This is partly because we tend to move around in a world that stays (mostly) fixed, but also in a deeper sense. Even if we are totally paralysed, we still need to act, if only to perform acts of pure thought. Some paralytics are lazy and some are not, a contrast that would be impossible if there were no genuine agency, if only within the mind itself. And to act without being conscious of acting is not really to act at all, but merely to behave. To be sure, the distinction is not absolute in the sense of admitting no fuzzy borderline: for example, is breathing an activity? It is automatic (usually, unless you have a certain kind of illness), and continues through sleep, but can become as tied to the Will as is possible to get – if, for example, it becomes difficult to do. Nevertheless, the distinction is real.
The magic phrase is Self-Consciousness. Kant thought of all meaningful experience as self-conscious experience. He, rather oddly, did not consider non-human animal perception, even though the golden eagle (who has intuitions without concepts – I shall explain what that means later) has vision even more acute than that of its human critic. We shall ponder that omission at some length in due course, but in the meantime, note that ‘self-awareness’ can mean many things. It can refer to a kind of insight, or insightfulness, something whose severe absence is a defining mark of a psychosis (as opposed to a mere neurosis). But, although a cat who chases its tail has, perhaps, a lack of awareness that its target is a part of itself, I doubt if many veterinary surgeons would diagnose any psychiatric abnormality in the average domestic feline merely for this reason.
Well, can we be more precise as to what we are talking about? Kant terrifies us all by talking here of transcendental apperception, something to be carefully distinguished from empirical apperception. (I remember my B Phil examination on Kant had, as a question, the demand that this distinction be explained. It was a question I carefully avoided.) The gist of what all this is about is that the maelstrom of perceptions that make up the mind must have some sort of unity, something that makes them my perceptions (and not yours, for example). As noted above, this can be strangely difficult to pin down, though the connection with a single physical body provides an obvious solution. (Given that a disembodied consciousness does not seem to be a contradiction in terms – the agonizing problem of figuring out what happens to us after bodily death would be a lot easier to resolve if it were – we might wonder, however, if embodiment is really what we should be looking for here.)
But it is around the time of Kant that the history of philosophy (at least, the Western part of it) reached a fateful crossroads. Intellectual historians talk of the Enlightenment and the Romantic Movement, and ponder the extent to which the latter can usefully be thought of as a sensible reaction to the excesses of the former. The long 18th century (1688–1815) is sometimes known as the Age of Reason, and therefore one particularly well suited to the activity of philosophizing. But philosophers cannot ignore something just because it does not fit their preconceptions of what is worth thinking about, especially if we are talking about something that may (or may not) be fundamental to human nature itself. Intellectuals love dualisms, and the fancy is that it is around the early 19th century that the Western philosophical tradition split into precisely two rival traditions. On the one hand, we have the mostly English-speaking analytical tradition that maintained a continuous link with its (mostly) ancient Greek origins. On the other hand, we have the mostly German- (and French-) speaking continental tradition that rejects its historical ancestry – and, indeed, would repudiate the idea that it was part of a tradition.
Whereas the former is essentially argumentative, the latter is essentially quarrelsome. Unlike our counterparts in the English departments of the 1970s and 1980s, however, analytical and continental philosophers do not actively hate each other, if only because the analysts are nearly always in the majority and can maintain a civilized order. The central areas of the subject (logic, metaphysics, epistemology, ethics, and so forth) are taught predominantly by analytical philosophers, whereas the options (aesthetics, philosophy of religion, anything-to-do-with-critical-race-theory – and, of course, feminist philosophy) tend to be assigned to the continentalists. People are not always satisfied with this arrangement, of course, but since the whole subject is under severe attack from the friends of (what I believe are called) STEM subjects, we tend to forget our differences. Tend to.
I notice, however, that, once again, the hour of luncheon is approaching, and this brings our weekly chinwags to an end. Even on Saturdays, the Oxford college chefs (who are the real masters of the University) have iron rules as to who can eat what and when, especially in the 1950s, when wartime rationing was in the process of being turned from a daily necessity into an unwanted memory. Figures with names like /hey-girl/, or even /high-digger/, did not sully the smell of peat-fires with that of continental sulphur in any noticeable way, and odd people like Wittgenstein (remember him?) would relax on Saturday afternoons and evenings by watching Westerns at the local Nickelodeon. In the world of Hollywood (where’s that?), America’s western border was a Frontier beyond which lay Men who were real Men, women and injuns who knew their respective places, horses that could fall over without getting hurt, and guns that were properly used – that is to say, predominantly on men wearing black hats (but whose accents were neither German nor Japanese, oddly enough).
I have just been informed that Hollywood is in a place called California, and that there may be more work to be done over the weekend. No doubt, my informant is right; but tomorrow is another day, as they say.
1.3.1 The kinks and the small faces
A late and leisurely Sunday lunch, far removed from the rather panicky repast of the previous day, and time to reflect on the movies, as they used to be called. Yes, I know: the previous sentence lacked a main verb. I also notice these things.
Well, what is there to say about the movies? At an earlier University (or Institute of Higher Education, as it was called then), one much nearer to home and where I worked for nearly thirty years, we introduced a Film Studies pathway. That is to say, an opportunity to study for a BA degree in X & Film Studies, for various other pathways known as ‘X’, in the Humanities department of said Institute. Philosophy & Film Studies, for example. Now, it is a truth universally acknowledged that a new pathway must be in want of a core module (as we call it), and so it was duly given one. The natural title is ‘Introduction to Film Studies’, but some wiseacre thought it might be sexier to call it ‘Saturday Night at the Movies’. We had a serious problem with student recruitment then (plus ça change, as they say in France), and we desperately needed to bring the punters in, if only to get them to study the X part of the joint honours programme. Fortunately, I was able to exercise some influence here, and suggested that the original, natural title was more dignified and might help to attract the sort of student that we wanted to attract (people like you, dear reader, and not the riff-raff who merely study at uni because their parents think they ought to).
But what is wrong with the word ‘movie’, you might wonder? ‘Film’ sounds a bit more upmarket, to be sure, and its adjective ‘filmic’ a sign of great intellectual depth; but the word derives from the fact that the physical realization of the still photographs that make up the film, so called, is a thin strip of plastic. Everything is digitized these days, thus making the terminology archaic. ‘Movie’, by contrast, relates directly to movement, which is what the phenomenon is actually about.
You may be on to something if you think in this way, and here is your starter for ten: what is the connection between your local cinema and the deep metaphysical thesis that heat and molecular kinetic energy are as Hesperus and Phosphorus (i.e., one and the same thing, though you would not think so to begin with)? The answer is the Greek word kinesis, which means ‘movement’. ‘Cinematography’ just means ‘movement-writing’, and this is something worth knowing. Had you explained to anyone before its invention (which was around the end of the 19th century) that there could be moving pictures, they might have felt that you were a magical emissary from the world of a fictional boy-wizard who had wandered into the real world whilst labouring under a misapprehension of some kind. Or something like that.
I once saw an excellent film about the origins of films which starred Laurence Olivier as an Edwardian police constable who was confronted with an early, rather clunky movie that showed a royal procession. The expression on Olivier’s face said it all. Without apparently moving a muscle, his visible mood changed from an initial sceptical contempt to sheer outrage to utter amazement to total-admiration-for-something-clearly-miraculous.
Well, we all know that, unlike the magical world of the-boy-who-lived (where the underlying causal mechanisms are neither talked about nor investigated), it is all a trick. The brain is deceived if the photographic stills are changed fast enough (around sixteen or more frames per second, in practice). Here is a related trick: we see the second hand of an old-fashioned analogue clock actually move (whether it moves continuously or by sudden clicks). By contrast, we do not see the hour hand move, though we can see directly that it moves (just stare at it without blinking for a quarter of an hour or so). The minute hand is more complex. It moves slowly, but not that slowly, and some people can see it move, and some can only see that it moves. Just how and why this happens is studied by psychologists when they investigate what they call cognition. Take a psychology course if you want to know more about that sort of thing.
Still, nobody doubts that there is movement, we say confidently. That the world is changing around us is as directly obvious as anything can be. True, if you are an American philosopher and psychologist called William James (1842–1910) you will need to talk about the specious present, a present that lasts for rather more than an instant (at least one and a half seconds, according to some estimates) in order to explain this; and this specious present cannot be just the width-less boundary between the past and the future, a boundary that itself moves forward at the majestic rate of one second per second (if it were width-less, it wouldn’t last for long enough for motion to be directly perceivable).
Some philosophers won’t accept this, believe it or not, and insist that the real world (known about through proper, intelligent thought) is wholly unlike the illusory world of appearances. Time and change are just impossible ideas to make sense of. This idea goes back at least as far as the Eleatics, in particular the pre-Socratic philosophers Parmenides and Zeno. You may think that you have never heard of either of them, but you almost certainly have heard the story about Achilles and the tortoise, and why the former never manages to catch up with the latter. That comes from Zeno.
Belief in the unreality of time (and, some say, of space as well) is a perennial theme in the history of philosophy, and was also held by Leibniz and Kant, albeit in subtly different ways. The dictates of reason, together with the push-back of outraged common sense, form the backdrop of much philosophical debate down the centuries, and you shall hear more about it shortly.
But to return to Laurence Olivier, a fascinating individual in his own right. He was ennobled for his thespian talent, but it is generally thought that this was primarily for his work in theatre, not cinema. Now, my former Humanities department also offered a pathway in Theatre Studies, and you would naturally suppose that it would attract a similar kind of student to those who do Film Studies. You would be right. The people who actually create the artistic products we call plays and films are called ‘actors’ (and sometimes also ‘actresses’ – more on that, later). The word suggests simply a person who acts: i.e., performs actions. But, to avoid confusion, in what is called the philosophy of action (or action theory), we prefer the term ‘agent’ in this context. We already noted that even the total paralytic performs actions, if only mental acts (as they are usually called). In short, she is an agent (regardless of her thespian talents, should she have any).
You probably knew most of this anyway, of course. But if you think you can now deduce what is meant, in the trade, by a ‘theatrical agent’, then you are more knowledgeable than I am. We need to beware of terminology: it can get technical without your immediately noticing.
By the way, the word ‘theatre’ comes from the Greek theáomai, which means ‘to observe’. The word ‘theory’ has the same root, incidentally: a fact of some significance, I think.
In the meantime, the associated music for this chapter involves a choice between two, rather similar-sounding titles from the 1960s which, I dare say, you can work out from the usual clues in the section title. One involves the agreeable ramblings of a millionaire who is rapidly losing all his money; the other contains the memorable phrase /a-rooty-dooty-too; a-rooty-dooty-dye-day/. Enjoy.
1.3.2 Theatre, cinematography and Roman history
Theatrical phenomena are very old, and appear in many (perhaps all) cultures to some degree. Theatrical performances can be understood by the illiterate, and were certainly common before the invention of writing. It is hard to prove this without written records, of course, but I hazard the guess that theatre is as old as human intelligence. If you want to convey a certain kind of complex idea, you have to act a part; and it must be done in such a way as to make it clear that you are ‘only acting’.
The actors (traditionally, only men and boys) who played their parts in ancient Greek tragedies (pretty frantic stuff, as you probably know) had to arouse certain very intense emotions in their audiences – but without getting lynched themselves. Exactly what is going on here and how best to describe (and explain – and even justify) it involves many areas of philosophical thought; and more, much more, will be said about it later.
In the meantime, I just repeat that theatre studies is, or should be, fundamental, if only because it is so old. By contrast, film studies is very new. So, although your local cinema may once have been known as a ‘movie theatre’, there is some tension at work that is not easily assuaged by the fact that film actors and theatre actors tend to be the same people (with very similar talents, notably the ability to pretend to be what they are not).
Both film and theatre make heavy demands on our sensory organs. What we hear is crucial, of course, but we should also remember how good the early silent movies could be (particularly, the horror films). As an antidote to this, we should also remember that radio comedy and drama can be spectacular, if only because the imagination is forced into creating the visual imagery needed to complete the fantasy world suggested. A British series from the 1970s about an intra-galactic hitchhiker is particularly to the point here. Nevertheless, the visual has always been predominant when it comes to philosophical investigation of empirical knowledge, and I shall (for the moment) follow this tradition.
It will be recalled that our problem with studying ideas in the raw, as it were, is that they tend to bite back when placed on Petri dishes. This point gave rise to some amusing and irreverent imagery of its own, as my reference to St Trinian’s showed. As a corrective to this, I shall consider the views of another 20th century Oxford philosopher, Waynflete Professor of Metaphysical Philosophy, and seriously neglected despite his clear relevance to modern debates and beautiful prose style. This is R.G. Collingwood (1889–1943), a distinguished archaeologist and Roman historian in his own right as well as a philosopher; a man whose autobiography consisted almost entirely of reflection on the development of his thought (at the expense of his personal life); and who is best known for his work on the philosophy of history, and for his visceral contempt for the (in his view) pseudo-science called ‘psychology’. Explaining human behaviour is an entirely different task from explaining the behaviour of inanimate objects, such as planets and lumps of stuff to be found in chemistry laboratories, he thought. The former requires re-enactment, as he calls it. The latter emphatically does not.
I shall develop these themes in more detail later, but to illustrate the point I shall give some examples. If you want to explain why Caesar crossed the Rubicon in 49 BCE (Collingwood’s own example), you do not attempt to find laws or regularities that say that whenever a would-be dictator wants power, they cross rivers (or whatever). Such laws do not exist, and would in themselves explain very little even if they did. What you have to do is to re-enact what Caesar wanted to achieve in your own mind, and then see how you yourself would act if you were in that position. If you succeed, then this human phenomenon is rendered intelligible and has thus been explained. (Oddly enough, recent psychological theories – known as ‘simulation’ or ‘co-cognition’ theories – say something similar: but more of that later.)
By contrast, if you want to explain a purely natural phenomenon, such as the eruption of a volcano, you do not try to imagine what it is like to be a volcano and see if you too would erupt if you were in its position.
It was not ever thus, of course. In olden days, it would be concluded, after the disastrous eruption, that the volcano-god was very angry; and to correct this problem (and to forestall a repetition), the local elders would have to figure out how best to assuage this anger. Tossing a few local virgins into the crater might be a start, it might be thought. It is easy to sneer here, but anthropomorphism is the beginning of science, of a serious attempt to understand what goes on beyond ourselves.
Likewise, if you wonder why dropping a small lump of metallic potassium into a beaker of water causes it to burst into violet-coloured flames (a most delightful phenomenon, as you might remember from your school chemistry lessons), it is not helpful to say that that is probably what you would do if you yourself were so treated (and had possessed potassium’s unique internal constitution). Basic inorganic chemistry has universal laws and explanatory models involving atoms and molecules that do a much better explanatory job; and it is possible that one day we may even be able to use knowledge of that kind to tame volcanic activity. Natural phenomena, unlike would-be dictators, are not best regarded anthropomorphically. (The volcano and potassium examples are mine, by the way, not Collingwood’s.)
Now, a characteristic feature of the Enlightenment philosophy of the long 18th century is the recognition that the new sciences of physics and astronomy were extraordinarily successful. Newton’s predictions were jaw-droppingly accurate, and his theory of gravitation could explain a mass of apparently unrelated phenomena, such as the behaviour of projectiles, the orbiting of moons and planets, and the rhythm of the tides. If only we could understand ourselves in a similar way! A famous work of that period (about which, much, much more later) is subtitled ‘An Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects’, and emphasized the importance of just looking very carefully inside our minds to see how they work.
This approach is obviously sensible. You copy the successful players, and the moral sciences (to use slightly archaic terminology) are singularly unsuccessful at the moment, as the bottomless well of human misery shows very clearly. The astronomers and physicists, by contrast, tell us all; and can command fantastic sums of money to build ever more elaborate telescopes and particle-smashers.
Naturally, the moral scientists copy the physicists and astronomers, perform masses of experiments and precisely quantify everything that can conceivably be precisely quantified (you can’t do physics nowadays if you can’t do the math, as they say). Do that, and eventually psychology will find its Newton: and all will be well with the world. Much social science (the magic word here is ‘positivism’, by the way) has similar ambitions.
Collingwood strongly rejected this model of explanation. It is perfectly acceptable to anthropomorphise human beings: this is pretty obvious, when you think about it! And humans are not like planets, for they are far too complicated. Furthermore, to get back to the point, our ideas must not be understood as things that can be put on Petri dishes. You would have to freeze the ideational flow in order to do this, and mental activity will simply die if it is made to stop in that sense.
But, as far as I am aware, although a very distinguished aesthetician in his own right, Collingwood did not consider cinematography and stage-theatre in this context; and in consequence missed a trick (I think). The point about the former, particularly in its original analogue technology, is that yards of film can be detached from their surroundings and studied independently. The act of creating the finished product involves (literally) cutting and pasting together these yards of film in a way that only film directors understand. (The actors will have gone home long ago; that is to say, long before the real nitty-gritty of film-making properly starts.) Much of their hard-won acting will, of course, have ended up on what we still call the ‘cutting room floor’, a phrase that has taken hold of us despite our living in a digital age.
1.3.3 The manipulation of visual ideas
The point that I am getting at is that cinematography does provide us with the wherewithal to produce an ideational science, the ology that I am consciously trying to create. Finished movies can be created out of a great many massively-edited yards of film, just as complex molecules can be chemically engineered out of individual atoms, atoms which may not easily exist alone and in isolation.
We have not yet got the final product when we have finished the actual filming – even when we paste together the bits in the right order (usually, not the order in which they were filmed). On the contrary, what must then happen is that we examine critically the initial draft of the film. This requires an interaction between the object-film (i.e., what we are studying) and the meta-film (the filmic activity going on in the mind of the investigator). I borrow the ‘object’ and ‘meta’ prefixes from formal logic, where we are required to distinguish between the object-language (the language that we are studying) and the meta-language (the language that we actually use when we talk about it). Usually, the object-language is an artificial, mathematically-shaped idealization of an ordinary, natural language. The meta-language is usually English (or French or whatever), fortified with some mathematical symbolism.
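A toy illustration (my own, and purely schematic) may help. An object-language item might be a formula such as

\[ \forall x\,(Fx \rightarrow Fx), \]

whereas the corresponding meta-language item is an English sentence about that formula: ‘The formula \(\forall x\,(Fx \rightarrow Fx)\) is a logical truth – true under every interpretation of the predicate letter \(F\).’ The first is something we study; the second is something we assert whilst studying it. The quotation marks, so to speak, do the work of the Petri dish.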
Artificial object-languages were considered very exotic things when they were first invented: one of the earliest was what Frege called Begriffsschrift, or ‘concept writing’, introduced in the late 19th century. That morphed into what is now called ‘predicate calculus with identity’, or just ‘first-order logic’, the main language studied by modern logicians (though often supplemented with a modal operator or two). A related strand of the same logical tradition, Church’s lambda calculus, later gave us LISP (or ‘list-processing’, an early language of Artificial Intelligence), and the relevance to computer science should now be clear.
The logician looks down on her object language from above (her metalanguage) in a way that encapsulates the subject-object distinction that so preoccupies us. Yet this distinction clearly embodies a massive metaphysical error, one whose contours are now becoming a little clearer. We do not ‘look down’ on reality from above. We are part of reality, and our cognizance of it (and of ourselves) is more akin to a kind of ecological interaction between equals, rather than the asymmetrical looking down by a master to a slave, a relationship that is particularly inappropriate if the objects of study include other people (for example, other studiers).
It may still be wondered just what all this means, but some further examples may help. Another thinker, this time from the Franco-German, continental tradition – a distinguished playwright and novelist as well as philosopher – is Jean-Paul Sartre (1905–1980). He has been labelled as an advocate of ‘existentialism’ (about which, more later), but his account of ‘phenomenology’ (roughly, the study of how things appear to us) goes beyond the /hey-girl/ and /high-digger/ expositions. He gives us a marvellous illustration of what the collapse of the subject-object distinction amounts to with a very famous image. This is of a man with his ear pressed to a door (to where?), looking intently through the keyhole – who then becomes aware that he himself is being watched. A dramatic phenomenological shift is involved here, and there has been much discussion as to just what it is.
I should say that my interpretation of Sartre’s image is my own, and should not be treated as authoritative. I am unaware of any standard view of it (or of its relationship to the subject–object distinction) to be found in the Anglophone, mainstream analytical tradition; and I regret to say that the chief British contribution to this debate has consisted of a polite scepticism. The feeling is that even a Frenchman, nosy parker that he is, would have difficulty in looking through a keyhole and having his ear pressed against the door at the same time. Alas, one suspects a certain superficiality in the Anglo-Saxon mind-set, a feeling that, in philosophy, clarity is not enough. But no matter.
1.3.4 Some more imagery
Suppose you are about to perform your carefully choreographed gymnastic routine in front of the international audience you have been both looking forward to seeing, and also dreading to see (you will have mixed feelings, and for obvious reasons). The judges will carefully observe every detail of every move, and deduct points for every error, save for the most trivial (they are not so cruel as to make it wholly impossible to get full marks). At last, the buzzer goes, and you are on your own. After a few perfect moves, you make a fatal mistake which ruins your chances of doing well. What do you do?
Well, in rehearsal, you simply swear silently, stop the performance, and start again. Do not waste valuable time and energy continuing a flawed performance. Should this be a film shoot, either you (if you are self-confident) or the director (if you are not) will shout ‘Cut!’, and the filming will stop. After the usual pause, with its traditional forced bonhomie (‘that was wonderful, daahling, but just one more time …’), the clapperboard clicks again to announce Take 2 (or 13, or whatever it is).
But if this be a theatrical performance (as opposed to a rehearsal), you had better not do this. The show must go on, as they say – unless you really do break a leg, in which case the stage manager may inquire whether there is a doctor in the house. Understudies don’t get paid much, but find it useful to be available at all times (rather like academics on fixed-term contracts), and it may happen that, even if you get carted off in an ambulance, the show still goes on. Notice, thus, another important difference between cinema and theatre.
In the case of your real gymnastic routine, it is clear that the show must go on. You must recover your poise, and – if you are really clever – make it look like a deliberate move in an unconventional routine. After all, if this is a free-style performance, your judges may not be able to work out what is what.
Imagine it instead to be a free-style figure-skating performance, if you want to make my point clearer. There is more scope for the imagination with all that twirling and leaping about. Just avoid falling too hard on the ice, as that is difficult to pass off as a regular part of the routine.
Now, suppose you make a mistake in real life. There is, famously, no ‘undo’ button analogous to what you have on your computer, something you can press should you accidentally delete half of your document. Should you send an infelicitously worded email to a colleague, you can (with some systems) press an ‘un-send’ button; but sometimes not. And once you have actually said (out loud, in real speech) the wretched words, that is that. You can apologize profusely, and that might do the trick, but you cannot unsay them. You simply do not have enough power over your routine, so to speak, mapped out (as it is) in space and time. Act from outside the spatiotemporal framework, and you might get away with it, but we do not do that. And, unless you are called Immanuel Kant, you will probably not suppose otherwise. But more of that later.
What is common to these examples is that subject and object become, or are capable of becoming, dissociated. What we see ‘out there’, and the act of our seeing it, are not unified. In epistemology, this leads to a kind of hopeless scepticism: no amount of seeing-activity ever gets us to grasp the object out there which is seen. Seer and seen do not combine into an organic whole.
How do we break the invisible glass floor separating seer and seen? Words fail us, or appear to! Kant, as we shall see later, talks mysteriously of ‘the transcendental object = x’, but commentators cannot agree on what he meant, or even whether he still believed that there was such a thing when he wrote his first Critique. We understand it better in art, however. With some paintings – where, for example, the artist is himself depicted painting his subject – we are encouraged not just to see a thing, but to see ourselves seeing that thing: the whole organic phenomenon is thus up for inspection.
For example, in (perhaps) the world’s most famous painting, Las Meninas (we have all seen it), the Infanta herself is directly seen, brightly lit, in the foreground; but we also see, a little further back, the artist himself at his easel, apparently at work on this same Spanish child-princess. And this draws the eye in impossible directions – hence the point of the painting.
Diego Velázquez (1599–1660) could hardly have been influenced by 18th century German idealism, and the court of Philip IV of Spain was not known for its philosophical sophistication; but the essential idea is nevertheless there. What we see and how we see it are inseparable, but our ordinary ways of thinking tend to obscure this fact. (The artist himself would probably have regarded all this as an absurd piece of over-thinking, had anyone troubled to ask him; but no matter.)
It is not just the visual mode of perception that allows for this phenomenon. A famous work by John Cage (1952) is called 4’33”, and it consists of 4 minutes and 33 seconds of silence. You might think that this is just a swindle, but the idea of silent music had been discussed for many years before then. Moreover, the idea is not as minimal as it sounds: the point is that you get a group of musicians, dress them formally, and have them just sit with their instruments at the ready. You need to be in the audience, also dressed appropriately, and listen carefully to the silence (punctuated, as it will be, by a few coughs and giggles). You need to wait the (it seems) agonizingly long time for the conductor to end the performance. Inevitably, you find yourself not just hearing nothing, but also hearing yourself just hearing nothing. This is seriously spooky stuff, as you can imagine.
Thirdly, here is a (I think, hypothetical) example of my own from yet another artistic mode. Suppose you are specially invited to come to a fancy private gathering and hear some very special, very avant-garde poetry. Exactly why you should be such a special guest is unclear; maybe you are the chief benefactor to some relevant charity. Anyway, you are there, dressed in your best finery, waiting for the main event. As a special privilege, the Poetess herself will recite the poem, having first graciously explained to you and your friends – non-experts in this sort of thing – what it is about and why it is written as it is. She duly does this, and talks professionally about what is to come – but in an increasingly bizarre manner. You look puzzled, as you would; but then you catch her eye and instantly get the joke. The words are not about the poem to come: they are the poem!
One more example. In the early 1970s, a BBC television series was created (with an excellent cast) called Colditz. The title refers to the infamous castle in Germany which housed, during the Second World War, the most stubborn would-be escapers in all the British armed forces (these officer prisoners-of-war were drawn from the army, the navy and the air force, by the way). The German guards were portrayed respectfully as ordinary human beings, and the Commandant (a very decent, if sometimes torn, man) frequently had to explain to his visiting superiors from Berlin that ‘here in Colditz, we walk a tightrope’. As he himself agrees, the duty of every officer is to try to escape, as it is his own duty to try to prevent this; and he adds that he (the Commandant) will do things strictly by the book. National stereotyping, yes – but the book in question is the Geneva Convention, about which we hear a lot.
Anyway, to cut to the chase, one of the officers eventually does make it further than the others to safety, but there are problems. Common sense indicates that would-be escapers head in a north-westerly direction if they wish to return to Blighty (as the UK is sometimes called), but that is not feasible. Instead, they need to head south, using such false documentation as they can create, and hope to reach that famous landlocked confederation that somehow managed to retain its neutrality. We see, at the end of the episode, our hero crawling hopelessly through the snow and meeting up with a figure dressed unmistakably as a border guard who has (apparently) been waiting for him. How far is it to Switzerland, gasps our hero, defiant to the end. The guard, however, just smiles at him and says, with only the hint of a guttural accent, that he has been in Switzerland for the last two miles.
We are in the world we observe, and not just observing it from afar (though we may be doing that as well). But coming to realize this involves a kind of journey, a kind of attitude-switch that is hard to characterize. It may be a short journey, of course; simply getting off your high-backed chair (you are not a wallflower), and just leaping onto the dance floor without waiting for a handsome young officer to offer you his hand, is perhaps another way of doing it – though it might well have been misinterpreted had Natasha (the young heroine of the rather lengthy Russian novel alluded to in 1.1.4) done so in her world – with or without the encouragement of her cousin, Sonia.
But I see that I am being distracted again by erotic imagery from within – it happens – and it reminds me of another important phenomenological fact, namely that my stream of consciousness (at least, the visual side of it) consists largely, if not wholly, of fleeting pictorial images rather than thoughts as such. If you want to know what I mean by that, I can only say – once again, switching (helplessly) to a schoolmasterly preoccupation with public language – that the images are best represented by coloured pictures, whereas the thoughts have their contents revealed in (dull, monochromatic) whole sentences (main verb and all). The difference between a picture and a sentence is clearly huge, and is not helped by the apparent intuitiveness of the phrase ‘the picture theory of the proposition’, a theory that I must inform you was allegedly held by Wittgenstein in his younger days, when he wrote (whilst an Austrian soldier in the trenches) on spare sheets of paper his early masterpiece whose English translation is the Latin noun-phrase ‘Tractatus Logico-Philosophicus’.
The manuscript survived his capture by the Italians. If you are now seriously confused, I should remind you that the Italians were on the Allied side during the First World War: it was to be a few years after the Armistice before Mussolini took over, and longer still before he sided with an Austrian ex-corporal of whom you might have already heard.
I can see – through the text (how else?) – that your eyes, dear reader, are glazing over, and that you are now wondering just why this is relevant. Are we going to be told (sigh) that Wittgenstein and Hitler were related in some way, perhaps even that the former made the latter hate the Jews? If so (I hear you say, albeit a trifle sanctimoniously), then you should be reminded of the well-known adage that the first person to mention the Nazis is invariably the one who ends up losing the argument. And is it really true that the two Austrians went to the same school in the same small town, you ask me cynically? Not quite as the legend has it: Hitler was born in Braunau-am-Inn (which is in Upper Austria, by the way, not the Tyrol), whereas Wittgenstein (himself of Jewish descent, you will recall) was born in Vienna; but the two were indeed both born in 1889, and did both attend, for a while (whilst they were young), the same Realschule in the provincial city of Linz. But before you get too excited, and claim (as some do) that the author of the Tractatus (as the book is usually called) was single-handedly responsible for the Holocaust, I should say that the matter has been carefully examined by experts; and there is no evidence that the two men ever actually met.
1.3.5 Interlude: a word from our sponsor
What is telepathy? A simple definition is that it is a kind of direct transmission of thought from one mind to another. However, this clearly needs development. We need to know what sort of mental phenomenon or activity is going to count as a ‘thought’; and – crucially – just what sort of transmission is involved. Should the latter turn out to be nothing more exotic than frequency modulation of electromagnetic radiation signals – miniaturized two-way FM radios concealed within the skull (perhaps) – then we are unlikely to be impressed. Should a soi-disant pair of telepaths turn out to be so connected, then they would be dismissed as frauds, pure and simple; and possibly open to criminal prosecution, should their deception be financially motivated.
Well, so be it. But suppose we now appeal instead to what, in my own private language, is called QFB. That is to say, we appeal to what physicists call non-locality, a product of a certain kind of microphysical entanglement. This phenomenon has sometimes been said to mean the end of science, since it allows for the possibility that any given (supposedly closed) physical system could be influenced undetectably from outside. Could this not provide us with the relevant telepathic medium?
Well, one sceptical response is scientific in character. This aspect of QFB (Quantum Funny Business, to you) has been investigated intensively, and the prevailing view (I think) is that this won’t work. The statistics of entangled particles just do not behave in the right way: no information can be transferred, faster than light, from one half of an entangled pair to the other. Future developments in our theory might lead physicists to change their minds here, of course; but we must rely on our current best knowledge if we are to be true scientists. So, probably not.
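For the mathematically curious, here is the standard textbook point in its most minimal form. Take the simplest entangled state of two particles, held by distant parties A and B:

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle_A|1\rangle_B - |1\rangle_A|0\rangle_B\big). \]

Whatever measurement A chooses to perform on her particle, the statistics available to B are fixed by his reduced state,

\[ \rho_B = \mathrm{Tr}_A\,|\psi\rangle\langle\psi| = \tfrac{1}{2}I, \]

which is the same (maximally mixed) state whatever A does. The correlations are perfectly real, but they only become visible when the two sets of results are brought together and compared – by ordinary, slower-than-light means. No message, telepathic or otherwise, travels on the entanglement alone.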
There is a lot more to be said, of course, and I shall shortly start to say it. But first consider how extraterrestrial intelligence in general, and telepathy in particular, is depicted in creative art. After all, we have already seen that the aesthetic imagination extends way beyond the narrowly intellectual sort.
Well, you need only do an internet search of telepathy + images to find many examples. You might, however, also want to know about my own personal favourites, if only because understanding a text must have at least some connection with gauging what went on in the mind of the author. We may say this with some confidence, notwithstanding the scepticism with which aestheticians tend to greet such claims (they talk here frequently of the intentional fallacy, about which, more later).
I shall also consider many depictions of alien intelligence and how to communicate with it, but a particular favourite of mine is the official video of the song, ‘Calling Occupants of Interplanetary Craft’. After a hilarious introduction, featuring an inane radio DJ talking to an alien invader through a phone-in, the song morphs into a soulful ballad starring the delightful Karen Carpenter, who whispers encouragingly that we come in peace. In this instance, ‘we’ refers to a variety of intelligent aliens of exotic appearance.
To counteract the saccharine quality of all this, and to do justice to the fact that when the West (as it calls itself) explores foreign lands, it usually does not come in peace in any ordinary sense of the word, I also suggest the official video of the 1996 film Mars Attacks!, a zany send-up of a whole genre. With a brilliant cast and superb dialogue, the plot of this comedy-drama unfolds at high speed, and the evil, strutting Martian overlord is depicted in a way that I (at least) cannot easily put into words.
My original model – my Ur-Text – remains, however, Wyndham’s The Chrysalids which, you may recall, takes place many years in the future, long after God brought ‘Tribulation’ (presumably, a disastrous nuclear war). Species-mutation is widespread and frowned upon hysterically in the small, fanatical communities of Labrador in eastern Canada, close to the dreaded Badlands (where things still glow in the dark). It is here, in the communities, that the main action takes place. A few normal-looking children are able to communicate directly with each other, but learn instinctively (from an early age), firstly, that most people just cannot do this; and, secondly, that it would be wise never to talk about this ability to those who lack it.
Indeed, the hero (and narrator) of the tale informs us, at the outset, that he received a severe talking-to from his father (the chief villain of the tale) for telling him that he had just dreamt of a large, populous city with strange vehicles flying around above it; which was odd as he had never seen or heard of any such thing. He got away with it this time, however, since even the town’s elders knew that they could not really control what people dreamt.
Later, however, he was caught talking by himself (and, ostensibly, to himself) by his uncle, a genuine friend and a respected man within the community, but one with no love for the intolerant cruelty that pervaded it. After learning that the hero was not just babbling away, but was really having a two-way conversation with his slightly older distant cousin, Rosalind (who was some miles away), he was intrigued, but troubled. He asked our hero whether it was within his power to stop this dangerous activity, and was told that no, probably not. He (our hero) was nevertheless persuaded by him (his uncle) that he absolutely must not under any circumstances talk about this ability to anyone – anyone – however trustworthy they may appear to be. He must, therefore, learn to communicate his thoughts silently; and must tell all the others to do the same. He duly did so.
And the phenomenology experienced by these think-togethers (as I shall call them, for the moment: the word ‘chrysalid’ never appears in the text itself)? We only learn something when our hero’s much younger sister, Petra, who had previously shown no signs of much interesting mental activity, suddenly gave out a terrible continuous ‘screech’ of anguish, one which went on and on and on. Exactly how the think-togethers knew that it was Petra, and where Petra could be found (she was a few fields away, and out of both sight and earshot), is left slightly obscure. And the other helpers (already on the scene) were rather suspicious as to how the team had managed to arrive so conveniently. But arrive they did, and they found an inconsolable, and slightly injured, Petra by the body of her pony, which had just been killed by a wild animal. Petra is one of us! The think-togethers were startled, but had to accept the unarguable.
Seeing that Petra posed a potentially deadly security risk, Rosalind took charge. Realizing that even a simple account of what thinking-together amounted to would just go straight over Petra’s head, she instead persuaded her to close her eyes and look carefully. Rosalind then created an entertaining scenario of ducks and ducklings swimming around and having fun, with various water-plants and other accoutrements forming a backdrop. Petra chuckled with delight, and then opened her eyes in slight dismay. But where have they gone? she asked plaintively. Rosalind smiled, and explained that think-pictures, as she called them, can’t be seen in the ordinary sort of way.
A little later on, and Petra learnt who the other think-togethers were. She could not only name and distinguish them in the obvious way, but could recognize their inner call-signs (though she did not use that phrase). She then asked, as a matter of interest, who the others were. Who do you mean? asked Rosalind, puzzled. Where are they? the others asked. A long way over there, replied Petra, pointing in a south-westerly direction. A long, long way away, over the sea. But there are many of them, a great many.
After exchanging wild glances with each other, the friends all stared more closely than ever at Petra. She still had only a rudimentary ability to articulate her own thought-pictures, but she had an extraordinary intensity. Like a singer who is hopelessly untrained (and possibly slightly tone-deaf), but who can nevertheless bellow out a song that can be heard across the auditorium without the need for anything as crude as a microphone, Petra could make her inner voice heard. Perhaps she also had a highly developed receiver-mode and could ‘hear’ something that, unlike Joan of Arc’s voices, was really ‘out there’: or so Rosalind and our hero together surmised.
But where? If not over the rainbow, then evidently somewhere across the ocean in the southern hemisphere, an area that the friends knew little about (most atlases had disappeared, or had become seriously corrupted, since Tribulation).
More of this soon …
1.3.6 Drawing the threads together
We have seen one sort of response – scientific (and therefore commonsensical) – to our question: the flat assertion that telepathy is impossible (and that’s that). However, there is another kind of sceptical response, more philosophical than scientific, which is more interesting in some respects. This is that we (definitely) cannot say that telepaths might exist, because we simply don’t know what the word means. The concept is poorly constructed, and, in consequence, the matter is not (yet) open to serious investigation.
Now, I am not entirely convinced by this lack of conviction; and to see why, let us consider, not the word ‘telepath’, but the word ‘unicorn’, and recall some of the earlier remarks made by Kripke in his magnum opus.
As far as we know, unicorns do not exist and never have done. And what’s a ‘unicorn’, a child might ask, never having heard the word? Well, we say, it is a horse with a single horn on its forehead. It is a dazzling white colour (usually), is exceptionally shy and tends to bring good luck to anyone who nevertheless does manage to see one. Can I have one then, please, says the child (naturally); and we duly buy her a cuddly unicorn from the local toy shop. Such toys exist.
Only, of course, real unicorns do not. And what do you mean by that, sayeth the stern voice of the ontologist? We mean, cometh the very confident reply, that if you trawl through the universe with your descriptive net, you will not find anything that can get through both the is-a-horse and the has-horns meshes. To put it more simply, no entity is both a horse and horned. Frege’s language, the Begriffsschrift, was designed largely to enable us to formulate these ‘negative existentials’, as they are called; and it does this most successfully – though we prefer nowadays to make use of what is called the existential quantifier (a backwards E), and a simplified formalism, in order to translate the relevant sentence into logic symbols.
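To give the flavour, using the modern notation just mentioned: let \(Hx\) abbreviate ‘\(x\) is a horse’ and \(Gx\) ‘\(x\) is horned’. The negative existential then comes out as

\[ \neg\exists x\,(Hx \wedge Gx) \]

– ‘it is not the case that there exists something which is both a horse and horned’. The backwards E does the trawling through the universe; the negation sign in front reports that the net came up empty.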
Anyway, so far so good. The upshot is that, until Kripke came along, most philosophers were happy to say that unicorns do not exist, and knew perfectly well what they meant. But … they (unicorns) could have done, they added. We mean that in the metaphysical, not the epistemic sense (having learnt Kripke’s distinction – see 1.2.3, if you need to be reminded of what that is). Horned horses are not a biological impossibility, and the surly rhinoceros, neither elegant nor smooth of skin, has some of the features attributed to the mythical unicorn – and has no trouble in existing. Possibly (epistemically), there even were such creatures, but they died out leaving no trace of their ever having walked this earth. A nice cartoon has a picture of Noah’s ark sailing off into the storm, leaving behind a tearful pair of unicorns standing on a very small island of dry land. We all understand what it (the cartoon) means, even if we prefer Darwin to the Bible when it comes to explaining biological diversity and its limits.
But do we really, really understand it? Suppose, to continue the fantasy, there were two floods, the first of which killed off one lot of unicornoids (call them the A-unicorns); and the second (some years later, Jehovah having decided that humanity had still not got the message) the B-unicorns. Now, can we say that we understand the original cartoon, and that it is unicorns that are depicted, if we cannot say whether it is the A sort or the B sort that we are talking about (or trying to talk about)? The two represent not just different varieties (like white swans and black swans), but quite different (if superficially similar) species – or so I stipulate (it is my example, after all).
Biologists sometimes talk of ‘animal space’, the space of all (metaphysically) possible animals that could have evolved (on this planet – never mind the exobiologists, for the moment). Life on earth evolved along just one route through this multidimensional space (in the abstract, mathematical sense of the word). Whereabouts (outside the route) are the unicorns to be found? The trouble is that horned, equine-looking creatures appear in many different places in the space: and there are no principled reasons for preferring one over any other when it comes to determining the reference of the common noun ‘unicorn’. The upshot is that when you say ‘Unicorns could have existed’, you fail to articulate an unambiguous thought. The sentence therefore cannot express a truth.
This all looks highly fishy, needless to say, and Kripke cheerfully agreed that he could never get anyone to accept his story. But no matter.
Now, the point is that the word ‘telepath’ is equally indeterminate. To stick with animal spaces consistent with our current basic laws of nature, we might perhaps find creatures with a sort of hive-mind (as it is sometimes called) who have no need for a public language as they can communicate directly. The individual organisms have no independent reality outside the group, any more than do quantum-entangled particles, single hemispheres of human brains, or hyper-intelligent Midwich cuckoos (more about them, later).
Before your sceptical eyebrow gets raised to its upper limit, however, I now quickly point out that there (metaphysically) could be (and probably are) several different possible species that could be described as having telepathic powers of this type. Therefore, the term ‘telepath’ is semantically ill-formed: from which we may draw the corollary that ‘Telepathy is impossible’ is equally ill-formed, and so cannot be said to express a truth. John Wyndham’s fantasy worlds therefore cannot be criticized by philosophers (though empirical scientists might be encouraged to do so).
At the very least, we should retain an open mind, even if we can agree that it would be extremely unwise to attribute any actual day-to-day phenomenon to telepathic interference. Likewise, spoon-bending by the mere exertion of will is obviously a fraudulent non-phenomenon; but it is not obviously a logically incoherent one. It doesn’t happen, and we know this – but nevertheless, it (metaphysically) might have done. This is not a wholly ludicrous idea.
1.3.7 Running out of time
I still need to get some sleep before tomorrow’s meeting, which (you will recall) is to happen first thing in the morning. It would be nice to suppose that the drawling one was thinking of the morning on what they call ‘Pacific Standard Time’ (which would give me 8 hours extra sleep), but that would be to take a kind of risk that not even I am willing to take. So, I must end this Overture soon, and ensure that the magnum opus for which it is the overture is clearly delineated (if only in my mind: not even Californians are so impatient that they expect publishers actually to produce their printed work in a matter of minutes).
So, what is this magnum opus? I have decided to call it The Philosopher at the Gates of Dawn, and you might by now have some inkling as to why. The phrase, ‘The Piper at the Gates of Dawn’, is the title of the first album of the celebrated British rock band whose name suggests something pink and oily (though it is actually a boy band, albeit with occasional girly backing vocals). The seriously weird and psychedelic songs (mostly very concise, however – far removed from the rather self-indulgent material of the mid-1960s, with its raucous shrieks and unending electric guitar solos) evoke an essentially English childhood innocence. (My favourite track on that album is called ‘Flaming’, by the way.)
The main creative genius behind this band’s early history burned out tragically young (drugs played their part – he was a troubled soul), but the band flourished still in his absence. His memory was shored up by a lengthy ballad written in his honour (where he was referred to as a ‘crazy diamond’), which begins with a brooding chord in G minor starting from silence and gradually increasing in volume. I recall playing it at a gig or two (I used to play the keyboards in a band called ‘Stranger’, would you believe?), and I remember thinking how important the key of the song is. The middle note of the chord is a B-flat, which is a tritone (six semitones – three whole tones, hence the name) away from E, whose major chord is the one most natural for stringed instruments, given the way that they are tuned; and this ensures its sinister dissonance, a clear harbinger of something very strange on the way.
But I see that I am rabbiting away again, and this reminds me. With my Academic Integrity Officer’s hat back on again (albeit at a jaunty angle), I should inform you that the original album title mentioned above is exactly the same as that of a chapter in a classic children’s book written in 1908. You have all heard of one of the characters – whom I shall not name, an upper-class amphibian hooligan with a penchant for what UK criminal law calls TWOC (‘taking without owner’s consent’) – but you may not have read the actual book in which he appears. (Exactly why non-human joyriders should get such a favourable press is beyond me: don’t read the book if you are easily corrupted.)
The chapter that I am concerned with, however, deals with a self-contained story involving a horned pagan deity who is also a musician of sorts (the panpipes, actually). Terrifying, but kindly, he makes his presence felt to show our heroes his rescue-child – a rather complacent (but thoroughly innocent) baby otter for whom our heroes had been desperately searching all night. So now you know.
Content of major oeuvre? Obscurity is often a product of extreme compression of style, so I have decided to divide the work into three largeish parts, each corresponding to one of Kant’s Critiques. Critique-the-first concerns what to think; Critique-the-second concerns what to do; and Critique-the-third concerns any other business. I shall also prefix this trio with a Dedication and lengthy Prolegomenon, as I shall call it.
Now, in the original first Critique, in the famous chapter called the ‘Antinomy of Pure Reason’, the prose does something that must have infuriated the original publishers (and type-setters), namely, divide into two parallel columns, each proceeding semi-autonomously. We have here the famous Thesis and Antithesis (whose completion, the Synthesis, gives us the celebrated ‘dialectical wheel’ – which /hey-girl/ was so obsessed with).
I shall say more about antinomies (which are, roughly speaking, pairs of equally good-looking arguments that lead to opposite conclusions) later on. My immediate point, however, is that for the most part, Kant’s style (like that of most people) is linear and therefore one-dimensional. Page numbers are ordinary natural numbers, and we do not have any Gaussian integers (as they are called) that would be required if we were to arrange thoughts into a two-dimensional, matrix-like structure.
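(For the curious: the Gaussian integers are the grid-points of the complex plane,

\[ \mathbb{Z}[i] = \{\,a + bi : a, b \in \mathbb{Z}\,\}, \]

whole numbers spread over two dimensions rather than strung out along a line. Unlike the natural numbers, they admit no ordering compatible with their arithmetic, so there is no principled answer to the question of which of two grid-points ‘comes first’ – which is exactly the typesetter’s problem with parallel columns.)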
Does this matter? Yes, because my magnum opus is seriously ambitious and covers just about everything intellectual (and emotional and volitional, for that matter). In order to cover all this ground without tedious repetitions, and to ensure that consecutive topics always bear some relevance to one another (so that we have a natural, continuous flow of ideas without indigestible leaps from one thing to something wholly unconnected), we need a route-map through a multi-dimensional mass of loosely connected thoughts. Mathematicians working within that branch of their subject known as Combinatorics talk of the ‘travelling salesman problem’, and you might be able to guess what that is. The route-map will need to be a bit eccentric, but I shall do my best not to bore (or overwhelm) the reader.
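If you have never met the travelling salesman problem, here is a deliberately tiny sketch of it in Python (my own illustration: the ‘topics’ and ‘distances’ are invented for the purpose, and real instances are far too large for brute force – which is precisely why the problem is notorious):

    # Brute-force travelling salesman, in miniature: find the cheapest
    # route that visits every topic exactly once. (An open route, not a
    # closed tour – a book need not end where it began.)
    from itertools import permutations

    topics = ["logic", "perception", "action", "time"]
    distance = {
        ("logic", "perception"): 4, ("logic", "action"): 2,
        ("logic", "time"): 7, ("perception", "action"): 3,
        ("perception", "time"): 1, ("action", "time"): 5,
    }

    def d(a, b):
        # distances are symmetric, so look the pair up either way round
        return distance.get((a, b)) or distance.get((b, a))

    def cost(route):
        return sum(d(a, b) for a, b in zip(route, route[1:]))

    best = min(permutations(topics), key=cost)
    print(best, cost(best))  # ('logic', 'action', 'perception', 'time') 6

With four topics there are only twenty-four routes to check; with four hundred, the sun would burn out first. Hence my route-map will have to be heuristic rather than provably optimal.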
If you are still a little unsure about what I mean, consider how to organize books in a Library. Unless you have a fantasy place, full of secret passages into hidden dimensions, you will be limited as to structure, even if there are no limits as to size. The problem is one of numbers. You are probably familiar with what is called the Dewey Decimal system, which classifies books numerically according to topic. The idea is to enable librarians to place books together in a logical way, so that books on either side of the one you pick can be guaranteed to be on a similar topic; and, conversely, that the books on a similar topic that you may be looking for are to be found nearby (and not in another part of the Library altogether). But numbers are one-dimensional, and bookshelves have a limited possible geometric arrangement, and we all know that things are not always ideal.
If all this sounds too high-brow, then wander not about your local library, but instead around your local supermarket. You will probably need to, unless you can persuade the servants to do it for you. Now, we all know that what we want can be found if we look in the obvious places (the signs above the numbered rows are reasonably informative); but nothing is perfect. If I want an exotic brand of ginger marmalade, for example, do I look at the jam section (Americans call it jelly, by the way)? Or at the ginger aspect of the matter – which is in the exotic spices section (miles away)? If you are too shy simply to ask someone who works there (they know everything – unless they are new to the job, of course), you could find yourself wandering around hopelessly, as if in a labyrinth.
Now, in practice, we manage to shop reasonably sensibly, without retracing our steps all that much, though it takes some considerable mental effort to keep on top of things. I had a similar problem designing this book, with you, dear reader, in mind as the one I most need to please. You need to find things, and you expect consecutive paragraphs to be logically connected, but it does not always work out as we should all want. Just bear with me.
Now, you think that the prose has enough problems as it is; but take it from me, the complications are only just starting …
1.3.8 Neachy is peachy, but Froyd is enjoyed
I shall now talk a bit about the semantics of metaphor. Why? Because we have to understand conceptual engineering, as it is sometimes called: namely, the deliberate modification of existing concepts so that they fulfil a purpose that we have laid out in advance, but which cannot be fulfilled by existing concepts. This can sound impossible in principle, and the problem goes back to the Greeks, who suspected that new ideas just cannot be taught (you need to know already how to handle the new instructions).
I shall, in good time, explain how you can fashion new ideas from old in a systematic way of which engineers would approve, but in the meantime simply note that the most immediate way of explaining a new idea is by using metaphor to extend an old idea into a new domain. I think that pretty well all abstract concepts started out that way (witness abstract itself). Look at any piece of abstract prose and ask what each complicated word actually means. If you cannot easily tell, consult a good dictionary that gives you its etymology – i.e., its origin (from old Anglo-Saxon, Latin, Greek, Sanskrit, or whatever). Then ask what the word (or sentence containing it) would really, really mean if we ignored the metaphor and simply read things literally. The effects can be startling.
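To work through the example just flagged: ‘abstract’ descends from the Latin abstrahere, ‘to drag away’ (abs-, away, plus trahere, to drag). Read literally, then, an abstract idea is one that has been dragged away from particular, concrete things; and the abstract of a learned paper is a handful of sentences dragged bodily out of it. The metaphor is stone dead, yet it is still doing all the work.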
The point is that nearly all abstract words are dead metaphors, that is to say, figures of speech that have been calcified. Decalcify the words, and the metaphors come alive again, with stunning (and sometimes very disturbing) consequences. Imagine an overly ripe cheese that would taste delicious if only it would keep still; for one cannot help but notice the minute wriggling (which becomes really disturbing when viewed under a microscope). Or consider Susanna Clarke’s fantasy novel, Jonathan Strange and Mr Norrell, which featured a magician who managed (from a distance) to persuade the stone statues at York Minster to come alive and talk about their feelings (and move around a bit) for a while (all whilst remaining stony). Just read the book if you want to know how it is done. If you want an author a bit less overpowering, read George Orwell’s ‘Politics and the English Language’ again, and note his strictures on stale clichés – phrases that become ludicrously inappropriate if taken literally.
You might also note some strange and disturbing things: for example, that ‘fact’ and ‘fiction’ both trace back to Latin verbs of making – ‘fact’ to facio, meaning ‘I do’ or ‘I make’, and ‘fiction’ to fingo, ‘I shape’ or ‘I fashion’. May we deduce that the fact/fiction distinction is fraudulent? Well, we might be accused of committing what is sometimes called the genetic fallacy if we do; but what’s that when it’s at home?
The continental philosopher whom we need to consider here – one whom Anglos used to fear as the ultimate irrationalist, though he is rapidly gaining in respectability – is Friedrich Nietzsche (1844–1900). His name rhymes with /teacher/, by the way. A remarkable man (and tortured soul), he became a professor at the University of Basel (in north-western Switzerland) at the astonishingly early age of 24. His subject was Philology, and we naturally want to know what that is (it literally means ‘love of words’). We now talk of historical linguistics, but essentially, philology is about the origins of words, about tracing their roots in other languages (that may no longer be extant). Trace the words back to their beginnings, and you uncover what they really, really mean, though this original meaning may be horribly disguised and corrupted by wild metaphor followed by stale usage.
This can revive and refresh our understanding of what we say and hear – and, in particular, what we read (for then, the actual words with their actual spelling are plainly visible). When confronted with a word with an abstract meaning, but the product of a dead metaphor, the original literal meanings come alive – often in a highly inappropriate and comical way. Our ideas can explode and create a range of images which generate yet new ideas, and so forth.
This can mislead as well as illuminate. Take, for example, the English word ‘nice’. It is as ordinary as a word can get, and is most often used to indicate a bland kind of goodness, something merely to be contrasted with ‘nasty’. But it was not ever thus. It earlier meant something more like ‘subtle’ (or even ‘devious’), and we still use the word in that sense when we talk of a ‘nice point in logic’, for example. Now, if I say (without sarcasm), ‘She is a thoroughly nice girl’, what do I mean? That she is thoroughly subtle to the point of being devious? Hardly: but if you are a philologist, you might find yourself thinking these thoughts.
Sarcasm and irony, if used regularly enough, can reverse the meaning of a word. The words ‘bad’, ‘wicked’ (and occasionally ‘evil’, as said of a sexy person) are a case in point: younger people, in particular, often use these words as terms of extreme praise. Interestingly, the word ‘nasty’, mentioned above, can only be used pejoratively, as far as I am aware. (I am not sure why, but no doubt there is a reason.)
Now, we might insist that sarcasm and irony are not major influences in what we say, but I am not so sure of this. I am likewise unconvinced that we can so easily dismiss such disorderly phenomena as merely ‘pragmatic’ rather than ‘semantic’ (to use a technical distinction that I shall explain later). They are just too widespread.
But to return to Nietzsche. The title of one of his most important works is usually translated as On Truth and Lies in an Extra-Moral Sense. Now, the title itself is a phrase that is likely to irk the analytical philosopher, for ‘truth’ is normally contrasted simply with ‘falsity’, and one can say something which is false without lying. One might just be honestly mistaken, after all, and issues about honesty and dishonesty belong to ethics, not to logic or the philosophy of language. But the German adjective ‘außermoralisch’ (literally ‘outside-moral’), which is translated here as ‘extra-moral’, is an invented word. Already, word-play is happening, and this illustrates the general theme of the work – which is that language and concepts are themselves fictions which cannot describe (or even misdescribe) the world. Analytical philosophers are apt to view these asseverations as essentially obscure, if not wilfully obscurantist, but this is to do Nietzsche an injustice. Unlike the hopelessly pedantic Kant (whom he frequently lampoons to great effect), Nietzsche is a veritable master of the German language. His problem is that his writings consist all too often of short sayings, or Denksprüche, and there is no real attempt to systematize – to bring together his many bubbling ideas to produce a coherent set of thoughts.
We talk now of the unconscious mind (ostensibly, a contradiction in terms), or the subconscious (or whatever), to refer to the half-formed thoughts and feelings that peeling away the layers of meaning can reveal. This leads us to another celebrated (but controversial) thinker: yet another Austrian Jew, but one who practised medicine in early 20th-century Vienna. This, of course, is Sigmund Freud (1856–1939), a doctor who trained originally in neurology – a fact that may come as a surprise to some of the more conservative members of the medical profession, in particular on the eastern side of the Atlantic, who are convinced that all psychological diseases are ultimately organic in nature and that psychoanalysis is a complete waste of time (at best) or an instrument of the devil (at worst).
His name is pronounced /froyd/, not /friood/, by the way – but of course you already knew that.
What you – and everybody else – also know, of course, is that a certain kind of misstatement can be called a ‘Freudian slip’, something which is supposed to reveal what the person in question is really thinking deep down; and this points to a certain image of the Mind as something which is largely hidden from view. The analogy of an iceberg, whose shimmering tip lies above a huge and deadly invisible mass of the kind that did for the RMS Titanic, is very often used here.
Freud is a controversial figure, and it is noticeable that you are (I think, even now) more likely to talk about him if you are a student of English literature than if you are a medical student. You might well wonder why the former should interest herself in clinical medicine at all, but there are reasons, and they relate to the nature of language, and the ‘layers of meaning’ that literary analysts often talk about, and which I alluded to above when talking about Nietzsche.
It is extraordinary to suppose that you are more likely to locate the seat of the unconscious by reading between the lines of a person’s book than you are by inspecting different parts of that person’s brain, but it is so. Humanities tutors tend to be more pro-Freud than are the scientists. Scientific scepticism arises from the suspicion that psychoanalysis is pseudo-scientific but, as we shall see later on, there is more than one reason behind this suspicion (philosophers of science worry about unfalsifiability; and psychiatrists tend to think that there are cheaper and more reliable varieties of talking therapy). But I shall say more on this later.
What I am saying now is that there are a number of good reasons to suppose that the nature of the mind cannot be completely revealed by a simple Cartesian inspection. We see much about ourselves by just looking inwards, to be sure, but not all; and what we do see can deceive us. This is important if you want to make the idea of telepathy and interpersonal thought-transference look like something that should not be instantly dismissed out of hand. (Cue for most readers to raise their collective eyes to heaven …)
1.4.1 Another word from our sponsor
You will recall (from 1.3.5) that, in The Chrysalids, the think-togethers are startled to discover that the narrator’s younger sister, Petra, can receive telepathic signals from far, far away; but that leads to the question of where this Promised Land is to be found, and of how to get to it. It also gives us many clues as to what the phenomenology of this extra-sensory perception is supposed to be like. First, it is directional inasmuch as Petra can point to the south-west and say that the voices come from over there. Secondly, it is unlike quantum entanglement inasmuch as it is sensitive to distance: Petra can tell that the source is far, far away because it is faint and indistinct. The signals are also sensitive to environment inasmuch as she can tell that you have to cross a large ocean to get to the source. More oddly, she can tell that her voices are trying to spell out the words that name where they are, so our unverbalized thoughts are not entirely unverbalized after all.
Now, this is not as paradoxical as it sounds. In this text, for example, I have teasingly avoided using proper names in favour of descriptions (for the most part), but you can often guess names from clues (you should have figured out who /hey-girl/ refers to, for example, but there are subtler hints as well). All this will sound absurd to those who insist that the mind is just the brain, and therefore is safely concealed (and isolated) within the skull, but the so-called Australian materialists (recall them from 1.2.8) could still be upstaged by our hero’s precocious little sister. She insists both that her voices are trying to spell out, letter by letter, a word that sounds like ‘S-E-A-L-A-N-D’, and also that the first letter is voiced (as phoneticists put it), and sounds like ‘ZZZ’, not ‘SSS’. This baffles the others (do not laugh; just remember how difficult it is to guess names when you play charades, for example).
Anyway, you, dear reader, can probably now guess where this Promised Land is to be found, with its elaborate city with strange vehicles flying above it. It is perhaps not quite the first large island over the rainbow that you thought of, but no matter. Remember that Petra was only very young, and that little sisters can be a bit troublesome.
1.4.2 A feature-placing language
But to return to serious mode. How are we to communicate our thoughts, telepathically or otherwise, if we can only rely on what philosophers call qualitative features of what is experienced? Ordinary language contains proper names whose reference to specific, particular things outside the mind is secured by mechanisms whose nature is highly obscure, as I have repeatedly tried to explain to you, and at some length. The idea of a feature-placing language, where all names are replaced by descriptions, and where the descriptions themselves do not contain any concealed proper name (or other form of non-descriptive reference), has been explored by a very influential 20th century philosopher whom I shall eventually name. But for the moment, and for tiresome reasons of my own, I shall simply refer to him as ‘the feature placer’. The title of this section is also the name of a section in the second part of his highly influential book, first published in 1959, which I shall also not name. Chapters 2 and 4 have the mysterious titles, ‘Sounds’ and ‘Monads’, respectively.
I shall explain to you what a monad is in due course, but you presumably know what a sound is, even if you are congenitally deaf. But suppose that you have the equal and opposite problem. That is to say, suppose that you are blind and paralysed, and therefore have to rely entirely on sounds to know what is going on; then you will know that the location of everyday things and events can be a problem (just to make life even more difficult, let us make you deaf in one ear). Spatial coordinates do not force themselves onto your attention, as happens with vision, for example.
Now, the classic problem here is: how do we reidentify sounds? You, in particular, who cannot rely on picking out individual physical objects which make the sounds in question? Suppose you hear some continuously developing melody, for example, which then fades out and then reemerges. Is it literally the same sound? Or just an identical twin (two separate recordings of the same piece of music, for example)? And can we make sense of the contrast at all? A technical distinction philosophers use here is that between qualitative and numerical identity. For example, two electrons may be wholly indistinguishable, and are therefore said to be qualitatively identical. But they are not numerically identical, otherwise they would not be two electrons: they would have to be only one. The latter sort of identity is sometimes called strict identity, and the philosopher we mostly associate with this notion is one who actually refused to accept the qualitative/numerical distinction in the first place (bizarrely enough). It was also he who talked about monads (which are sort of tiny point-souls, dimensionless, immaterial particle-oids which are distinguished from each other because each mirrors the entire universe from its own unique point of view). Well, fancy that, I hear you say.
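In the symbols favoured by logicians (a standard formulation, not anything exotic of mine, with ‘F’ ranging over qualities and ‘↔’ read as ‘if and only if’):

    ∀F(Fx ↔ Fy) → x = y    (the Identity of Indiscernibles: agree in every quality, and you are one thing, not two)
    x = y → ∀F(Fx ↔ Fy)    (its uncontroversial converse: one thing cannot differ from itself)

Our two electrons look like a counterexample to the first principle: they agree in every quality, yet are two. Refuse, as our philosopher did, to allow any such counterexample, and qualitative identity collapses into numerical identity.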
This same philosopher (that word ‘same’ again!) is also the one who compared the brain to a mill into which one may enter – he has us enlarge the machine rather than miniaturize the visitor – to search, vainly, for the human soul, the ultimate source of consciousness. Is he from the Antipodes then, you might ask wearily? No, I reply gently: he wrote in the 17th century. He couldn’t be called ‘Leibniz’ then, you ask in near disgust? Well, actually, he could.
Another 20th century philosopher, who is definitely different from the feature placer, is Jonathan Bennett (1930–2024), though they were both (among other things) very distinguished Kant scholars in their own right, each publishing a very influential commentary on Kant in 1966. I was strongly influenced by him in many areas of philosophy, including this one, and finally managed to meet him at a conference in Cambridge when he was quite old – though still with a razor-sharp mind. I was seated next to him at dinner in Hall, and listened to what he had to say. I confess that what struck me most about his voice was its astonishing resemblance to that of a celebrated children’s television character, a foxy glove puppet called Basil Brush (whose catch-phrase is ‘Boom! Boom!’). The feature placer (whom I also met, and on several occasions during my time at Oxford) did not sound like this, even though he too had an upper-class English accent. (That is one way in which you can be sure that we are not dealing with just one philosopher under two descriptions: Bennett and the feature placer are not as Hesperus and Phosphorus. I just mention that in passing.)
Relevance? Well, Bennett also talked about the feature placer’s auditory world, but made a slightly different use of it, namely as a (rather idiosyncratic) way of explaining what Kant could have meant by his famous (or infamous) doctrine called transcendental idealism (recall mention of this from 1.2.6), and how we can make an intelligible distinction between subjective and objective phenomena. There are some logically tricky issues here, discussion of which I shall postpone, but I shall make a few brief points. Suppose that you and I are in a forest and you comment on how quiet it is. There is not a sound to be heard, you say. Now, suppose that I can hear an almost deafeningly loud whistling sound in my left ear, the product of what doctors call ‘tinnitus’. Question: do I agree with you about the sound of silence here? You can see the point of the question. The whistling sound in my ear is just a subjective sound, but indistinguishable from the objective sound (which you would also be able to hear) which could occur should a wood-nymph (or whoever) decide to make such a racket on her panpipes.
The word ‘indistinguishable’ excites the epistemologist in all of us, and you can perhaps see where Bennett, the feature placer and others are going with this hypothetical, purely auditory world. Do we have a stable world of objective sounds, or just a maelstrom of subjective sounds? How do we tell? We have all heard the riddle which asks: if a tree falls in a forest and there is no one there to hear it, does it make a sound? The short answer, surely, is ‘yes’ if you mean an objective sound, but ‘no’ if you mean a subjective sound. However, you only have to think a bit more about this to see that it is still not entirely clear what you mean. A causal explanation of how objective sounds can give rise to their subjective cousins might help here, but some philosophers (about whom, more later) have disputed this.
Anyway, what I am getting at is that, if we consider again our original topic, namely the phenomenology of telepathic signalling and ask a few obvious-sounding questions about the nature and provenance of such signals, then we run into difficulties a bit sooner than we might have expected. The moral is to think twice before dismissing telepathic reports as obviously fraudulent (at best) and utterly confused (at worst).
I should also add that this whole debate about feature placing has a deep connection with our original topic, namely the theory of reference and the relationship between proper names and definite descriptions. I have mentioned recently the names of two early modern philosophical greats, namely Leibniz and Kant, and you might cynically fear this to be little more than opportunistic name-dropping, but you would be wrong. I shall later argue – and completely seriously – that a major debate between Russell and the feature placer (the latter earned his philosophical stripes by provoking a row with the former about the nature of reference, by the way) is essentially a debate between Leibniz and Kant, and that issues that appear to concern only metaphysics and epistemology also have considerable relevance to logic and the philosophy of language. But more of that later.
I can see that you still have your Fraud Squad hat on, and wish to know – yet again – whether I am being serious about telepathic signalling. The answer is ‘yes’, but I add that I am not going to claim to be able to bend spoons for real, for example, something which you had better not claim unless you want to be instantly expelled from the Magic Circle, as it is called. There is magic of sorts here, to be sure, but it is more in the style of Jonathan Creek rather than that of Harry Potter – to mention two fictional names from contemporary British popular culture (look them up, if you are unsure who they are).
1.4.3 The ghost in the machine
Why should it have ever been supposed in the first place that telepathic communication is impossible – or, at the very least, very peculiar? I have mentioned before the father of modern Western philosophy, but now it is time to give him a name: René Descartes (1596–1650). The surname is pronounced /day-cart/, by the way, and the adjective from it is ‘Cartesian’, not ‘Descartesian’, contrary to what you might think. You may have heard of ‘Cartesian dualism’ in philosophy, and ‘Cartesian coordinates’ in mathematics; both come from this same man. Now, the point is that, for various reasons I shall later explain, Descartes supposed that the mind was something essentially distinct from the body – indeed, from the entire physical realm. Consciousness is, or appears to be, something wholly alien to the purely mechanistic world of corpuscles (or atoms, as we now call them), and the advances in physics made by Descartes and Galileo (and later, Newton) arose because the soul-like anthropomorphism of mediaeval science was expelled from the ordinary world of space and time altogether.
Yet the question of how body and soul relate to each other remains. Descartes thought that the place where the soul (or mind) acts on the body is the pineal gland, as it is called, a small gland at the base of the brain. You may recall the name from the ravings of Fidgety Flora, whom we first met, unfortunately, in 1.1.9. Now, Descartes’s hope was that reality could be made sense of by bifurcating it into two non-overlapping regions: the mind, which concerns pure thought; and the physical realm, which is purely spatial. That way, the physicist and the psychologist could get along peacefully – and for the simple reason that they never actually got to meet in the first place.
The hope was soon dashed. The pineal gland is not an impartial junction box linking matter and mind, but something unambiguously material. Yet it is also acted upon by immaterial forces from the soul: for how else could the mind influence ordinary bodily behaviour? This is the central nightmare which underlies what philosophers refer to as the mind-body problem.
But why is it a nightmare? Well, consider this ordinary chain of events. When you perform an ordinary physical action (you punch me in the mouth, for example), your will (which is mental) somehow gets the brain to start firing up. It then sends electro-chemical signals down the efferent nervous system and causes various muscles to contract. This causes your arm (with fist attached on the end) to move sharply in the direction of my mouth. Look at the physical mechanisms, and you will see no signs of non-physical influences: it is just plain human physiology with no metaphysical mysteries involved.
Okay, we now have a thermonuclear war, and your body is completely vaporised. Cut to the Pearly Gates, where you (or rather, your soul) have some embarrassing questions to answer before you take your rightful place among the angels. Your prepared answer to the awkward question of just how you justify the punch in the mouth (‘it wasn’t me, mate, it was my body – especially the fist – what done it’) doesn’t seem to be going down too well with the local magistrate (St Peter, to you), however; and this is unsurprising when you come to think about it. Never mind how, it was clearly your immortal soul which initiated the sequence of events which led to my getting my jaw broken. And it is your immortal soul which is St Peter’s object of interest.
You don’t believe me? Consider this. We may doubt whether there is life after death, but you have to be singularly perverse to suppose that there was no life before it. Your immortal soul was up and about long before you left this mortal coil. What was it doing then? Answer: causing your body – in particular, your brain (or pineal gland, if you prefer) to do things that cannot be fully explained by the laws of physics. No other answer is possible.
Yet this is palpably absurd. If I were to inform you that there was a lot of poltergeist activity going on in the room that you are in, and at this very moment, then you would look for obvious signs of it (chairs getting thrown around the room, and so forth). On finding none, you would just flatly declare me to be crazy. If I were to persist, and say that the spooky goings-on in question are not immediately visible because they are between your ears, and that it is the price you have to pay for being genuinely conscious – for not being what philosophers call a ‘zombie’ – then you might see fit to inform me that I am just digging myself in deeper. You need neither poltergeists nor zombies in your world, thank you very much.
Looked at in this way, the Cartesian picture is indeed absurd, even though the view that a man has a soul in his body is very old and widely held. The philosopher whom we mostly associate with its debunking is Gilbert Ryle (1900–1976), successor to Collingwood (discussed in 1.3.2) in the Waynflete Chair of Metaphysical Philosophy at Oxford. He was a major influence in Oxford after the war, and the BPhil degree he invented was widely used as a training ground for nearly all Philosophy appointments in the UK (and for many overseas as well). He knew Russell and Wittgenstein and was influenced by them both (the former in particular), but did not get on particularly well with either of them (though that was not Ryle’s fault – he was a very sociable man).
Ryle’s most famous contribution to popular culture is his description of a Cartesian embodied mind as a ‘ghost in the machine’, a phrase that has been borrowed by many. On Ryle’s view, the mind is not anything like a ghost, and a human body is not anything like a machine. More to the point, the whole picture of mind and body as things that can seriously be compared in the first place is rejected as conceptually muddled. He talks of a ‘category mistake’, a kind of logico-grammatical error of the sort exploited in the figure of speech called zeugma (the standard illustration of which is ‘She came home in a flood of tears and a sedan chair’, though Ryle uses different examples).
Possibly more than anyone (even Wittgenstein and Austin), he made explicit the idea (which enraged Russell) that philosophical error is not a species of ignorance, but rather a species of confusion. We tend to get conceptually muddled, and philosophy is about getting rid of the muddles – not increasing our scientific (or quasi-scientific) knowledge. His magnum opus is The Concept of Mind (1949), a strange book written in an almost aggressively plain style, without footnotes, endnotes or a bibliography (this would not be tolerated nowadays). The initial, negative attack on Descartes is full of epigrams, and is a magnificent piece of polemic. However, the bulk of the book consists of a development of Ryle’s own positive alternative account of the mind, and is generally regarded as being wholly unbelievable, if not totally insane.
This allegedly insane view, generally called ‘behaviourism’ (or ‘philosophical’/‘linguistic’/‘analytical’ behaviourism, to distinguish it from the methodological doctrine which psychologists talk about), is that the mind is not a hidden, internal cause of one’s behaviour; rather, it just is the behaviour. There is nothing more to mental activity than what is directly observable. I should add that Ryle himself disavowed the label ‘behaviourist’, and The Concept of Mind is a rather subtle, elusive book despite its rude clarity of style, more concerned with explaining what philosophizing in general should be about, than with focusing on any specific problem-area. It was also written in a hurry, as the author himself later admitted.
A more precise version of behaviourism is linguistic in character. It says that mental vocabulary is reducible to behavioural vocabulary, in the sense that claims formulated using the former can be translated precisely and without loss of content into claims formulated using the latter. To put it more simply, mental talk is just a convenient shorthand for behavioural talk. This talk will probably include counterfactuals, with (for example) claims about what you would have said if only the boss had not been present. This caveat does much to get us past the obvious objection to behaviourism, namely that we can (and usually do) keep our thoughts to ourselves.
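A toy illustration (mine, and far cruder than anything a serious behaviourist would sign off on):

    ‘Smith believes that it is raining’

becomes something like

    ‘Smith carries an umbrella when he goes out; says “yes” if asked whether it is raining; and would have closed the office windows, had the boss not been standing in front of them.’

Notice that the last clause is a counterfactual of precisely the kind just mentioned: it covers what Smith would have done in circumstances that did not actually obtain.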
However, it is unclear that Ryle ever endorsed such a precise view (though there are hints in that direction). Indeed, Ryle’s actual positive alternative to Cartesianism is left seriously undeveloped, and perhaps deliberately so. It is what philosophers of science were later to call a ‘paradigm shift’ that is being expressed here, a different way of looking at things.
1.4.4 Poltergeists, zombies and the autistic
Ryle’s ideas can still seem utterly perverse, but Cartesianism yields an even more perverse result, and that is what is usually called the problem of other minds. We have already noted that, on the Cartesian view, the malicious demon could deceive us as to the existence of ordinary objects around us (they could just be hallucinations), and this is bad enough. But it gets worse; for even if I could be sure of the existence of your body (including its pineal gland), I have a further problem, namely of knowing that this pineal gland of yours is connected to an invisible soul. Could it not be that you are actually not conscious at all, but merely acting as if you were?
The hypothesis is certainly bizarre. Thus, suppose I were to inform my seminar group (as I have been known to do in the past) that one of the students present is as thick as the proverbial two short planks. There is absolutely no intelligence to be found within her at all, indeed no mental activity whatsoever. The response, let’s face it, is likely to be one of nervous apprehension: am I going to say precisely who this numbskull is, and risk another painful meeting with a Californian on some sort of disciplinary charge? But no: I add quickly that this person passes all her exams with flying colours, is a brilliant and witty conversationalist, a delightful person to be with, and one who also writes superb violin concertos in her spare time, as it happens. She behaves intelligently, but has no genuine intelligence. It is ‘all silent and dark within’, as they say. She is, in other words, a zombie (in this somewhat technical, philosophical sense of the word).
Now, the question is: can we make any real sense of this hypothesis? We are asking, note, not whether it could be true in the sense of an epistemic possibility, but whether it is a metaphysical possibility (recall the distinction introduced in 1.2.3).
Now, Ryle would not have understood this distinction (I think) – or not easily. And he would certainly have dismissed the whole scenario of intelligent behaviour without genuine intelligence as involving a serious conceptual muddle. We can see his point. But subjective consciousness does seem to be a very mysterious phenomenon, something wholly unlike anything to be found in physics or the physical realm, and this gives us a dilemma. Descartes-versus-Ryle yields a conflict that is central to contemporary analytical philosophy.
I think that expert opinion is about evenly divided. A leading exponent of the anti-zombie camp, himself a former student of Ryle’s, is the American philosopher Daniel C. Dennett (1942–2024), who labels himself a physicalist. For him, there is nothing over and above the apparatus of physics, but physical things can exhibit considerable complexity and this is enough to explain what consciousness really is. In the opposite corner of the ring is a leading exponent of the pro-zombie camp, the still-living Australian philosopher David Chalmers – who insists that consciousness does not ‘supervene’ on the physical (in a sense which can be made precise), and that the ultimate laws of nature are not all purely physical. There are many other distinguished players as well, but the Dennett-Chalmers debate has become central.
The debate is rich, and impacts on many areas of philosophy. Dennett draws much of his inspiration from artificial intelligence, from the thought that machines just might one day have minds of their own in a non-metaphorical sense. Chalmers, however, is a formidable mathematical logician in his own right, and examines carefully what sort of possibility we are dealing with when we ask whether zombies are possible. The relevance of modal metaphysics to issues about telepathy thus becomes more apparent.
Anyway, as far as I know, Ryle never worried about whether telepathy is possible; but it is intriguing to ask how he would have dealt with the matter if he had. The point is that telepathy ceases to be a big deal if our ‘private’ conscious states and processes were never private in the first place. If the problem of other minds, so called, has been consigned to the dustbin of Cartesian history, then with it goes the problem of telepathic communication. It just becomes ordinary communication. True, Ryle might raise an eyebrow at the idea that one could telepathically transmit an idea across the Pacific Ocean, but in an ordinary room full of people all talking nineteen to the dozen, it can be hard to isolate a specific causal chain through the airwaves connecting a particular speaker and a particular hearer. There are some quite general problems about causation, in particular, causal transitivity, that are relevant here which we shall later explore. At any rate, note that the problem of finding a definite causal route going directly from mind to mind is not as easily articulated as you might think.
And, it might be noted, the Cartesian dualist, on the opposite side of the debate, is not exempt from this sort of seduction. If you think that the soul can exert a non-physical influence over physical things, then you are in no position to pooh-pooh the suggestion that it can exert such a quasi-magical force over another soul directly. Although not a Cartesian dualist about ‘substances’, as philosophers call them (i.e., independently existing individual entities), Chalmers is a dualist about properties, and this should be enough to ensure that irreducible (and inter-personal) psycho-psychical laws remain a possibility that cannot be dismissed out of hand.
All this needs to be developed more carefully, of course, and it will be. In the meantime, we need to look more closely at how we do figure out what goes on in other people’s minds, and it is helpful to start with those people who cannot do this, or who can only do it with great difficulty.
Autism is a spectrum disorder (i.e., it comes in degrees) that affects a significant proportion of the human race, with more men and boys than women and girls affected. At the low end, it is utterly debilitating. At the high end, it can be associated with a remarkable cognitive ability, where the neurodivergence (as it is politely called) is compensated for in various ingenious ways. It goes under various names, such as Asperger’s syndrome, but I shall use the current favourite terminology, namely autistic spectrum disorder, or ASD.
The general idea here has taken hold in popular culture, and people often say (usually in an unfriendly way) that some rather gauche individual with limited social skills is ‘on a spectrum’. We all know what is meant, even though we may not be able to diagnose ASD to a professional standard.
What is distinctive about people with this condition is that they (usually) lack what is called a ‘theory of mind’. It is not quite that they do not realize that other people can think and feel; it is just that they cannot know this directly, i.e., without inference. Dennett suggested the test paradigm that psychologists later developed into what is now known as the ‘Sally-Anne test’, which is routinely used to measure cognitive development in young children and at which people with ASD tend to do very badly. It is about predicting other people’s behaviour, and involves the correct attribution of beliefs to others. Some high-functioning ASD subjects have reported a feeling of amazement at how other people seem able to ‘read’ each other’s minds in a direct, almost telepathic manner, when they (the subjects) have to make do with laborious inferences.
Well, how do ‘normal’ people know what goes on in other people’s minds? There are two main theories, or groups of theories: ‘theory theory’, as it is called; and ‘simulation (or co-cognition) theories’, as they are called. We mentioned the latter in 1.3.2 when discussing Collingwood’s notion of ‘re-enactment’. The former treats children as little scientists: they work out a theory of mind by making explanatory hypotheses involving (hidden) beliefs and desires in others, and testing them experimentally. It is a bit clunky, but it sort-of resembles the way in which scientific theorists posit unobservable entities like electrons and genes to explain observed phenomena, and it appears to be scientifically respectable. A simulation theorist, by contrast, approaches her target more directly. She imagines herself to have the hypothesized beliefs and desires and asks how she herself would act, and then compares that with the observed behaviour to be explained.
The main difference between this sort of simulation (or co-cognition) and Collingwood’s historical re-enactment seems to be that the latter applies only to thoughts, whereas the former includes desires and feelings as well. Nevertheless, there clearly are parallels here which, as far as I know, have not been explored in any detail. We are evidently dealing with the borderline region between the anthropomorphic and the ‘nomological’ (i.e., lawlike) models of explanation. The point is that people with ASD have great difficulty in engaging with this sort of empathizing.
Now, imagine (if you will) a nearby possible world where ASD is the norm, but where a few unusual children who lack ASD have recently been born. They will be perceived as unusual, with strange, almost telepathic powers. The relevance of ASD to telepathy is thus made very clear.
Now, imagine further (if you will) that, in this world, as a result of some sort of cerebral or genetic engineering, some mothers give birth to anti-autistic, or super-empathic, children, as we may describe them. They are to us as we are to ASD sufferers. They have an excess where the others have a deficit. Can we make sense of this idea? Unless and until we understand the neurological mechanisms underpinning these conditions in a certain, particular way, we cannot answer with a confident negative. Moreover, solidarity is strength, and such people can confidently be predicted to become mentally very powerful indeed, possibly posing a serious threat to our own superiority and domination over other species on the planet. Yet these super-children need not be powered by black magic. Their brains are connected with each other in quite ordinary ways.
1.4.5 The Midwich Cuckoos
But would such super-humans resemble the telepaths that we speculated about when we examined the characters in The Chrysalids? It is hard to say, since the think-togethers kept themselves secret, and so were never studied by normal human beings – who could then have reported on their reactions to those powers in the sort of depth and detail that we need.
However, Wyndham’s masterpiece has an evil twin, the much more famous The Midwich Cuckoos (1957), perhaps his darkest fantasy. This depicts telepathic Children (with a capital ‘C’) that need to be studied by ordinary human beings – and very intensively indeed, both by British government scientists (in secret, naturally), and by the novel’s hero, one Gordon Zellaby (an elderly retired academic and writer – whose areas of expertise are never made particularly clear).
The novel’s title refers to the demure and rather isolated little English village where all the action takes place, and to the fact that cuckoos are an example of what are known, in biology, as ‘brood parasites’. The book was made into a film called Village of the Damned in 1960, which is possibly the most frightening horror film ever made – well up with The Exorcist, even though it was a low-budget film made in black and white, and with only very limited special effects. It had an American remake in 1995 which was less frightening but had much better special effects. It (the remake) included, in its publicity advertisements, the directive ‘Beware the Stare!’, and the idea has now passed into mainstream culture, even though its origins in the Wyndham novel (which is truly excellent) are largely forgotten. The plot is as follows:
The narrator and his wife return home from a celebratory visit to London to find an army checkpoint preventing them from entering Midwich, where they live. He is told, by the soldier in charge, that ‘nobody can enter or leave Midwich, sir, and that’s a fact!’ With some asperity, the narrator naturally asks the soldier why, and he is told – ominously – that ‘that is what they are trying to find out, sir’. It transpires that all living creatures within a hemispherical region around the village instantly fell unconscious the previous evening, and their would-be rescuers from outside likewise succumb instantly as soon as they enter the affected zone.
The military are deeply worried, and there is a total ban on press reporting. This ‘Dayout’, as the villagers later call it, lasts for exactly 24 hours, and then ends as mysteriously as it began. Zellaby, a friend of the narrator, is one of the villagers, and he too suddenly fell unconscious (and woke up on the floor, as bewildered as everyone else there).
Exactly nine months later, all the women and girls of child-bearing age in Midwich give birth to extraordinary, golden-haired and golden-eyed babies, all apparently identical in appearance to the last detail. These children turn out, in due course, to have terrifying powers.
To begin with, they just seem odd, though remarkably fast-developing. However, Zellaby discovers an extraordinary fact: when one of the boys is taught something, all the other boys immediately know it also, but the girls do not; and vice versa. He duly reports this to the Government scientists based at the Grange (a disused monastery in the village) who are keeping a close but discreet watch on developments. They (the scientists) apparently perform some tests of their own, refuse to reveal their results, but develop an increasingly gloomy demeanour (they are known to the ordinary villagers as ‘the Nosies’, by the way, and are not much liked).
It is clear that the Children (with a capital ‘C’) are the product of xenogenesis, or implantation that must have taken place somehow during the Dayout. But, not only do they ‘think together’, like the chrysalids, they can both read and control the minds of ordinary people (and animals), especially as they grow older (which they do at astonishing speed: after just nine years, during which time the narrator temporarily leaves the country for Canada, they now look as if they are 16 years old).
The villagers are outraged at this threat (there had earlier been some ugly incidents), but when a group of them attempt to lynch the Children (ordinary persuasion having proved useless), they are ‘willed’ into fighting and killing each other instead. A blood feud is clearly imminent.
When the Chief Constable of the county interviews one of the Children as part of the police investigation into these deaths, he is outraged by the Child’s complete indifference to the destruction that the Children have caused, together with his (the Child’s) simple assertion that ‘we simply intend to survive, and that is all’. On attempting to intimidate the boy, the Chief Constable is instead just stared at fixedly, and is reduced to a gibbering wreck, never to recover.
Zellaby, who (along with the narrator) witnesses all this, and despite having originally befriended and taught the Children, now becomes seriously alarmed. He finally realizes something that he had hitherto subconsciously suppressed: it is ‘them or us’.
We learn from a friend of the narrator, a member of British Military Intelligence who has been keeping a discreet eye on the situation from the beginning, and is now staying with Zellaby as a guest, that his superiors have concluded ‘with the help of a bit more evidence than was available to Mr Zellaby’, that the Children definitely have an extraterrestrial origin. The ‘extra evidence’ is that there are (or, rather, were) other groups of Children elsewhere in the world, and that there was a lot of UFO activity around the planet at the time of the Dayout. However, the last surviving group (outside Midwich) was in the small town of Gizhinsk, in the far north-east of the Soviet Union. The entire town, with absolutely all of its inhabitants, had been destroyed by a medium-range nuclear weapon launched from outside the zone of influence of the town’s Children (there was, of course, no way of evacuating the ordinary civilians without the Children finding out).
Officially, this nuclear blast is just an ‘accident’, but the Soviet authorities send out, by subtle and indirect means, to all the heads of government of all the countries of the world, copies of an extraordinary letter. This letter, after depicting the Children exactly, describes them as presenting a ‘racial danger’ of the most urgent kind, and insists that all known such groups be destroyed immediately. This, it must be remembered, was in the 1950s, at the height of the Cold War; but the letter insists repeatedly, and even ‘with a touch of pleading’, that this extermination must be carried out – not for the sake of countries (or continents or ideologies) – but rather because the Children present a biological threat to the very human race itself.
Zellaby realizes that he must act. He tricks the Children into attending a private film show in an isolated building (which he supervises), but the box supposedly containing the technical equipment in fact contains dynamite connected to a ticking clock. With great difficulty, he conceals this fact from the probing minds of the Children (who have become suspicious) until the timer reaches zero. Zellaby and the Children are thus all killed in what we would now call a ‘suicide bombing’. The remaining villagers are spared.
The novel ends with a tearful Mrs Zellaby (a much younger woman who had already been expecting a child at the time of the Dayout, and so had been spared artificial impregnation) reading a letter from our hero informing her that he had, in any event, only a few months to live. The letter (and the book) ends with the curt observation that ‘when in the jungle, one must do as the jungle does’.
1.4.6 Some textual analysis
There are, of course, many layers of meaning in the text, layers which introduce doubts and qualifications, and which also provoke new and unexpected questions.
To begin with, the ethical situation in Midwich at the time in question is not all that straightforward, however it might have appeared to the ordinary villagers. For example, Christian religious authority is of little help, for it is unclear why God should favour the human race over the cuckoos. (As Zellaby reminds the local vicar, the Christian God is ‘God on all suns and all planets’). And can it possibly be right to eliminate a whole intelligent species, especially one visibly superior to our own? Should we not instead gracefully concede victory to the better rival for the earth’s very limited natural resources? And is not the law of the jungle just code for a pre-rational selfishness, something of no moral worth (or even of negative moral worth)?
There is also the question of what the text is ‘really’ about. Science fiction can be as ambiguous as any other literary genre. And 1957 saw the beginnings of rock and roll music, and of an independently thinking postwar generation less than wholly grateful for the wartime sacrifices of their elders and (supposed) betters. The late 1940s and early 1950s saw the birth of children who were prepared to challenge adult authority in a very new sort of way. Behind the facile, unthinking complacency of what was then called ‘traditional morality’ (something taken for granted by the older generation), there was a genuine fear of the unknown, a fear of losing control over the situation – never a pleasant sensation.
This has its echoes in the text (and in the films inspired by it), notably of the visceral terror provoked by the Children’s collective determination to get their own way. The sensation of having one’s own will overridden by an exterior force is just a reflection, perhaps, of the primordial ‘fight-or-flight’ instinct that one must feel when confronted with an alien species clearly bent on a hostile takeover (and regardless of their superficial golden-eyed beauty). This is more biology than psychology, and it gets to the root of our identity more directly than can the more sophisticated writings of most moral philosophers, either contemporary or from the past.
Can we find a solution to these dilemmas, one which satisfies both reason and sentiment (two human faculties that often find themselves in conflict)? It may be that we cannot, that the situation is ultimately tragic in a deep sense. And tragedy was defined as ‘the conflict of right and right’ by the philosopher whom I have previously referred to, somewhat irreverently, as /hey-girl/, but will from now on refer to by his real name, Georg Wilhelm Friedrich Hegel (1770–1831). The significance of tragedy was also later commented on by another celebrated German philosopher, Nietzsche, about whom we have already said a good deal.
What these two thinkers would have made of Wyndham, I cannot say. However, although it may be tempting to suppose that it is the so-called continental philosophers who will ultimately make more sense here than the more optimistic (if ultimately superficial) Anglos, a note of caution should be sounded. The earlier novel, The Chrysalids, is as optimistic as its successor is pessimistic, even though the underlying phenomenon, telepathy, is the same (or fairly similar); and its theme could have come straight out of the Age of Reason.
What happens? The plot of The Chrysalids continues, and not without many adventures, until the think-togethers are a few years older and finally get to meet a beautiful but unnamed older woman from (what I have been calling) the Promised Land (i.e., New Zealand), who has managed to travel to Labrador using some very rare and expensive fuel. Most of the original group then travel back with her south-west to their final destination, and the novel ends with Petra ‘saying’ (very loudly) how exciting it is to see the city and ‘hear’ its many inhabitants.
The whole story is a delight to read, despite its many sad moments, and has been described as a ‘coming-of-age novel’, because the plot revolves largely around the relationship between our narrator-hero and his older half-cousin (and, later, love-interest), Rosalind. The arrival of the unnamed, even older woman from New Zealand (who makes even the elegant and self-confident Rosalind look rather gauche by comparison) adds a further layer of tension. However, the unnamed woman is deeply sceptical about the idea that all species-evolution aims at some sort of ultimate ideal, a (perhaps) divinely ordained super-species that Hegel might have dubbed the ‘Absolute Idea’, and whose members Nietzsche might have described as ‘over-men’.
We can quibble over details about how to interpret these philosophers, of course, but the central facts remain clear. It is to the general advantage of a group if they are able to think together, and not atomize into separate individuals all shouting incoherently at each other.
1.4.7 The principles of politics
Of course, even this can be disputed; and one obvious problem concerns privacy. Do you really want your so-called ‘peers’ to be able (literally) to peer into your innermost thoughts and feelings automatically and all the time? And, of course, another, much less attractive model of collective thinking is that associated with the Borg from the Star Trek franchise (about which, much more later). They are a highly aggressive, centrally controlled army of cyberdrones who wish to absorb every creature in the universe. Their war cry – ‘You will be assimilated!’ – strikes terror into the hearts of all who come near them. Totalitarianism may be better than (what is sometimes called) ‘a state of nature’, but not very much so.
At a more modest level, you will find people who avoid all social media like the plague, fearing all manner of terrible things if they contribute to Facebook and the like. Some won’t even have email or a telephone in the house. However, these fears should clearly be taken with a generous pinch of salt.
There are many issues here, of course, but political philosophy is older even than Aristotle (c. 384–322 BCE), who seems to have coined the word ‘politics’ (‘politika’) in its current sense, and who has been regarded as central to the Queen of the Sciences ever since. Indeed, Plato (c. 427–348 BCE) – Aristotle’s teacher – wrote his Republic (which is often regarded as his most important work) in such a way as to imply that the best way to understand an individual soul (and what is best for it) is to consider how best to organize such souls into a good society. It is, perhaps, not obvious that the tensions between our individual psychological faculties (the will versus the intellect, for example) are best explored by looking at how best to organize a society consisting of many individuals. But the issue of individual versus social is a staple of philosophy, one which affects many central problem areas. So we shall continue to explore it here.
In this Overture, I shall not attempt to give a comprehensive précis of all the topics in political and social philosophy that I shall later consider (still less, all those that could usefully be considered in a more comprehensive – but, inevitably, more narrowly focused – treatise). Instead, I just want to introduce a few central themes; and, as far as the individual–social distinction is concerned, the most important is the notion of an expressing game, something which I contrast with a more orthodox reporting game. This contrast concerns our thoughts and feelings only indirectly. More immediately, it concerns their linguistic manifestations, and it involves manipulating (some say, mangling) a central idea of Wittgenstein’s, namely that of a language game. Let me explain.
Our speech is something public and can be examined by all, unlike individual thoughts and feelings which do not seem to be (though, it will be recalled, both Ryle and Wittgenstein himself strongly opposed this idea). Whereas Locke thought that the meaning of a word was the idea which we associate with it, Wittgenstein gave us the slogan, ‘Meaning is Use’: if you know how to use a word correctly, then you know what it means; there is nothing else for you to know. Conversely, if you do not know how to use a word correctly, then you do not know what you mean by it: you are just mouthing the word inanely. No mention of ‘ideas’, you will notice.
Now, this slogan (and its interpretation) can be made to sound obviously true, even trite; but we shall see, in due course, that it is not. It relates to what has come to be known as the ‘private language argument’, an enormously influential and much discussed staple in contemporary, analytical philosophy of language. This argues, very roughly, that there can be no such thing as a private language, i.e., something that only one individual person is capable of understanding. It has a major impact on studies about consciousness and the nature of mental activity. A private language is not unlike what (in 1.2.5) we called a ‘language of thought’ (or LOT), though the precise connection is highly controversial.
What I do want to do here and now – and this explains the connection with political philosophy (the official topic of this section) – is to discuss a certain kind of social interaction, namely a dialogue between two individuals, each of whom is real and situated in a given social and political context. It will prove to be central in explaining what is meant by making a ‘value judgment’, and how such judgments are supposed to relate to descriptive statements of fact; it thus concerns a major topic in our Critique the Second, about which we have so far said surprisingly little. But let us get on with it, without any further ado.
1.4.8 Expressing games with sartorial elegance
I’ll explain the bit about sartorial elegance in a moment, but in the meantime, just bear with me, and let us continue with the philosophy of language. Suppose we have two people A and B in a conversation. The dialogue D1 reads:
D1
A: I believe that the earth is flat
B: That’s not true!
Now, there is evidently a conflict of opinion here of some kind. A and B cannot both be right. But exactly what is B denying? There are two possibilities here, depending on which language game is being played. If they are playing (what I call) the reporting game, then B is simply denying what A is literally saying, namely that he has this particular belief. In other words, B is saying that A is being insincere, or (if honest) is seriously lacking in self-knowledge: he holds no such opinion. To put it even more directly, A does not believe that the earth is flat. A’s words should simply be prefixed with ‘It is not the case that’, what in logic is called the negation operator (about which, more later).
On the other hand, and perhaps more naturally, they might be playing (what I call) the expressing game; in which case, B is not talking about A’s psychological state at all, but about whatever it is that A is talking about. In short, B is expressing a contradictory belief: the earth is not flat. Again, the negation operator is being applied, but the target sentence has its prefix ‘I believe that’ removed first.
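For readers who like to see the logical forms displayed, here is a minimal sketch (the subscripted belief-operator notation is mine, purely for illustration, and carries no theoretical weight). Let $p$ be the proposition that the earth is flat, and let $B_A(p)$ abbreviate ‘A believes that $p$’:
\[
\text{reporting game:}\quad B \text{ asserts } \neg B_A(p)
\]
\[
\text{expressing game:}\quad B \text{ asserts } \neg p
\]
In the first case, the negation operator attaches outside the belief-prefix, leaving the prefix intact; in the second, the prefix is stripped away first, and the negation attaches to the bare content.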
Now, I don’t think that there is anything wrong with either game, though it is obviously useful if both A and B can agree in advance about the rules of engagement; and what they are will usually depend on social context. Thus, if B is a geographer, then she will probably go for the second option; but if she is a psychologist, she will probably go for the first.
This might become clearer if we put a bit more flesh on the bones, and situate A and B in a more explicit social environment. Thus, if B is A’s psychotherapist, then she will be naturally disposed to view A, lying as he is on the couch in her consulting room, in a clinical sort of way; and respond to his strange utterances accordingly. On the other hand, if B is A’s personal geographer, and they are both sailing fast (in a westerly direction) towards the horizon at sunset, a rather different scenario is suggested. It is the shape of the earth, and not the shape of A’s mind, that B is being consulted about.
It might be thought that expressing games are more natural, and more common, than reporting games, and D1 was set up largely to suggest that. But note that most English declarative sentences do not start off with the words ‘I believe that’ at all. Things become slightly odd if we consider a different sort of mental attitude. Thus, replace ‘I believe that’ with ‘I hope that’, and the expressing dialogue also starts to sound odd, for remember that it is not just B’s beliefs but her own personal hopes that are now being called into play. Odd and, perhaps, a bit intrusive – even impertinent. Replace a narrowly cognitive attitude with one which imports emotional and/or volitional elements, and matters become even more bizarre. Thus, variants of
I believe that
include the following:
I am absolutely certain that
I know that, (and also know for a fact that God agrees with me on this point, that)
I suspect, but am not sure, that
I wonder whether it is true that
I am now actively considering the idea that
I secretly believe that
I sincerely hope that
I fear that
I wish it were true that
I love the fact that it (actually) is true that
I think ‘Yippee!’ when I contemplate any possible world in which it is true that
I think ‘Eurghh!’ when I contemplate any possible world in which it is true that
I laugh because it is true that
I would laugh if it were true that
Now, B will most likely not feel willing to enter into the appropriate form of dialogue with many of these variants in place. It would involve her engaging in kinds of thoughts and feelings that she simply does not feel comfortable engaging in. Indeed, it is an axiom of counselling that the relationship between counsellor and counsellee be asymmetrical. With a reporting dialogue, however, there is no need for such professional restraint. This is because she can view A’s utterances in a completely detached sort of way – and then say whether he is really saying what he feels (or is just showing off).
Furthermore, expressing games seem to make no sense at all if the target sentence is simply not prefixed by any such qualifier. Thus consider a different dialogue D2
D2 (short version)
A: I am wearing a red shirt
B: That is not true. I am wearing a blue shirt.
When confronted with this sort of chatter, the natural response is that B must have misheard A. B’s opening word ‘that’ has just failed to hit its correct reference, and she clearly needs to be told to listen to other speakers more carefully. This sort of dispute is clearly going nowhere, and even Wittgenstein (who emphasizes the extraordinary diversity of our language games) would probably be brought up short if B were to persist in this sort of conversational strategy. We just cannot use words in this sort of way (we say, in horror)! I mean (you might sputter), how would this dialogue progress? How would it resolve itself in a principled and peaceful way? Well, let us continue it, and see.
D2 (long version)
A: I am wearing a red shirt
B: That’s not true. I am wearing a blue shirt.
A: Look, are you blind? My shirt is red!
B: I suppose you just think that you’re being fashionably smart, but you’re not. What you are saying is not clever, and it is not funny. You can see perfectly well that my shirt is blue. That’s spelt B-L-U-E, by the way – in case they never taught you that at school.
Okay, so the embellishments are gratuitous, but you can see my point. This is a debate which cannot be resolved, except perhaps by brute force. Rationality will fail us, and this so-called expressing game will turn out to be nothing more than an irritable shouting match (at best).
It is also quite visibly silly, since there is plainly no real locus of disagreement. The simple reality is that A is wearing a red shirt and B is wearing a blue shirt. That is a perfectly consistent set of facts, and one which nobody need dispute. (Of course, A may feel that B ought not to be wearing a blue shirt, but that is an entirely different matter; for what is now being expressed is not the shirt’s colour but his attitude towards it. We have an entirely different dialogue.)
No, the reality is that sartorial facts (i.e., facts about what a given person is wearing at any given moment) can be reported, but not expressed, in my rather specialized sense. And a discipline (call it ‘sartorialics’) whose central debates follow rules such as those of D2 is clearly not going to progress very far.
At the risk of labouring the point, I should add that what makes sartorialics a pseudo-science (or worse) is not the ill temper that its debates typically provoke. Even if its disputants were impeccably mannered (and impressively open-minded), unforced (and principled) agreement is still unlikely to arise. Thus consider D3 (I borrow the acronym ‘IMHO’ from social media, by the way: it stands for ‘in my humble opinion’):
D3
A: IMHO, my shirt is red. My evidence for this is that I get red visual perceptions when I look at it.
B: Really, that’s fascinating to hear! However, IMHO, my shirt is actually blue. My evidence for this is that I get blue visual perceptions when I look at it.
A: Gosh, it’s amazing how interesting controversies can arise from such basic conflicting intuitions, and how difficult it can be to resolve them! Do you think that it might be worth organizing an international conference around it? Perhaps the Journal of Sartorialics would agree to publish the proceedings in a special issue! (Incidentally, my shirt is still red, and always has been. IMHO.)
B: Well, the journal you mention certainly has the money, and if you want a really successful conference, I agree that you had better look overseas. But, having said that, I think that the British Journal of Sartorial Theory might be a better bet. I know the commissioning editor quite well (by the way), and she is more open-minded to new and untested lines of inquiry than you-know-who, admirable though he is in other respects. Incidentally, not only is my shirt still (a very vibrant shade of) blue, but it always has been – and probably always will be. Time will tell, though, I dare say. IMHO.
Now, what we have here is conducted at a softer volume than the shouting match that we had in D2, but it is equally futile when it comes to reaching a principled consensus of opinion. We might call it a ‘moaning match’, instead. Much academic debate looks like that to the uninitiated.
Experts will have noted that, behind the chutzpah, I am alluding to the distinction between reporting and expressing an ‘attitude’ (which could be a thought or a feeling – or much else besides). This is philosophically perfectly respectable (to put it mildly) – I mention this for the benefit of non-experts. When applied to psychological attitudes (as opposed to language games), we know (or think we know) what we are talking about when we use this distinction. And what philosophers call ‘metaethics’ is not as visibly absurd as what I call ‘sartorialics’: we all worry about what we are doing when we make a so-called ‘value judgment’ (is the latter a kind of descriptive statement of fact, or something to be contrasted with it?).
You might still think that ethics could never be as stupid as sartorialics, however. If so, I suggest you read the admirably lucid prose of one J.L. Mackie (1917–1981), an Australian philosopher based at Oxford who was definitely mainstream – and is still highly influential in many areas of the subject. You might also come to learn what is meant by an ‘error theory’. The point is that it is not just I who talk about such things – believe it or not.
1.4.9 On assertion and assertiveness 1: Kant
A recurrent theme in my whole approach to Life, the Universe and Everything is that it is a fundamental error to fail to situate ourselves within our particular social and conversational environment when contemplating the world around us (or, indeed, within us). I have mentioned (approvingly) Hegel’s ‘subject–object distinction’, but it is important to see also how this error can be best understood from within the mainstream Anglo philosophical tradition – of which Hegel is definitely not a part.
So, firstly, back to Kant (as they say); for it is here that the analytical/continental schism began. Now, I mentioned briefly (in 1.2.6) Kant’s doctrine with the impressive (but thoroughly misleading) title of ‘transcendental idealism’. Some, such as Ralph C.S. Walker (a living philosopher and one of my former tutors at Oxford, whose 1978 book on Kant is well known to me – I corrected the galley proofs of it, so that is not surprising), regard it as the best part of Kantian epistemology. Others, such as the man whom I shall continue to refer to, irritatingly, as ‘the feature placer’, think that it is a disastrous falsehood (at best) or incoherent nonsense (at worst).
By the way, philosophers in Germany, as well as in the UK (and elsewhere), have also been divided on this ever since Kant himself first published his first Critique in 1781 (a second edition of which was published in 1787, largely in response to misunderstandings about what he really meant by transcendental idealism). (Do not, by the way, confuse Kant’s second edition of the first Critique with his second Critique. The latter is called the Critique of Practical Reason, is primarily about ethics, and was first published in 1788. I shall explain all this further later on in this book, so non-experts need not worry too much about all this at the moment.)
So, what do I myself think about transcendental idealism – henceforth, TI? My view is that it is a combination of one major insight and one disastrous error. It is therefore false (I think), but nearly a brilliant truth. Let me explain.
The major insight is what I shall call the ‘mind contribution thesis’ (briefly, MCT), and it is that our beliefs and judgments about the world around us are the product not only of external factors (light waves arriving at the retina, sound waves arriving at the eardrum, and so forth), but also of internal factors, notably the way in which the mind processes such input to produce high-level cognitive attitudes (i.e., beliefs and judgments). It can seem obvious: the notion of ‘processing’ comes from computing, and we all know that a microprocessor (which is a sort of central brain in the computer, as a techie member of staff from a computer shop once explained it to me) does, indeed, do this sort of thing and in a highly non-trivial way. Kant did not know about computers, of course, but he knew perfectly well (unlike Aristotle, who thought that it was the heart, and not the brain, that is primarily involved here – more on that later) that the physical brain was instrumental in turning sensory input into intelligible, self-conscious experience of a spatiotemporally unified world. Indeed, we all know that if you remove (or just severely damage) a person’s brain, then their beliefs will suffer, even if they continue to be bombarded with light and sound waves from outside.
Is the MCT then so obvious that it does not even need an acronym to itself? Well, no, if only for historical reasons. You will recall, from 1.4.2, that I introduced Jonathan Bennett (another distinguished Kant commentator). His own interpretation of Kant involves what he calls the ‘sensory-intellectual continuum’, a sliding scale in which sensory images (which the empiricists tend to like) appear at one end, and in which intellectual thoughts (which the rationalists tend to like) appear at the other. Recall (from 1.2.9) that the crucial notion of an idea (as understood by Locke, namely as whatever is immediately present to the mind) is somewhat vague, inasmuch as it could be an after-image or an abstract concept or much in between. This indicates a sophisticated continuum whose end-points are surprisingly far apart.
According to Bennett, this is just hopelessly wrong. The all-embracing notion of an ‘idea’ involves a ghastly confusion between the sensory and the intellectual. Thoughts and perceptual sensations are not at opposite ends of a continuum, any more than are chalk and cheese. They are wholly different kinds of thing altogether. Moreover, according again to Bennett, the first person to realize this was Kant. On Kant’s view, ‘Sensibility’ (i.e., sentience, or the capacity to receive data from outside) and the ‘Understanding’ (our capacity to understand and interpret these raw sensory data – what he calls ‘intuitions’) are completely different faculties of the mind. The former is passive, whereas the latter is active. Genuine self-conscious experience only arises when both faculties are in play, a point elegantly summarized in Kant’s famous remark that ‘thoughts without content are empty, intuitions without concepts are blind’.
I shall say much more about this later. For the moment, I shall merely say that a popular interpretation casts Kant as someone who was himself neither a Lockean empiricist nor a Leibnizian rationalist, but rather as someone who united empiricism and rationalism into a coherent whole. I shall defer an examination of the extent to which this interpretation is either accurate or helpful, but merely declare, at present, that Bennett’s talk of a ‘sensory–intellectual continuum’ (and its erroneous underlying assumptions) is very useful here.
Bennett’s own view is that TI is true and importantly so, but he defines it as a species of ‘phenomenalism’, which (in this context) is a doctrine of how to reduce matter to mind – something like linguistic behaviourism in reverse (see 1.4.3 for a very brief sketch of what this is). This is eccentric, however, though I shall discuss it more thoroughly later on.
Kant’s own definition of TI is that it is the (according to Kant, true) doctrine that we have no knowledge of things as they are in themselves, but only of things as they appear to us. The tree in the quad, for example (not Kant’s), is just a ‘representation’, not a thing-in-itself. It is, however, capable of existing even when unperceived, since it exists in a different part of space from us. If you wonder how all that can be true at once, we are assured that it can be because space (and time) are also not things-in-themselves, but our a priori forms of intuition.
I think that most people, when first confronted with all this, develop mixed feelings. It looks seriously weird, but at the same time a bit obvious. We are (in a sense) trapped within our own particular sensory apparatus (eyes and ears, as opposed to a bat’s echolocation, for example), not to mention our social, cultural and historical perspective. That is the obvious bit. So: it (apparently) follows that we can hardly be expected to see things as an independent, neutral and perfectly rational God ‘sees’ them (for want of a better word). That seems to follow. However, the thought that we somehow (if only partially) ‘construct’ (or ‘condition’) the reality that we really do know about sounds seriously weird.
True, we are nowadays quite used to hearing people haughtily dismiss many things (the family, gender, money, morality, even society itself) as ‘mere’ constructs. But (we add, putting our collective foot down), to be told that the tree in the quad (which we have perhaps just tried, unsuccessfully, to climb) is also a mere construct sounds just like rubbish – and potentially dangerous rubbish at that. Declare, as some will, that Orwell’s 1984 ‘Party’ controls all of reality, and that ‘objective truth’ is ‘merely’ a childish illusion, and you will get what I mean (recall what I said in 1.2.7).
(If you don’t, then talk not of ‘the tree in the quad’ but of ‘the rat in the sewer-pipe’, now about to be strapped to your face by the Party’s torturer, O’Brien.)
Drama aside, the idea that reality is mind-dependent is obviously politically dangerous, and the first person to formulate – and, alas, endorse – this same dangerous idea was Kant. (Name your alternative candidate, if you disagree with me here. Incidentally, in fairness to all concerned, I don’t think that Orwell ever read Kant; and I have no idea of what he would have made of him if he had. Probably, not a lot.)
But what has all this to do with this section’s title, ‘Assertion and assertiveness’, I hear you ask, impatiently? The title of another, more recent book (1998) on TI, by the living philosopher Rae Langton, gives us a clue: it is Kantian Humility: Our Ignorance of Things in Themselves. Langton declares that the word ‘idealism’ is misplaced when talking about Kantian epistemology. ‘Humility’ is much better, since Kant’s main reply to Leibniz (or rather, to his successors) is that they (and traditional metaphysicians, in general) seriously overreach themselves. They are absurdly optimistic about how much they can know, and therefore need a good dose of humility, in a fairly routine sense of the word. Indeed, even when we talk about ordinary empirical science (and ignore the highfalutin a priori metaphysical speculation, at least for the moment) we need to be humble, for (according to Langton) we can only know how the things studied affect us, and not their intrinsic qualities (I shall talk more about this distinction later). Mind-dependence is a red herring, for Langton; rather, the retreat in our epistemological ambitions is a matter of acquiring a certain kind of humility. That is all.
Langton is also an expert on feminist philosophy and the ways in which language can be used to oppress and demean women, and you might wonder if her interests in Kant and women’s empowerment are connected. My feeling is that they are, though I hasten to add that I do not know if my understanding of Langtonian humility would be shared by Langton herself. But the connections seem to me to be logical, and not wholly capricious (though there is, indeed, an element of caprice as well, something that is unlikely to surprise my readers). What are they?
To begin with, the words ‘humble’ and ‘humility’ have largely negative connotations nowadays, particularly among people who value what therapists call ‘assertiveness’ (in general) and among feminists (in particular). Too often, we are told to curtail our ambitions, to ‘know our place’ in the grand scheme of things, to not get above ourselves, to ‘stay in our lane’ (as motorway drivers put it), and so forth. It is not just women who have this problem, of course, and systematic discrimination can happen to all sorts of disadvantaged groups; but I shall focus on gender discrimination if only because feminist thinkers have had the most to say on these issues, and for longer.
Now, how does a recently humbled metaphysician talk? How is she to avoid the hopeless ‘antinomies’ (see 1.3.7) that seem to arise whenever she makes an earnest (though inevitably misplaced) attempt to understand what the Cosmos is really like in itself? Well, one obvious answer is to tone down how she says what she says, and to prefix her brash assertions with ‘IMHO’ – and often. Look again at D3 from the previous section, and contrast it with the quarrelsome D2 which it superseded.
What goes for sartorialics can also go for traditional 17th and 18th century metaphysics, we declare firmly. Toujours la politesse, as the Prussians need to learn to put it. (I warned you about the caprice.)
What I am saying is that, when A declares (confidently) that the universe is unbounded and B (equally confidently) declares that it is actually finite (although a lot bigger than you might think), and we produce a D2-style dialogue, we can Kantianize the squabble (which tends to echo ominously around the dusty university seminar rooms, even shaking the chandeliers a little) by injecting a bit of Langtonian humility into the proceedings. In short, do not say ‘The universe is unbounded’: say, instead, ‘It appears to me that the universe is unbounded’. You might, of course, get the silly – and rather tiresome – response, ‘And just how does this apparent-universe relate to the real-universe?’, but we may dismiss this as merely a wilful misunderstanding of how ordinary language works. There is only one universe (the ‘uni’ prefix is something of a giveaway here), but many different attitudes that we may have towards it.
Reference to chandeliers will no doubt trigger memories from 1.1.10, but I shall press on regardless. How, you might ask, nervously? Well, for a start, we can collectivize in a way which Kant did not make explicit (he said extraordinarily little about the problem of other minds, as we now call it), and replace ‘It appears to me that the universe is unbounded’ with ‘It appears to us that the universe is unbounded’. There is safety in numbers, and the powerless have learnt the hard way (over the centuries) that if you abandon solidarity in favour of a rugged individualism, you are liable just to get picked off one by one. In the case of the women’s movement, this has led to a conception of sisterhood that famously does not call for a formal social organization with an agreed agenda and supporting ideology, but something at once weaker and yet more powerful – and in a subtle and elusive way that feminist thinkers make it their business to investigate.
However, even sisters can be oppressive, and the single individual – who is the only real seat of consciousness (and who is therefore the only entity that can genuinely feel pain and pleasure) – remains fundamental, I think; so, we have to ask, again, how to find an appropriate level of assertiveness that is both effective and non-aggressive. Now, therapists – when teaching assertiveness – often talk of the broken record technique. Much has been written about it (see the Wikipedia article on ‘Assertiveness’, for example), if you need to be reminded of what it is (the term is now officially used in medical dictionaries, by the way). The gist of it is simply to repeat your request over and over again, without giving any further reasons as to why you are entitled to get your own way in this matter – but in a very low-key, non-threatening way (with soft voice, lowered gaze, and maybe even submissive body language).
It is a very effective technique – and one which deserves to be treated with great respect. However, it clearly has its limitations, and the assertive person will need to acquire some other relevant social skills as well if she or he is to flourish in any normal environment.
With this thought in mind, let us return to the question of how the speculative metaphysician (as he is called by Kant, and also by many others) is to tone down his wild speculations and move away from dogmatic excess to reach a critical balance (I use the word ‘balance’ advisedly – retreat too far, and he will topple over backwards into the equal and opposite error, namely scepticism). This is clearly the right approach to take here if metaphysics is ever to become a science (i.e., a legitimate – although a priori – organized body of knowledge within which genuine progress can be made). This is how Kant himself sees the matter, and it is surely admirable. Kant is at his best when he identifies his Critical Philosophy, which he expounded and defended during what we now call his ‘critical period’ (roughly, between 1781 and 1790), as a kind of golden mean between two extremes, namely scepticism (deficit) and dogmatism (excess).
I should qualify the above by saying that the phrase ‘golden mean’ is normally associated, not with Kant, but with Aristotle, along with the idea that a virtue (moral, epistemic, or whatever) is to be understood as something in between two equal and opposite vices. The claim that assertiveness is a sort of Aristotelian virtue situated between timidity and aggressiveness is my own idea, though inspired not so much by Aristotle as by the women’s movement (which, one feels, would have been viewed with only moderate enthusiasm by the teacher of Alexander the Great). I am a little unsure whether this is right. The problem is that it may become hard to distinguish it (assertiveness) from the cardinal moral virtue of courage, which is situated between cowardice and recklessness – unless the surrounding vices can be analysed more carefully than virtue-theorists have been able to do so far (as far as I am aware).
Still, Aristotle – the metaphysician – can be regarded as providing a welcome antidote to the wilder excesses of Plato, just as Kant provided a parallel antidote to Leibniz. Experts will now recognize another, more recent philosophical influence here, especially if we label Aristotle and Kant as ‘descriptive metaphysicians’ (who are content simply to explore our actual, pre-existing conceptual scheme) and Plato and Leibniz as ‘revisionary metaphysicians’ (who aim to create a new and better one). The notion of a ‘conceptual scheme’ is not innocent, however, and it is a little unclear where I wish to situate my own philosophy, as expounded in this book (I think, most likely, among the revisionists, but with many qualifications).
For the benefit of non-experts, I should say that the ‘modern philosophical’ influencer here is none other than the feature placer himself, though the descriptive/revisionary distinction was introduced, not in 1966, but in his earlier 1959 work (see 1.4.2 to see the connection).
1.4.10 On assertion and assertiveness 2: the phenomenologists
So, fortified by these background aperçus, we must return – yet again – to our language games and the social situatedness that my (possibly) revisionary metaphysics is out to bring to our collective philosophical attention.
It has been observed by Russell (among others) that the intolerance of dissent tends to be inversely proportional to the evidence that would justify such intolerance. Mathematicians do not come to blows over whether a given number is prime or not. It is simply a matter of proof, not conjecture, and someone who flatly ignored the proof and carried on asserting proven falsehoods would simply be pitied (and ignored), not lynched in an act of collective moral outrage.
However, when theologians dispute whether the Son of God is of the same substance as God the Father (homoousia, in Greek), or merely of like substance (homoiousia, in Greek) – a dispute which convulsed the fourth-century Church – or quarrel over later doctrinal differences of comparable subtlety, you get, in due course, the wars of religion, culminating in the Thirty Years War, a horrifically bloody European conflict – the aftermath of which Leibniz had to help to clean up (he was, among many other things, a diplomat employed by the Hanoverian court, as well as a theorist).
The sheer pettiness of one of the chief origins of the dispute – the letter ‘i’, which (it will be remembered) is called ‘iota’ in the Greek alphabet, seems to be the only difference between the two theological positions – was a source of outrage that stimulated some of the best thinkers of the Enlightenment. (The well-known phrase, ‘It does not make an iota of difference’, is often said to have originated with this fine metaphysical distinction.) And it is not for nothing that the long 18th century is often called the ‘Age of Reason’. Although it coincided with an age of massively increasing human knowledge, it primarily came about as a result of a highly emotional reaction to an earlier age, when people felt that they just ‘knew’ – and for certain – that they were right.
This reaction against unreason took several forms. Voltaire (1694–1778) was courageous in an obvious sense of the word – he mocked the political and religious authorities of his time, and his many pamphlets remain masterpieces of satire. His most famous work, Candide, contains the celebrated character, Dr Pangloss, an amiable (if deluded) figure who was always willing to explain how every atrocity was really for the best when you thought about it properly. He is, of course, a thinly disguised Leibniz, whose belief that this is the best of all possible worlds remains a significant obstacle to taking his modal metaphysics seriously.
And Leibniz himself? Unlike his French nemesis, he was not a scathing observer from the sidelines, but a man of action: a naturally cautious (and sometimes unscrupulous) diplomat and envoy who actively did his best to restrain the more enthusiastic players to be found in the political and military milieu of the Germany of the time (then, a somewhat disorganized set of independent statelets – unity was not to happen until Bismarck made it happen in 1871). As mentioned earlier, many of his (Leibniz’s) most important writings were not published in his lifetime, and we had to wait a couple of centuries for later thinkers (such as Russell) to uncover painstakingly what he really thought. One fantasy Leibniz held was that we should invent a logically perfect language (he called it a characteristica universalis), one whose vocabulary and grammatical structure were so visibly straightforward that error and confusion simply could not be articulated in the first place. Equipped with this new form of words, we could (as Leibniz famously put it) ‘replace disputation with calculation’. However, we had to wait a couple of centuries for this too: specifically, for Frege’s Begriffsschrift and the beginnings of modern mathematical logic (and computing theory), as already noted in 1.3.3.
The debate between Voltaire and Leibniz can be continued, but I shall not do so until later. The question of whether we have authentically free will (which is what underlies the standard response to the ‘problem of evil’, as theologians call it) is as deep as it gets, and it also provides a continuity factor linking Kant’s first and second Critiques. Much of Voltaire’s lampooning of Leibniz is, therefore, just unfair.
And if you, dear cynical reader, think that not even I can rescue the monadologist-in-chief from being seriously embarrassed by the non-human-action which was the Great Lisbon Earthquake of 1755, you will be sadly mistaken. However, you will just have to be patient, and wait until I discuss how best to interpret modern quantum physics before you see how I do it.
But to return – yet again – to the matter in hand, namely how disputes about issues where there is irresoluble disagreement should be conducted.
Now, one thing is clear and that is that the situation during the Thirty Years War would not have been much improved just because the marauding soldiers minded their Ps and Qs whilst engaging in their acts of wanton destruction. No amount of preparatory bowing and scraping (or IMHO-ing, as we might put it) is going to ameliorate the damage that is to come. On the contrary, the average Hausfrau of the time would have been more than happy if both she and the invading bravos were to ‘stay in their lanes’ rather than have them trash her house and contents (which, it should be remembered, probably include other members of her family). A real loss of assertiveness might well be in order, humiliating though it might be.
In short (and to return to the polite – if equally pointless – debates in sartorialics and metaphysics), it is not enough to reduce what Frege called the ‘tone’ of the utterances in question. It is (after all) what you are saying, and not just how you are saying it, that is causing the trouble. The content must also be reduced, it seems, for what other aspects of these utterances are open to the relevant sort of diminution than content and tone? Yet a reduction of content seems to lead us back to TI, to a retreat inwards and thus an attempted-talk about a seeming-world (or World of Appearances) which is sort-of mental and also sort-of physical – something that cannot survive careful scrutiny.
However, there is a way out. Frege famously has a third ingredient that makes up the significance of an utterance, namely what he calls ‘force’. I have argued in my other book (Aiming at Truth, 2007 – henceforth, AAT), and in some detail, that scepticism in general should be prevented by systematically reducing the assertoric force with which our ordinary judgments about the world are made. Our beliefs, likewise, should be reduced in force. I shall not rehearse in detail the arguments of AAT, but shall give some examples of what I mean by ‘force’ and of what a reduction of force might mean.
Thus, consider first the case of a man on a theatrical stage who shouts loudly at the assembled audience the celebrated words, ‘The house is on fire!’ What is he up to, we ask? What message does he want to convey? Now, the strongest kind of speech-act we could have here is one where the shouted words and accompanying gestures should be taken at face value. There is no pretence, and assertoric force is on the maximum setting. The man is not acting, the sentence is not part of the script, and the Fire Brigade has already been called and is on its way.
But, of course, it could be that we have instead a performance of a very avant-garde play, one which challenges the conventions, subverts the form and so forth; and the sentence in question really is a part of the script. If this is the case, is it true to say that our actor asserted that the house is on fire? I say not, and he should not be held to account for having said so. Likewise, if the script also requires him to say ‘I set the house on fire’, this should not be used in court as a confession of arson (or of a conspiracy to do so). It simply wasn’t. He did not believe that the house is on fire; he merely pretended that the house is on fire. He may have said that the house is on fire, in some very broad sense of ‘said’. But he did not assert it: he only ‘pretend-asserted’ it (for want of a better choice of words).
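Frege’s own notation, incidentally, makes the content/force split unusually vivid – I sketch it here only as an illustration, since the details of the Begriffsschrift need not detain us. Frege distinguished the mere presentation of a content from its assertion by means of his ‘judgment stroke’:
\[
-\,p \quad\text{(the content } p \text{ merely presented, force withheld)}
\]
\[
\vdash p \quad\text{(the content } p \text{ asserted with full force)}
\]
On this picture, our avant-garde actor produces something like the first form while seeming to produce the second; and the force-reduction I advocate in AAT can be thought of, very roughly, as occupying positions in between these two poles.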
Now, this may seem like a rather isolated case, and we can all surely tell whether or not we are at the theatre. The division between the stage and the auditorium is traditionally the ‘proscenium arch’, as it is called. But some theatres do not have an obvious stage area, and some plays deliberately subvert the proscenium. This can be done without alarming the paying audience by having a play within a play – or, more accurately, a play of a play. A classic example of such a second-order play (as the logicians might call it) is Tom Stoppard’s The Real Inspector Hound (1968). But some plays go further, and involve the audience directly (though this often reflects the whim of the director rather than the playwright). It can be disconcerting to find, whilst dozing gently in the stalls, that a spotlight suddenly falls on you and a member of the cast on stage starts asking you to do things (such as answer embarrassing questions), and much to the amusement of the rest of the audience. Nevertheless, nobody can seriously rule out such theatrical practices on the grounds that they make no sense. They may be absurdist, but they are not absurd in the logician’s sense. When in such a social environment, you may well find that your beliefs and assertions are having their force reduced in a way that you cannot easily control, and yielding thoughts and speech-acts that do not already have a name.
Indeed, the very idea of an expressing game seems to involve this kind of thespian manipulation. It needs to be emphasized (and will be, repeatedly) that the expressing game (as I understand it) is not a spectator sport. To adjudicate the D2 and D3 debates, you must ‘descend into the arena’, as they say, and fight your own corner (having firstly checked your own shirt-colour, needless to say).
Again, we may feel that anything involving D2 and D3 is too silly to be of any genuine interest, but such feelings are misplaced. We might, for example, treat the dispute as an allegory for a kind of erotic dance, a pas de deux where competing fashion-statements are flaunted. Alternatively, the shirt colours could represent the uniforms of rival armies determined to destroy each other utterly. Less violently, they could be the colours of opposing football teams, each playing simply to win (and with no thought of a compromise or any other kind of rapprochement).
The whole problem about the illicit subject–object distinction, in my somewhat idiosyncratic sense, is that we find ourselves regarding the whole world (including our own bodily behaviour – and, indeed, our own mental activity) as something ‘out there’, something that we are not a part of. This is the route to massive illusion, for such detachment will not survive careful reflection.
How does phenomenology come into it? In 1.3.3, I discussed Sartre’s famous example of the voyeur who suddenly becomes conscious that he himself is also being watched, but its significance may not have been readily apparent. Still less is it clear that it is closely connected to what I call MCT, and how one can be fooled into thinking that it implies TI (see 1.4.9 for the definitions of these terms). Well, the point is this. If you do not yet accept the great Kantian insight which is MCT – namely that our cognitive states are produced by a combination of internal and external factors – then you will be logically forced to think that any sudden change in our cognitive states must be caused by a change in the external factors alone. (It takes a sophisticated kind of self-consciousness to realize even that there are any such things as these internal factors, let alone that they shape the very appearances themselves.) This is just wrong, as a few simple examples will show.
Thus, imagine (once again) that you are gazing at that famous tree in the quad and wondering (as you do) whether it disappears every time you blink. You probably decide that it does not; and your reason is that closing your eyes is a change in you and not in the tree itself. To put it another way, it is an internal factor and not an external one. Likewise, suppose that you are not the extreme paralytic that I introduced in 1.2.10, but can walk – and you do so towards the tree. In response, the tree will project a larger and larger image onto your retinas; do you conclude that the tree must (for some unfathomable reason of its own) always get larger whenever it sees you coming? Again, no, we say impatiently. Obviously, the change in the tree’s appearance is to do with us and not the tree. And if we then ask the question of which tree we are talking about, the tree-as-it-appears-to-us or the tree-as-it-is-in-itself, we would most likely be met with a look of total exasperation. Unless you are from an isolated tribe who live entirely within a dense forest, you will not find it hard to think that trees that look very small probably do so because they are far away, and not because a magician has shrunk them. And a tree is just a tree.
So far, so good. But now you notice that the tree has white blossom, and you consequently scowl furiously at the head gardener who had promised to plant only red-blossomed trees in this particular quad. The head gardener is alarmed, but what’s to do? Well, it is your experience of looking at this particular arboreal specimen that is unpleasant and therefore needs to change, and there seem to be just two ways of doing this. Either alter the tree in some way, or else alter you yourself.
Now, altering the tree (or the tree-in-itself) is a drastic manoeuvre, and slopping red paint all over the blossoms a rather hazardous and labour-intensive procedure; so (unless you are the sociopathic Queen of Hearts from Alice’s Adventures in Wonderland), you are unlikely to go down that path. More probably, you will just turn that knob on the back of your head to a new setting – I mean (of course) the one which electrically stimulates your visual cortex in an appropriate way. This may seem a little bit drastic also, of course, but it is a staple of philosophy that colours are not ‘in’ the objects themselves, but in the mind. Moreover, the famous untestable hypothesis, namely that I see green where you see red (and vice versa), is one that we have all wondered about (the idea goes back to the Ancient Greeks, and is probably as old as imaginative thought itself). Since you have many white-blossomed trees, but only one visual cortex, this second method is probably better.
What goes for colours also goes for tastes. Thus, consider the well-known problem: how do you make the perfect cup of coffee? Is it in the original beans, the method of grinding them, the hardness (or hotness) of the water? A scientifically minded person would probably agree that it would have to be something like that. But we all know that the scientists are just wrong here. Although we instinctively suppose that two chemically indistinguishable substances must taste exactly the same, we know that we get a better experience if the coffee is drunk from our favourite cup as opposed to the vibrant blue plastic one that they give you at the greasy spoon down the road. Moreover, psychologists will agree that we cannot seriously separate flavour from the pleasure or pain that accompanies it (much, much more on that later).
To take things further still, the coffee will taste better when served by a genial and well-loved senior barista who routinely compliments you on how young and fit you look, than by his sulky daughter (now banished to the greasy spoon I mentioned) who informs you that she has just murdered your wife in a fit of jealous rage.
More seriously, a responsible restaurateur, who knows that he will go bankrupt soon if he cannot attract a larger and more discerning clientele, will have to decide where to spend such money as he has. Does he hire a better chef who will inevitably demand a higher salary, and may well be somewhat lacking in social skills? Or does he redecorate the dining area, and hire better front-of-house staff, namely agile waitresses and waiters with diplomatic manners but little knowledge of the actual mechanics of cooking? Kitchen staff and front-of-house staff tend to be very different sorts of people with very different skills and somewhat different working environments (though a good working relationship between kitchen and front-of-house is also essential if the restaurant is to flourish – obviously). So, choosing how to invest your limited capital requires choosing between chalk and cheese (figuratively speaking), and raises problems about rational choice that get the theorists into a right tangle.
So, if you start asking metaphysical questions about who really created the flavour, you may find that the answer is much more complicated than you have probably been led to think. Indeed, if you ask even at our (supposedly) best seats of learning, namely the Oxbridge colleges, then you may find the Fellows and Tutors saying one thing and the chefs saying another. We noted this divergence in 1.2.10.
Anyway, the point is that what goes on in your mind is shaped by a combination of internal and external factors. You might rephrase that by saying that our beliefs are conditioned by both internal structure and the external reality in which we are situated. This is what I call MCT, the major insight that Kant got right. Where Kant (and others) go horribly wrong is in concluding that the internal factors shape the external ones. This gives the absurd doctrine of TI, that the ordinary world in space and time that we inhabit is belief-shaped, that it itself is (partly or wholly) conditioned by the knowing mind. Yet the inference is fallacious. The whole point of separating the internal and external influences on our beliefs is precisely to make it unnecessary to suppose that the former have any relevant effect on the latter. What we have, rather, are two separate influences on a single outcome.
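Put schematically (the function-notation is mine, and purely illustrative):
\[
\text{belief-state} = f(I, E)
\]
where $I$ collects the internal factors and $E$ the external ones. MCT claims only that $f$ depends non-trivially on both of its arguments. TI draws the further conclusion that $E$ itself somehow depends on $I$ – and nothing in MCT licenses that. A change in the value of $f$ brought about by varying $I$ alone requires no change whatsoever in $E$.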
What causes this mistaken inference is a lack of direct awareness of our internal structure. When our self-conscious experiences change due to a change in an internal factor, we assume it must be because of a change in an external factor – and then try to figure out just what sort of external factor that could be. If the colour-inverting knob on the back of your head is pressed (you didn’t know you had one), you conclude that someone has been slopping paint all over the blossoms. If the barista’s daughter alters your mood, you conclude that she has poisoned your coffee as well as murdered your wife. Theatrical designers know all about this sort of thing: for example, they know about subtle lighting effects; props with distorted shapes that present the illusion of distance; and actors with heavy makeup and unnaturally loud and distinct voices.
At a more down-to-earth level, certain kinds of women have been known to smear what they call ‘war paint’ over their faces to induce the effect that they would present had the men looking at them been in a better frame of mind. It is hard to believe that any man could be fooled by this, but occasionally they are: and we should blame Kant.
1.4.11 A Critique of Pure Practical Reason: thick concepts
Moral to the above: keep internal and external causal influences apart, both in the world you inhabit and in your analysis of it.
Now, as mentioned, it is primarily with ethical thought that (what I call) expressing games are typically concerned, where it is not so much the psychological faculties of cognition as those of affect and conation (as psychologists call them) that are stimulated and which lead us to conclude that there is something ‘out there’ which is responsible for the way in which our minds are working. If, for example, we see Gertie kicking a dog for no very good reason, we tend to get angry; and we, in consequence, say – not just ‘Gertie causes pain unnecessarily’ – but ‘Gertie is cruel’. The word ‘cruel’ has essentially negative connotations. And it combines descriptive and evaluative elements in a way that famously resists separation. You might think that ‘Gertie is cruel’ just means ‘Gertie causes pain unnecessarily – and Eurghh! to Gertie’. This is a simple conjunctive sentence with pure fact on one side of the ‘and’, and pure evaluation on the other; but there are numerous technical objections to treating these sentences as exact logical equivalents – one is sketched below. Concepts such as cruel are known to experts as thick ethical concepts, because the purely evaluative element has a factual thickening agent surrounding it which guides its application. Because we can agree on whether or not someone is causing pain to another creature (ignore the problem about zombies, for the moment), we can agree on whether they are cruel or not. But we notoriously cannot come to a principled general agreement on whether and when to say ‘Eurghh!’. This is what leads many to think that moral values are, at best, ‘merely’ subjective; and, at worst, completely unreal.
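One such objection can be sketched very briefly (experts will recognize it as a compressed relative of what is known in the trade as the Frege–Geach problem). Suppose ‘Gertie is cruel’ were exactly equivalent to $D \wedge E$, where $D$ = ‘Gertie causes pain unnecessarily’ and $E$ = the ‘Eurghh!’-component, treated (for the sake of argument) as truth-apt. Then ‘Gertie is not cruel’ should come out as $\neg(D \wedge E)$, which is true whenever the evaluative conjunct alone fails – even if Gertie carries on causing pain unnecessarily; and that is not what the sentence means. Worse, the ‘Eurghh!’-component does not embed happily at all: in ‘if Gertie is cruel, she should be stopped’, nobody is actually saying ‘Eurghh!’ to anything, yet the word ‘cruel’ appears to mean exactly what it means in a straight assertion.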
You will recall J.L. Mackie and his ‘error’ theory of values. One argument that he gives for saying that there are no objective values is called ‘the argument from relativity’, which points out that there is massive and (apparently) irresoluble disagreement about what to condemn. For example, we Europeans might condemn human sacrifice as cruel, but the Aztecs of ancient Mexico famously did not. Another, even more influential argument is what he calls ‘the argument from queerness’. Objective values would have to be very queer entities indeed if they were to do all the metaphysical work required of them. It follows, says Mackie, that there aren’t any.
Now, it is noticeable that a certain Scottish philosopher, one who is alleged to have strongly influenced Kant, talks in this context of our sentiments ‘spreading themselves upon objects’ and of ‘gilding and staining the world, and raising up in a manner a new creation’, but this so-called ‘projectivist’ talk should be strongly resisted. It involves exactly the error that Kant was to make in his TI. And these figurative new gildings and stainings, unlike the female war paint we mentioned in 1.4.10, have no outside reality. Just look carefully and dispassionately at a human sacrifice and see if you can see any.
You can guess where this line of thought leads. All moral, ethical – and, indeed, aesthetic – qualities are pseudo-qualities. Value judgments do not describe an evaluative reality; they ‘merely’ express our own sentiments. We can complain that the scare quotes around the word ‘merely’ are out of place, and insist that fully-rounded human beings must have refined affective and conative qualities as well as cognitive ones. The sneer associated with the word ‘merely’ is gratuitous and unwarranted; and Nietzsche, for example, has been accused of a certain playground-bully silliness in this respect (with some justification), though his writings are complex and lend themselves to more than one interpretation.
Still, it is understandable if this way of looking at things provokes outrage. Cruelty and injustice are real; and would remain so if everyone were cruel and unjust, and so failed to condemn them. It is often said that we should not judge other cultures or historical epochs by our own advanced standards, and that (if they are to be judged at all) they should be judged by their own standards, such as they are. Do not judge the slave owners of Ancient Greece (or, indeed, 1850s Mississippi) by the exquisite standards of today’s liberal elites. Nobody’s listening. (Or so say, not only social conservatives – as they style themselves – but also many academic historians and social scientists. Okay, so I’m being provocative here, but you get the picture.)
Obviously, more needs to be said here, but I shall defer that for later. In the meantime, some music. The chapter heading of 1.4 gives the title, and the singer’s name can be guessed if I tell you that, unlike his misspelt namesake discussed in 1.4.9, he is neither sighted, nor white nor female. The film associated with the song gives an admirable depiction of small-town Mississippi in the early 1960s (though it was filmed much further north). If you think that it is too cuddly, and too old-fashioned to give a good feeling about what is going on politically in that part of the world, then I recommend instead the song called ‘Innuendo’ by a band whose name suggests that its inclinations might be royalist or gay (or both). It (the song) always reminds me of the Spanish street demonstrations of 1981 which protested against the attempted military coup d’état of that year. All this seems to me to be relevant to what is going on in the world, for some reason.
1.5.1 ‘Aubade’ (a.k.a. ‘early morning serenade’)
This time, we’ll start with the music. The ‘Aubade’ (pronounced /oh-bahd/) from Prokofiev’s Romeo and Juliet is not as famous as the ‘Dance of the Knights’, but you have probably heard it before. It is performed in Juliet’s bedroom towards the end of the ballet, after Romeo’s death but before she learns about it. If you prefer something a bit more robust, then the chapter title, as performed by the British band, Vanity Fare, gives a nice but neglected little number from 1969.
So, what’s all this about? You might wonder why a book which tries to structure itself in a way which mimics Kant’s architectonic should have a fifth chapter, when it is well known that the Rules are that everything just has to be grouped into four (each element subdivided into three) and spatially arranged like the four points of the compass. If you suspect that an ‘architectonic’ is something that your posh (but abstemious) flat-mate has in a large wine glass with a little ice and a slice, you could almost be right. If you feel that adding a double measure of gin to it is rather like adding some content to a very pure form (the distinction between content and form is another of Kant’s dualisms – the former is empirical, and the latter is a priori, by the way), then you are probably also right, though you might have some difficulty proving it. Incidentally, despite his obsessive punctuality and unique writing style, Kant was a generous and convivial host. Each guest at each of his many luncheon parties would be given a pint of claret – presumably to drink, and not just to admire. (I don’t know what Kant had against burgundy, but that is just a detail.)
His university was in the ancient Hanseatic city of Königsberg (literally, ‘king’s mountain’), which also boasted of having seven bridges made famous by the Swiss mathematician (and founder of what we now call ‘graph theory’), Leonhard Euler (1707–1783). His name is pronounced /oiler/, by the way. There have been some major changes to the city and its environs since then. The river Pregel has altered its course, and most of the seven bridges are long gone. The Soviet Union did better than Nazi Germany did in the Second World War, and the city is now in a Russian enclave. The city itself – and the enclave – are named after one of the few of Stalin’s henchmen to have died of natural causes, as yet another British commentator on Kant happily puts it. As he (the commentator) goes on to say, at least the starry firmament above and the moral law within are two Kant-inspired items that the good citizens of Kaliningrad can still gaze on with unqualified awe, and without much hindrance.
At least, I think so. The good citizens of Lithuania are understandably a little nervous of what goes on around those parts, and if the Third World War were to start anywhere, it could just as well be there – and regardless of the wishes of the author of Perpetual Peace.
But why all this so early in the morning, I hear you ask impatiently? Well, the fact is that I did not manage to quite finish everything that I wanted to say the previous night (i.e., earlier today, in the very, very early morning, long before sunrise). Mention of Jiggly Juliet, as I tend to call her in my own private language, reminds me of another rather dangerous young lady that I have mentioned previously, together with a drawling gentleman with whom I had a scheduled meeting a little later today. He never could quite ‘get’ the British sense of humour, and although he would often ‘eurghh’ at things (in my somewhat technical sense of the word), his choice of expletive was somewhat different. It sounded like ‘inappropriate’, but I could be wrong here.
I haven’t yet had a chance to say much about moral vocabulary, though I talked a little about ethically thick concepts in 1.4.11. You will recall that the thick concepts, such as cruel and lazy, have a descriptive component which ensures that there is a bit more agreement about how and when to employ them; unlike thin concepts like right and wrong, which are too unspecific to command general agreement as to use. The drawback is that the thick concepts lack generality. To call a cruel person lazy (and vice versa) is just to say something false (save per accidens, as they say).
It would be nice to have a word of condemnation that was both all-encompassing and sufficiently dramatic as to command attention (if not instant obedience) from all who hear it. ‘Inappropriate’ does not really cut the mustard, but at the time of the King James Version (KJV) of the Bible, the English language was ‘strong’, as they say. Leviticus (18:22) uses the word ‘abomination’ when condemning male gay sex, and the same word is used to condemn the eating of ‘whatsoever hath no fins nor scales in the waters’ (11:12). American evangelical pastors, who tend to succumb to what is sometimes called ‘the idolatry of scripture’, know exactly why gays are all going to hell. They might also tell this to you whilst munching on their lunchtime prawn mayonnaise sandwich, but maybe even they wouldn’t dare. A ham sandwich is also prohibited, but pig-meat is merely said to be ‘unclean’ (11:7–8), which falls some way short of being an ‘abomination’. So, a gay-bashing ham-sandwich-muncher might just get away with it, but he should beware the shellfish counter if he wants to avoid the wrath of God.
The Ten Commandments would also lose much of their effectiveness if their prohibited practices were merely described as ‘inappropriate’. It is slightly odd that this should be so. The original
Thou shalt have none other Gods but me
contains no evaluative language at all, and sounds (to the initiated) rather like an unverifiable prediction. It works, though, for some reason. However,
It is inappropriate to have none other Gods but me
merely sounds ridiculous. Moreover, the ethicist’s favourite
It is morally wrong to have other Gods but me
sounds indecisive. It suggests that a follow-up clause is on the way, along the lines of
However, practicalities dictate that I have a few extra deities on the side
If you don’t see what I am getting at, try it with
Thou shalt not drink and drive
and you will probably see what I am on about. If this still sounds unacceptably frivolous, try it with
Thou shalt not obey Hitler
Should the actual situation be that your wife and children will be tortured and killed if you do not obey a wicked command from the Führer, then your true obligations are not at all obvious. Kant himself is often accused of espousing a harsh and unrealistic sort of ethics (he was certainly strongly inspired by pietism, a very austere movement within Lutheranism, which in turn had a strong influence on American evangelism). To what extent this is fair is a matter of intense debate among Kant scholars. I shall say a little more about this when I examine just what Kant might have meant by a ‘categorical imperative’. In the meantime, note that the prefixes, ‘Thou shalt not make it true that’, ‘It would be inappropriate if thou were to make it true that’, ‘It morally ought to be the case that it not be true that’ – and, indeed, ‘God commands that you not make it true that’ – have very different meanings. Note also, in passing, the difference between the latter and ‘God commands that you make it not true that’ (which is slightly stronger, if you think about it). (The word ‘not’ appears because the word ‘none’ in the original First Commandment is negative in meaning, by the way.)
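For readers who like their scope distinctions in symbols, here is one standard way of regimenting that last point, borrowed from the ‘brings it about’ operators of the deontic logic literature – the notation is my gloss, not Kant’s. Write $\mathrm{Cmd}\,\varphi$ for ‘God commands that $\varphi$’ and $B_a\varphi$ for ‘agent $a$ brings it about that $\varphi$’. The two commands then come out as

\[ \mathrm{Cmd}\,\lnot B_a p \qquad \text{versus} \qquad \mathrm{Cmd}\,B_a\lnot p \]

and, on the usual assumption that bringing about is success-entailing ($B_a\varphi \to \varphi$), we have

\[ B_a\lnot p \;\to\; \lnot B_a p, \quad \text{but not conversely.} \]

Making $p$ false guarantees not making $p$ true, whereas one can refrain from making $p$ true without going so far as to make it false. Hence the second command is the stronger of the two, as advertised.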
Nowadays, we are terrified of value judgments: we do not know how to make them; and we do not know how to respond to others if they decide to make them within earshot. The spoof-discipline of sartorialics, as expounded in 1.4.8, reveals this. And probably the worst insult of all is, of course, to call someone ‘judgmental’. Many say, without any sense of irony, that it is obviously wrong to be judgmental. They expect to get unqualified agreement when they say this (and they usually do). Even those brave enough to venture out of those conversational circles that we now call ‘echo chambers’ tend to feel this way. And yet to accuse someone of judgmentalism is (obviously) to be judgmental yourself. The paradox is vicious – and note, moreover, that we have not used the word ‘moral’ at all in getting it.
Some conclude from this that we should feel hopelessly guilty about everything (except the fact that we feel guilty, of course), and are, in consequence, easy prey for unscrupulous rogues who know exactly how to engage in what is called moral (or emotional) blackmail. A crazy political relativism is demanded of liberals: that they must always respect the internal affairs of bloodthirsty dictatorships. The yelled command, ‘Just mind your own business!’, is a very effective insult, especially if the noun is qualified by an expletive or two.
We are (I think) slowly getting better at holding our own against this sort of psychological attack. But unless and until the logical and philosophical errors that (partly) underpin our fear of making judgments are addressed and successfully corrected, this recovery will be uncertain. At the time of writing (November 2024), the world looks rather bleak. The villains are removing their smiley masks and showing their true colours – and yet are still winning over the mentally vulnerable. But the better educated are gaining a quiet self-confidence which suggests grounds for optimism. Once you know your enemies, and can see them clearly, then you know better what to do.
1.5.2 A Critique of Judgment: the whys and the wherefores
Kant did not originally intend to write a third Critique. Indeed, he had not even planned to write a second one, since the first was supposed to cover everything, including a good answer to the fundamental question of how pure reason can be practical, i.e., of how simply knowing what is commanded of you should prompt you actually to obey. Actually, if truth be told, he had not even originally intended to write the first Critique. His original research areas were mostly in physics and astronomy. He wrote on the nature of fire, for example, and we still talk about the Kant-Laplace theory of the origin of the solar system (now known as the Nebular Hypothesis, a widely accepted theory of how all star-systems were originally formed). However, his interests were always fundamentally abstract and theoretical, and he soon came to realize that even physics required non-empirical assumptions that needed vindication.
In 1781, at the beginning of what we now call his critical period, he thought that what we now call ‘aesthetics’ did not require much attention. Kant spoke of ‘judgments of taste’, and did not approve of the modern use of the word ‘aesthetic’ (he was to change his mind by 1790, when he wrote the third Critique). Matters of taste seemed to be too subjective and too concerned with pure pleasure as opposed to reasoned argument to need much philosophical consideration.
But by 1790, Kant thought that even aesthetic judgments had an a priori aspect which required critical attention. Like many thinkers of the time, he divided aesthetic judgments into those of the beautiful and those of the sublime (about which more later), and we can now see a connection between the austerely scientific Kant and (for example) the Lakeland poets. Both attempted to say the unsayable, and both found fascinating the dread provoked when ordinary human understanding and passions are stretched beyond their natural limits. The Romantic movement had well and truly arrived, and the tensions between it and the Enlightenment values that provoked it were now approaching full stretch.
What has puzzled legions of scholars is that the third Critique is divided into two roughly equal parts, the first concerning aesthetics, and the second concerning what philosophers call ‘teleology’. The Greek word telos means something like ‘end’ or ‘purpose’, but it is hard to translate exactly. (We also talk of ‘television’ and ‘telepathy’, but the prefix there comes from a different Greek word, tēle, meaning ‘far off’: it is distance, not purpose, that is emphasized.) A teleological explanation is one that makes use of what Aristotle called a ‘final cause’, and explains an effect in terms of the purpose for which it was brought about. For example, to explain why I broke the egg, I might say that it was in order to make an omelette. The omelette (final cause) came after the breakage, unlike the ‘efficient cause’ (the bashing of the egg against the rim of the bowl), which has to come before (or at the same time as) the effect.
But perhaps the most famous use of the word ‘teleological’ is in the phrase, ‘the teleological argument for the existence of God’, which is more commonly known as ‘the argument from design’. Kant wrote long before people knew of modern theories of evolution, of course, and like most such pre-Darwinian thinkers, had some difficulty in explaining how something as complex and beautifully designed as a human being could have come about without something like divine intervention.
Kant’s official view, made most explicit in the ‘Antinomy of Pure Reason’ chapter in the first Critique (see 1.3.7, above), was that it is impossible to prove that God exists, and that it is equally impossible to prove that God does not exist. Reason is powerless to adjudicate the debate here. Needless to say, the religious authorities of the rather conservative Prussian state of the time were a bit uneasy about Kant, but they were largely placated by his observation that faith also has a kind of legitimacy. Indeed, he once went so far as to say that the real purpose of the first Critique was precisely to limit the realm of pure reason. By this, I do not mean that he wanted simply to set limits to the knowable; but, rather, that he wanted actually to downgrade the ambitions of Science so as to make room for Faith.
Kant was no intellectual coward: his motto remained ‘Sapere aude!’ – ‘Dare to know!’, the battle-cry of intellectual anti-authoritarianism. He also repeatedly landed himself in trouble with the religious authorities – though not as thoroughly as did Giordano Bruno (1548–1600), who was burnt at the stake for espousing the Kantian view that, not only is the Earth no different from the other planets, but also that the sun is just another star in the firmament. True, Kant was rather hesitant to draw even the most obvious irreligious conclusions from what he thought, and some irreverent commentators have suggested that he did this simply to avoid upsetting his faithful and elderly manservant, Lampe (/lamp-uh/). But we are all allowed to be a bit confused when God is mentioned, and it must be remembered that Kant changed his mind a lot during his working life.
When you are talking about the ultimate purpose of life, it is understandable that you will talk about religious matters, even if it is just to deplore the fact that there is no God. Religion and teleology are intimately linked, but the latter goes even further. We can sensibly ask what the point of God Himself is, what his own line-manager (the elusive super-God) thinks of him, and so forth up an endless chain of command. I might ask, on my death bed, what on earth was the point of all that love, laughter, suffering, and so forth. And if I am suddenly confronted with an unexpected but imminent demise, I might say, ‘Oh no, not like this!’ The hymn urges us to ‘live this day as if thy last’, but few of us do so (or could get much done if we did – say ‘minute’ instead of ‘day’, if you don’t already get the point). As I say, this is natural.
My point is that you do not have to be a super-intellectual for your mind to wander upwards into these rarefied areas of thought and feeling; and you do not have to wait for dramatic circumstances for your mind to do this. Even the least reflective, least well-educated person, quietly mulling over a pint at his local pub, might say to his companion (or just to the world at large), ‘What’s it all about, then?’ He does not expect an answer, just a nod of general agreement.
Yet a further mystery about the third Critique is its title. It is not a Critique of Ultimate Mysteries, but the Critique of Judgment, and the faculty of judgment is not exotic, but as basic as it comes. The word ‘judgment’, it will be recalled from 1.2.5, is the term used in categorizing the difference between the analytic and the synthetic, and is utterly basic inasmuch as it is introduced at the very beginning of the first Critique. I did suggest, however, albeit teasingly, that the distinction between the act of judging and the item judged is perhaps significant. What we have seen so far about the strange concept of ‘judgmentalism’ rather bears this out.
And yet another encore? Kant was getting old by 1790, and his powers were visibly beginning to decline. Others have stepped in, however, and in the first decade of this new millennium, one Michael Gregorio presents us with the Critique of Criminal Reason, Kant’s last and most vital investigation. The dark streets of Königsberg are haunted by a fearful series of crimes, and an investigating magistrate is brought in from outside to solve the mysteries, and thus bring the perpetrator to justice. He naturally seeks the help of the city’s elderly but most distinguished thinker. Could the serial killer turn out to be Kant himself? Or could it even turn out to be old Lampe, determined to the last to protect his master from obscure evil?
To be quite honest, I can’t even remember myself, and I don’t believe in spoiling plots for other readers; so, I shall leave it there.
1.5.3 The Black Prince
The Overture is now coming to an end, and the main work itself is about to begin. A logical problem emerges, however, and this concerns (once again) names. Can a given name apply both to a complex entity and to a proper part of it? The ancient problem of the cat which loses its tail is a case in point. The cat is normally called ‘Tibbles’ and it loses its tail in an accident, but nevertheless survives. Now, suppose that this name is given not just to Tibbles herself but to her tail-complement. (A tail-complement, by the way, is that part of a cat which is everything-except-the-tail). Can we say, before the accident, that Tibbles has a tail? Well, yes and no, cometh the weird response.
The problem is that the cat has a tail (as a part), but its tail-complement does not. We conclude that the tail-complement, for logically overpowering reasons, just cannot be called ‘Tibbles’. What should we call it, then? Common sense says: nothing at all! A tail-complement is a weird sort of entity, and scarcely deserves any kind of name. Logicians, however, will not have this. They even allow every real number to have a name of its own, despite the fact that there are uncountably many of them (in a sense which we shall later make precise). So, the tail-complement is (usually) given the rather abrupt name of ‘Tib’.
Okay, we say; now what? Well, the problem is that, after the accident, the tailless Tibbles and the tail-complement Tib occupy exactly the same portion of space. Mindful of the obvious truth that two objects cannot occupy the same place at the same time (if you doubt this truth, just drive your car at speed into the path of oncoming traffic), we must conclude that Tibbles and Tib are really the same after all. Oh dear.
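For those who would like the trap laid out formally, here is one way of doing so (the regimentation is mine, not the ancients’). Write $t$ for the tail, $P(x,y)$ for ‘$x$ is a part of $y$’, and $x \approx y$ for ‘$x$ and $y$ occupy exactly the same place’. Then:

\[ \text{Before: } P(t,\textit{Tibbles}) \wedge \lnot P(t,\textit{Tib}) \;\Rightarrow\; \textit{Tibbles} \neq \textit{Tib} \quad \text{(Leibniz’s Law)} \]

\[ \text{After: } \textit{Tibbles} \approx \textit{Tib} \;\Rightarrow\; \textit{Tibbles} = \textit{Tib} \quad \text{(no two things in one place)} \]

Add the seemingly innocent premise that distinctness is permanent – if $x \neq y$ at any time, then $x \neq y$ at every time – and the three lines jointly yield a contradiction. Something has to give; metaphysicians disagree, sometimes fiercely, about what.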
This problem is ancient, and it illustrates a very important debate about ordinary physical objects in space and time: do they occupy time in much the same way as they occupy space? Do they completely and wholly exist at every time at which they exist at all? Or do they have temporal parts as well as spatial parts? How do space and time themselves fit together? The Tibbles/Tib story provides a useful gateway to some very profound and central areas of metaphysics (including, would you believe, modal metaphysics), and we shall say much about these issues shortly.
But what if the item in question is not a cat but a book? I think that it depends, and that ambiguities here can yield interesting effects. An example of what I mean is the 1973 book whose title is the same as this section heading. It is its author’s best novel, in my opinion, and is a murder mystery with many unusual, philosophically intense add-ons. It has an unusual structure, consisting of a central first-person narrative bookended by fictional forewords and postscripts written by other characters in the central story.
And the author? She was a distinguished 20th-century British philosopher based at Oxford as well as a very well-known novelist. She is known in academic circles primarily for her work in ethics, and her last major such work, Metaphysics as a Guide to Morals (1992, but based on lectures given in the 1980s), is huge (about 200,000 words). It rambles somewhat, but its main focus is on Plato’s strange belief that knowing the good will somehow make you love it and hence behave in accordance with it.
When I first attempted to read it, I was disappointed that anti-Platonist, expressivist theories did not seem to have been given a chance to thrive. I think better of it now.
And this author’s name, I hear you ask testily? More word games? Well, her forename is a common noun and so can hardly be subject to my original blanket ban on names. An iris is a kind of flower, as well as a part of an eye, so there is no problem there. The surname is more of a problem. It is a fairly common name, and it is certainly not her fault that it is also possessed by my least favourite media mogul – and (in my opinion) one of the most powerful and most dangerous people on the planet at this moment. So, I shall not mention it, though you would have to be fairly dull not to be able to work out what it is.
You are perhaps getting a little tired of my views about names. Primitive peoples have superstitious beliefs here, you might remark. If someone discovers your name, then (it is said) she will have captured your soul – just as she will have done if she successfully takes a photograph of you. Likewise, it is dangerous to call the devil by his real name, lest you thereby summon him (the phrase, ‘Talk of the devil …’, is generally understood to imply that the name and the person it names are much more intimately related than you might think).
But Russell was not superstitious in any very obvious way, and it is most famously he who thought that you can only name that with which you are ‘acquainted’, in his rather special sense of the word. If you take the view that all true identity statements are necessarily true, and if you also take the view that the necessary and the a priori are the same, then it will instantly follow that the objects of your acquaintance must be very close to your soul. We have already discussed this in 1.2.8. Logically proper names, as Russell called them, are therefore rather odd things; as are many aspects of his philosophy. But more on the theory of reference later.
Anyway, my book also consists of a main text, so to speak, sandwiched between two bookends, which I call the ‘Overture’ and the ‘Finale’, respectively. Together, they shape and modify the impact of the main text, which is presented so as to resemble Kant’s critical body of work. The latter on its own would present a rather odd and indigestible meal, given that the whole is designed to be accessible to a much wider readership than any of Kant’s critical writings.
What exactly is this book about, you might ask (in exasperation)? Well, there is a table of contents which might help to answer that question. What sort of book is it? – you might ask again, irritably, especially if you are a publisher or publisher’s referee. Classifications are always a bit hazardous, and every work of art is, in a sense, unique; but I would regard this book as primarily an imaginative fantasy novel with a philosophical theme, rather than as a philosophical treatise presented in an unusual style. However, as is often observed, an ‘open text’ stands on its own – regardless of the intentions of its author and the reactions of its readers and critics.
And, of course, there is the telepathy aspect, the claim that telepathic communication is genuinely possible for human beings and that this book is (among other things) a training manual for would-be psychics. Is this for real (you might ask)? Or just another literary conceit? I am tempted to answer ‘both’, but realize that not everyone will appreciate the sentiment. However, once the book has been read in its entirety, it should be a little clearer what I am getting at.
Unfortunately, telepathy does not always give me foresight, so I cannot say what the reception of this work will amount to – not even in California, let alone Bloomsbury (which, for your information, is a very refined part of London, among other things). There is never a very good excuse to recite Scottish poetry, especially if it is Burns and not Scott; but I shall do it anyway, as this may be the last time I can do it safely:
Now, wha this tale o’ truth shall read,
Ilk man and mother’s son, take heed,
Whene’er to drink you are inclin’d,
Or cutty-sarks rin in your mind,
Think, ye may buy the joys o’er dear,
Remember Tam o’ Shanter’s mear.
Burns night is celebrated in many places, including my home town. You may think that a ‘cutty sark’ is a kind of strong drink, or even a high-class yacht on which you might consume the stuff, but you are only half-right. In actuality, original cutty sarks are a bit like wide-brimmed floral hats: and they can get into your mind in an entirely inappropriate way. Reader, beware! And enjoy.
****
TABLE OF CONTENTS
1 – Overture
1.1 A DAY IN THE LIFE
1.1.1 Preamble
1.1.2 Sixpence none the richer
1.1.3 A plethora of ideas
1.1.4 Welcome week
1.1.5 Academic dishonesty
1.1.6 The academic integrity officer? C’est moi!
1.1.7 The queen of the sciences
1.1.8 Disagreements and other difficulties
1.1.9 A vision in white
1.1.10 Reading the riot act
1.1.11 Burgundy versus claret: a critical reappraisal
1.1.12 California dreaming
1.1.13 My last ever office hours?
1.1.14 Homeward bound
1.1.15 Dulce domum
1.1.16 Night, all!
1.2 SATURDAY MORNING
1.2.1 The Ting Tings
1.2.2 The passing of the possible
1.2.3 Epistemic and metaphysical possibility
1.2.4 Necessity, universality and the a priori
1.2.5 Modal epistemology
1.2.6 The same subject continued
1.2.7 An identity crisis
1.2.8 Some more identity crises
1.2.9 Cigar manufacture and a pyjama drama
1.2.10 Ideas and the birth of an ology: 1
1.2.11 Ideas and the birth of an ology: 2
1.3 LAZY SUNDAY AFTERNOON
1.3.1 The Kinks and the Small Faces
1.3.2 Theatre, cinematography and Roman history
1.3.3 The manipulation of visual ideas
1.3.4 Some more imagery
1.3.5 Interlude: a word from our sponsor
1.3.6 Drawing the threads together
1.3.7 Running out of time
1.3.8 Neachy is peachy, but Froyd is enjoyed
1.4 IN THE HEAT OF THE NIGHT
1.4.1 Another word from our sponsor
1.4.2 A feature-placing language
1.4.3 The ghost in the machine
1.4.4 Poltergeists, zombies and the autistic
1.4.5 The Midwich Cuckoos
1.4.6 Some textual analysis
1.4.7 The principles of politics
1.4.8 Expressing games with sartorial elegance
1.4.9 On assertion and assertiveness 1: Kant
1.4.10 On assertion and assertiveness 2: the phenomenologists
1.4.11 A Critique of Pure Practical Reason: thick concepts
1.5 EARLY IN THE MORNING
1.5.1 ‘Aubade’ (a.k.a. ‘early morning serenade’)
1.5.2 A Critique of Judgment: the whys and the wherefores
1.5.3 The Black Prince