A History of the Analytic Tradition

Analytic philosophy had its birth near the beginning of the 20th
century. Major developments in the analytic tradition were made by some of the
most prominent names in philosophy, among them Bertrand Russell, A.J. Ayer,
Ludwig Wittgenstein, and Wilfrid Sellars. The best way to characterize these
four philosophers with respect to their work on knowledge and empiricism is to
divide them into two camps. The first camp, home to Russell and Ayer, is
foundationalism. The second, which includes Wittgenstein and Sellars, is
holism. The primary distinction to be drawn between these two camps is that the
foundationalists believe that there is something which grounds all our
knowledge claims, while the holists take knowledge to arise from a sort of
embedding within a linguistic framework.

Historically speaking, the analytic tradition - and primarily Russell's work in
'The Philosophy of Logical Atomism' - arose as a response to idealism. Russell
sought to unify our linguistic practices with the way the world is. Russell, as
a realist, felt that our words and utterances should carve reality at its
joints. If it were the case that what we said about the world was not true of
reality, then everything which we say would be radically mistaken. If language
is meant to be intentional, it must stand in a sort of relation to something
other than itself. Language must represent the world. Indeed, if the idealist
were ultimately correct about what he says of the world, it would seem that
nobody would be able to agree with anyone else about the facts of the matter. Thus,
if we take ourselves to be speaking of the world rightly, the things we say
should be taken to be of a proper partitioning of the world. In this way, our
language represents the world as it is and our experiences of it are real. In
addition, Russell held what he called the 'common-sense belief' that there are
many separate things. Taken jointly, these points led Russell to the
development of logical atomism, a philosophical thesis which sought to break
language and, as a result, the world, into small, logically independent bits,
called atoms, through a process of ongoing analysis. This analysis is the
theoretical and practical work of breaking the experiences we have of the world
apart into their smallest constitutive bits, called sense data. These
constitutive bits are the simplest things, the stuff of which the complexes we
experience are made. A claim about a simple might be 'there is a bit of red in
my visual field', or some similar statement about color, shape, or size. These
simples are the logical atoms of the world, where experience and analysis
bottom out. This 'bottoming out' happens at the level of objects with which we
have direct acquaintance. Indeed, analysis operates only on those objects which
are complex, built out of simples, and the final items of our analysis are
objects of direct acquaintance, those which relate to our most simple
symbols in our language. One of the primary problems with this focus on an
analysis of experiences into logical atoms was that these atoms were hard to get
at. Indeed, Russell himself seems uncertain whether such an analysis is even
possible in principle, or whether it ever manages to 'bottom out' into actual
atoms rather than things we merely take to be atoms because we cannot manage to
reduce them any further. Indeed, Russell admits that the process of analysis
could go on ad infinitum, never terminating in a simplest thing, always
yielding something simpler. It is for this reason that Russell had a theory of
'forced acquaintance'. What the theory amounts to is that, in light of the fact
that an analysis could continue indefinitely, we should consider something to
be an atom because it is an atom _for us_. In essence, because our logical
symbols are only so simple and our capabilities of analysis only so well
refined, what we take to be simples are in fact the simplest things we could
arrive at without further refinement. As a result, we are left to call these
final items atoms. Considering these troubles,
Russell maintained that at the very least an analysis could be done, and we
could arrive at the foundations of our own experience which aligned with the
reality we experience.

From this position of justifying knowledge by way of our experiences, the
positivist tradition which arose from this work provided a rigorous, logical way
of demonstrating truth. A.J. Ayer developed the verification principle in his
'Language, Truth, and Logic'. The principle of verification was a thesis to
provide a method by which we can solve the problems with reference in language.
Russell took it as an assumption that language stood in some sort of relation to
the world, while the positivists attempted to provide a way by which we could
link up the reference of language with the world we experienced. A sentence
would have meaning just in case it was analytic, or if it could be verified by
some collection of observations that could, in principle, be done. What this
account of meaning entails is that there are certain sorts of questions which
are literally meaningless. That is to say, a sentence is literally meaningful if
and only if the means by which it can be verified are known and, in principle,
doable. Those statements which are not analytic a priori, like the statements of
mathematics or tautologies, must be verifiable in order to have any sort of
meaning or sense.
Ayer drew several distinctions in hopes of weakening or strengthening the
principle in response to criticisms he received. For
instance, there is a difference in the kinds of verifiability we should be
concerned with. Certain things are verifiable only in principle if the practical
means to perform certain observations are unavailable. Likewise, some things are
verifiable in practice; I can put myself in a situation where I would easily be
able to verify a claim with some observations. For instance, I cannot verify the
existence of the electron without the assistance of, perhaps, an electron
microscope or perhaps some relatively strong magnets. But this discrepancy
should not lead me to the conclusion that talk of electrons is nonsense. I
should instead trust the scientist is not fooling me and that, if I had the
proper tools and scientific acumen, I would likewise be able to recognize the
subatomic particle he calls the electron. There's also the case of how strong
such observations might have to be in order to demonstrate the truth or falsity
of a claim. Indeed, Ayer has a notion of both strong and weak verifiability, in
which the certainty of the claim is absolute or only rendered more probable,
respectively. For Ayer, factual meaningfulness simply requires weak
verifiability in principle. This means that I need only have the possibility of
performing certain observations, and those observations need only render the
claim more probable than not. As it turns out, a
majority of the sentences which are meaningless are metaphysical ones. For a
positivist, the question of whether the absolute is totally good is akin to
asking how big the color blue is. These sentences have no possible
observations that could be done, even in principle, which could lead to their
verification. As a result, metaphysics is entirely eliminated from meaningful
philosophical discourse. Given the strength of the verification principle, the
question must of course be asked of what exactly it might mean. There are two
options: either the principle is analytic, or there are a set of observations
which can in principle be made to make its truth more probable than not. As it
turns out, neither of these is the case. Indeed, the idea that some set of
propositions, taken jointly with the verification principle, would entail an
observation not entailed by those propositions alone seems illusory at best. It
is unclear what such an observation, gained only with the help of the
verification principle, would look like. If no such observation is possible, then perhaps the
proposition is itself analytic, true just in virtue of its symbols and what it
represents. However, the negation of the principle does not seem to entail any
sort of contradictions at all. Taking into consideration the fact that the
principle does away with all of metaphysics as literally meaningless, it seems
very much more likely that the principle itself is false. After all,
metaphysicians have taken themselves to be doing quite serious work. As a
result, the most damaging charge against the thesis of verification is that it
is self-refuting, and this fact led to the fall of the positivist school of
thought.

While Wittgenstein himself was a supporter of the verification principle in his
'Tractatus', a major change can be seen to have occurred when he wrote 'On
Certainty'. This is the point in the analytic tradition where we can see a major
refocusing of efforts by people to think about knowledge and meaning
holistically as opposed to atomistically or foundationally. Instead of examining
meaning for language in terms of observations or tautological fact, Wittgenstein
took the meaning of words to derive from language users themselves. An utterance,
for the later Wittgenstein, has meaning in virtue of how it is used by language
speakers. These language users are following particular kinds of grammatical
rules when they speak. These rules are what make up what Wittgenstein calls a
'language game'. The project he embarked on essentially aimed to silence the
skeptic. When the philosopher speaks of certain things like 'I know I have a
hand', we might perhaps look at him incredulously; nobody, after all, was
questioning whether he knew he had a hand. There was no good reason to doubt
that a hand was there. The skeptic is misusing language when he asks certain
kinds of questions; he is violating the rules of the language game, on
Wittgenstein's view. These rules are what guide the language games we play and
learn, and these rules are, as a result, a part of logic. They are analogous to
the rules of a game of chess. There are good moves in chess like, for instance,
mating the king, and there are bad moves, like giving up your queen for a pawn,
or constantly retreating your pieces to never advance your own position.
Likewise, there are incorrect moves in a game of chess. If one were to pick up
the bishop and move it like a rook, we might slap the piece out of their hand
and yell at them for cheating. Worse than this, there are nonmoves. Perhaps the
opponent begins moving their pieces like checkers, or starts rolling dice to see
how many squares they may move each turn. We would say that these people are
simply not playing the same game. Indeed, they are doing something quite
different from what we are doing. We do not understand each other. But these
distinct games we may play are each descriptive of a logic. At its heart, the fact of the
matter is that Wittgenstein has rejected atomic simples. Indeed, he points out a
fact about determinate exclusion. When an object is red, this means something
more than just 'that object is red'. Indeed, it also means that that object is
not blue, not green, and so on. This point demonstrates the interconnectedness of our concepts, and that
any claim about knowledge is intimately linked with others. This position came
about in part due to Wittgenstein's turn towards inferentialism. Of course, the
problem presented by inferential knowledge is the question of which concepts an
inferential claim interacts with in generating knowledge.

This development in Wittgenstein's philosophy can be recognized within Sellars'
work in 'Empiricism and the Philosophy of Mind'. The theory which Sellars
paints for us seems to be the most convincing of those we studied this
semester.
Perhaps the most important part of the projects discussed thus far is, if we get
rid of logical atoms or verifiability of statements in a rigorous way, by what
token do we accept knowledge claims? The problem with inferential knowledge is
that it does not seem to interact with any other epistemic concepts in a way
that should lead to knowledge. Indeed, the items of 'the given', the simplest
of sense data, cannot be epistemic facts, and they likewise cannot be
nonepistemic facts. The
crux, for Sellars, is that because our concepts are so deeply interconnected and
our linguistic practices so heavily intertwined with the ways we articulate our
knowledge claims, our language usage must come prior to our concept acquisition.
In this way, I do not know that something is red because when I observe it I
recognize a bit of red in my visual field, and infer redness from this
experience. Instead, I know that something is red because I can support and
justify my claim 'that is red'. Indeed, because we can report our experiences
and partake in justificatory practices, we are credited with knowing.


Dilyn Corner (C) 2020-2022