Apoha: Meaning Via Exclusion

At its core, the raw content of our experience consists of a distinct set of sense perceptions that we’ve perhaps never encountered before. And yet, we somehow manage to make sense of them, by mapping these unique concrete particulars to more familiar abstract universals.

When browsing in a store, you come across plenty of objects that you’ve never seen before. But associating them with a more general class of things that you’re familiar with (shirts, pants, shoes, etc.) helps you reason about their nature and purpose.

Generalities are also deeply ingrained in the way we communicate using language. When analyzing the semantics of Sanskrit in his Mahābhāṣya, the grammarian Patañjali discusses the difference between referring to an individual cow and the idea of cowness.

The abstract property is universally shared by all cows, but functionally inert on its own — you can’t milk the notion of cowness!

It’s this abstract commonality, however, that allows us to refer to a plurality of instances as a group (cows, rather than individually listing each of them), and to make general pronouncements not tied to any particular instance, like an injunction to avoid cow slaughter.

From this, the grammarians explicate the concept of universals (jāti), denoting a general class of properties inherent in many concrete particulars (vyakti). While the role and utility of these universals are rarely contested, their exact ontological status has remained a hot topic of debate for millennia.

Among others, logicians of the Nyāya school believed that the sameness apparent in different particulars hinted at the presence of a real universal entity inherent in them all.

The Nyāyasiddhāntamuktāvalī defines a universal as, “that which, being eternal, is inherent in many things” (nityatve saty anekasamavetatvam).

These universal entities are believed to be the basis for the application of the same word to denote different instances.

They’re also considered to be immutable and eternal, since they appear to remain constant and outlast the destruction of all their individual instances. For example, we can still speak of dinosaurs today, even though all of them perished a long time ago.

To the Buddhist Pramāṇavādi, however, affirming the presence of a large number of these eternal, omnipresent and occult entities was a major issue.

Paṇḍita Aśoka ridicules this view in his Sāmānyadūṣaṇa.

There are five fingers, which are distinctly perceived. A sixth entity, the commonness of fingerhood, is not directly perceived. He who sees a sixth apart from these certainly also sees a horn on his head.

The medieval scholar Dignāga and his follower Dharmakīrti are largely credited with mainstreaming the focus on epistemology and linguistics in Buddhist discourse. Their approach to these topics fused an academic study of theoretical aspects with an introspective understanding of human cognition.

In the last chapter of Dignāga’s philosophical treatise, the Pramāṇasamuccaya, he develops a theory of language that manages to explain universals without requiring the presence of an everlasting, extrasensory entity.

Rather than treat general properties as intrinsic to external objects, his theory considers them to be mental constructs, the result of the linguistic and inferential nature of thought.

This, however, raises the question of the exact mechanism of action.

Verbal cognition (śabdam) is not a means of cognition separate from inference (anumānāt). That is, a word denotes (bhāṣate) its own referent (svārtham) by exclusion of other referents (anyāpohena).

Following from this definition, it would appear that the essential function of the word cow lies in excluding all non-cow particulars.

At first glance, this sounds slightly ridiculous, like an overly smug answer to a question one knows nothing about. But a deeper analysis reveals something different.

To provide some background, according to this theory, the result of direct sensory perception (pratyakṣa) is momentary, non-conceptual and beyond linguistic representation.

These sense perceptions give rise to unique phenomenal forms, which represent the individual character (svalakṣaṇa) of the particular object.

Words and concepts, on the other hand, operate in the realm of general properties (sāmānyalakṣaṇa).

A fundamental assumption of the apoha theory is the connection between verbal semantics and logical inference — which Dignāga claimed share the same underlying basis.

Like logical indicators, words and concepts are mental signs that indicate the nature of the particular phenomenal forms they refer to.

According to Dignāga’s theory of logic, the act of inference is based on the presence of a mark or sign (liṅga) that indicates some aspect of knowledge about an object. There are three conditions (trairūpya) crucial to establishing the validity of a sign.

  1. It is present in the current case (pakṣa).
  2. It is present in similar cases (sapakṣa).
  3. It is absent in contradictory cases (vipakṣa).

The first condition is essentially tautological. The second condition is necessary but not sufficient on its own. The third condition, however, is key to solidifying the invariable connection (avinābhāvin) between sign and object.
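The three conditions can be sketched as a simple check over observed cases. The classic smoke-and-fire example and the observation sets below are my own illustrative assumptions, not drawn from the source text:

```python
# A minimal sketch of the trairūpya conditions for a valid sign (liṅga),
# using the classic "smoke indicates fire" example.

def valid_sign(sign, paksha, sapaksha, vipaksha):
    """Check the three conditions for `sign` to be a valid indicator."""
    # 1. The sign is present in the current case (pakṣa).
    present_in_paksha = sign in paksha
    # 2. The sign is present in at least some similar cases (sapakṣa),
    #    i.e. cases known to possess the inferred property.
    present_in_sapaksha = any(sign in case for case in sapaksha)
    # 3. The sign is absent from all contradictory cases (vipakṣa),
    #    i.e. cases known to lack the inferred property.
    absent_in_vipaksha = all(sign not in case for case in vipaksha)
    return present_in_paksha and present_in_sapaksha and absent_in_vipaksha

# The hill (current case) shows smoke; kitchens (similar cases, which have
# fire) show smoke; a lake (contradictory case, no fire) shows none.
hill = {"smoke"}
kitchens = [{"smoke", "heat"}]
lakes = [{"water", "mist"}]

print(valid_sign("smoke", hill, kitchens, lakes))  # True
print(valid_sign("mist", hill, kitchens, lakes))   # False
```

Note how the third check fails as soon as a single contradictory case exhibits the sign, mirroring how one black swan invalidates the generalization.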

Some may contest the need for the third condition, claiming that knowledge of similar prior instances is enough to ground the inference. Here, however, we encounter the well-known problem of induction.

Is the sighting of a few white swans enough to make the generalization that all swans are white? To do so is to indirectly claim knowledge about every possible instance of a swan.

The use of the third condition here keeps us grounded in exact particulars and allows us to perform the inference without making universal generalizations about the essential nature of all swans.

The absence of contradictory cases reinforces the sign, while the presence of contradictions, like a black swan, invalidates it.

A similar semantic consideration motivates the apoha definition of a word as an exclusion. Basing the meaning of a word on a universally real entity implicitly asserts familiarity with some aspect of all of its possibly infinite instances, which is humanly impossible.

However, framing the meaning of a word as the exclusion of other referents does away with this problematic claim of partial omniscience: the utterance of the word swan now merely indicates the absence of non-swan particulars at the point of reference.

Unlike the potentially infinite cardinality of a universally affirmed entity, the cardinality of absence is — a very finite — zero.

Evidence for the apoha theory is also found in the analysis of how we build sentences to convey meaning.

Consider the word “lotus”. According to the apoha theory, this word conveys some meaning by excluding all non-lotuses at the point of reference.

Now, adding a meaningful specifier to the word, like the word “blue”, necessarily excludes more entities. The phrase “blue lotus” is strictly more exclusionary than the word “lotus” alone, as it additionally excludes pink lotuses, white lotuses, etc.

Dignāga also supports his thesis by describing the process of learning the meaning of a word (vyutpatti) in the context of his theory.

He considers the case of someone being taught the meaning of the word “tree” through demonstration using a prototypical example (dṛṣṭānta).

The instructor points to an example object and says, “This is a tree”. The joint presence of the word “tree” and the phenomenal form of the object creates a mental imprint (vāsanā) in the mind of the student.

The association is then reified through the joint absence (vyatireka) of the same word and similar objects at other referents.

What this means in practice is that the student doesn’t observe the word “tree” being applied to other objects that are non-trees. Dignāga claims that this non-observation plays a crucial role in the understanding of word meanings through exclusion.

It’s important to note that the exclusion here applies to the particular objects that are the referents of words. Words can still have relations between themselves.

The same prototypical example may be used for the word “oak”. By observing that the word “oak” doesn’t apply to trees that aren’t oaks, the student can infer that oaks are a subset of trees.
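This learning process can be sketched as accumulating joint-presence observations and then reading off subset relations from what was never observed. The word/object pairs below are my own illustrative data:

```python
# A rough sketch of vyutpatti: word extensions built up from demonstrated
# (word, object) pairings, with non-observation bounding each extension.

from collections import defaultdict

observations = [
    ("tree", "oak_1"), ("tree", "pine_1"), ("tree", "oak_2"),
    ("oak", "oak_1"), ("oak", "oak_2"),
]

extension = defaultdict(set)
for word, obj in observations:
    # Joint presence (anvaya) of word and object forms the association.
    extension[word].add(obj)

# "oak" was never observed applied to a non-oak tree (vyatireka), so its
# known extension sits inside that of "tree": oaks are a subset of trees.
print(extension["oak"] <= extension["tree"])  # True
```

The crucial point, as in the text, is that the subset relation holds between the words’ extensions, while the exclusions themselves apply only to particular objects.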

This example also establishes the apoha view of classes as causally constructed mental models, as opposed to being foundational axioms of reality.

One way to view the apoha theory is as a kind of metaphysical minimalism. Since it only requires commitments to exclusion and perceptible particulars as fundamental primitives, it is far more parsimonious than realist alternatives that postulate a number of universally real properties.

Another interpretation of this theory is as a response to a naive positive (vidhirūpa) realism about the nature of classes. This is apparent in Dharmakīrti’s analysis of the class of fever-reducing medicines.

The naive realist will claim that the singular universal property of being a fever-reducer is possessed by all the different medicines. But in reality, each medicine has a different mechanism of action and varying efficacy based on situational conditions.

The apoha definition of the term fever-reducing, however, merely excludes non-fever-reducing substances and doesn’t require any positive assertions on the essence of fever-reduction.

In HBO’s hit TV show, Silicon Valley, there is a scene where one of the characters demos an app that correctly identifies the item of food in a picture as a hotdog. He is enthusiastically prompted to try it on a slice of pizza next. In a rather anticlimactic twist, the app simply says: “Not hotdog!”

Although the skit is largely meant to parody the state of today’s startup ecosystem, it serves as a somewhat accurate, albeit simplistic, depiction of modern artificial intelligence, and it raises an interesting philosophical point.

Contemporary approaches to machine classification don’t depend on detecting some latent universal essence in the objects they seek to classify, nor do they rely heavily on positive descriptions of the target classes. Instead, they use machine learning techniques that train a response to an error signal, a response that seeks to exclude negative examples.

With every affirmative definition of a class, there is an implicit complement. Viewed this way, it appears that the telic function of a class is to discriminate between its positive and negative instances.

As Dharmakīrti says in his commentary on the Pramāṇavārttika:

There can be no affirmation (anvaya) of a thing which does not exclude (vyāvṛtti) the other; nor can there be a negation of that which cannot be affirmed.