The Tetrast

Deductive vs. ampliative; also, repletive vs. attenuative

August 15, 2015.

Latest significant edit: October 18, 2015. More revisions to come.

We have the terms ‘deductive’ and ‘ampliative’ for inferences wherein the conclusions don’t or do, respectively, add something beyond that which the premisses give. Charles Sanders Peirce used the term ‘ampliative’ as equivalent to ‘non-deductive’ (as discussed at this post’s end). But I can’t find generic terms for inferences wherein the conclusions don’t or do, respectively, omit something given by the premisses.

So, I picked out a couple of words — repletive and attenuative — that people might find handy. ‘Repletive’ ought to be pronounced re-PLEE-tiv, to rhyme with ‘depletive’ and ‘completive’ (as they ought to be pronounced). I first discussed the words in a post “Inference terminology” to peirce-l 2015-04-07. (Under “Word choices” below, I discuss why the words seem better choices than others.)

(‘Entail’ = ‘deductively imply’.)

Deductive =
Everything (explicit or entailed) in the conclusion is (explicit or entailed) in the premisses.
Ampliative =
Something (explicit or entailed) in the conclusion is not (explicit or entailed) in the premisses.
Repletive =
Everything (explicit or entailed) in the premisses is (explicit or entailed) in the conclusion.
Attenuative =
Something (explicit or entailed) in the premisses is not (explicit or entailed) in the conclusion.
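(More compactly: writing the premisses, conjoined, as P, the conclusion as C, and ‘P entails C’ as P ⊨ C, one gets: deductive = P ⊨ C; ampliative = P ⊭ C; repletive = C ⊨ P; attenuative = C ⊭ P. The ‘⊨’ sign is merely shorthand here for ‘entails’ as just defined.)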

Summary of properties from perspectives of proof theory & model theory
(not that I know much about mathematical logic):
Deductive: The premisses entail the conclusions. Automatically preserves truth.
Ampliative (i.e., non-deductive): The premisses do not entail the conclusions. Does not automatically preserve truth.
Repletive: The premisses are entailed by the conclusions. Automatically preserves falsity.
Attenuative (i.e., non-repletive): The premisses are not entailed by the conclusions. Does not automatically preserve falsity.
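(Why the preservation claims hold: if the premisses, conjoined as P, entail the conclusion C, then C is true whenever P is true, so truth is automatically carried from premisses to conclusion; and if C entails P, then, by contraposition, C is false whenever P is false, so falsity is automatically carried from premisses to conclusion. The converses hold as well, so truth-preservativeness could serve to define the deductive, and falsity-preservativeness the repletive.)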

Each such property has its merits or virtues, as well as drawbacks, in inference:

(Of course elementary inference, as a topic, does not exhaust the story of security and opportunity in inquiry generally.) Each virtue comes with a diametrically opposed drawback. As risk managers like to put it, opportunity equals risk. By the same stroke there is a sense in which security, safeness, equals futility — ‘nothing ventured, nothing gained’. Freud made much of the fact that one tends to have less choice between pleasure and pain than between both and neither. Still, four combinations of the above properties are possible in inference:

Intersections of classes of inference:
Deductive and repletive: ‘Reversible’ (i.e., equipollential, a.k.a. equivalential) deduction.
Ampliative and repletive: Induction, as one often thinks of it (but often not as it is actually framed*).
Deductive and attenuative: ‘Forward-only’ deduction.
Ampliative and attenuative: Surmise, conjecture, abductive inference (and often induction as actually framed*).

* Note on how induction is framed or expressed: For example, ‘⅗ of this actual sample is blue, so (likely) ⅗ of the total is blue’ would usually be considered inductive. Still, it’s not only ampliative, it’s also attenuative. The conclusion that ⅗ of the total population is blue does not entail the premiss that ⅗ of this actual sample is blue, even though one usually thinks of induction as inferring from a part to a whole including the part. See below, under “Fairly framing the inference”.

Building a systematic view.

Inferences may be worth classifying in the above four-fold manner because, if the classification works (in particular, if all induction ‘rightly framed’ is repletive as well as ampliative), then four major inference modes can be defined in a uniform ‘hard-core’ formal manner that exhausts the possibilities, by their basic internal entailment relations (or preservativeness or otherwise of truth and of falsity). Meanwhile their various attempted heuristic merits — abductive plausibility (natural simplicity), inductive verisimilitude / likelihood (in C. S. Peirce’s sense), ‘forward-only’-deductive novelty, and equivalential-deductive nontriviality / depth — can be treated as forming a systematic class of aspects of fruitfulness or promisingness of inference, with each of them related (as the compensatory opposite, in a sense) to its respective inference mode’s definitive internal entailment relations. Those heuristic merits are difficult to quantify usefully or even to define exactly; yet, together with the entailment relations, they illuminatingly form a regular system in which each heuristic merit helps to overcome, so to speak, the limitations of its inference mode’s definitive entailment relations. At any rate there is a fruitful tension between the heuristic merit and the entailment relations in each inference mode.

Entailment relations and counterbalancing heuristic merits
(assuming that induction is repletive as well as ampliative)

The propositional schemata below exemplify entailment relations without heuristic merits.
Deductive and repletive: ‘Reversible’ deduction, e.g., p∧q ∴ p∧q. Logically simple. Compensate with the nontrivial, complex, deep.
Ampliative and repletive: Induction, e.g., p∧q ∴ p∧q∧r. Newly adds claim(s). Compensate with verisimilitude (conclusion’s likeness to the old claims).
Deductive and attenuative: ‘Forward-only’ deduction, e.g., p∧q∧r ∴ p∧q. Claims less, vaguer. Compensate with novelty, by concision, of aspect.
Ampliative and attenuative: Abductive inference, e.g., p∧q ∴ q∧r. Logically complicated. Compensate with natural simplicity (abductive plausibility).
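For readers who like to check such claims mechanically, here is a minimal sketch in Python (my own illustration, not part of the original tables) that brute-forces propositional entailment over the atoms p, q, r and sorts each of the four example schemata into the deductive/ampliative and repletive/attenuative bins defined above. The helper names (entails, classify) and the encoding of formulas as Boolean functions are merely illustrative choices.

    # Brute-force propositional entailment over three atoms, used to sort an
    # inference into the fourfold deductive/ampliative x repletive/attenuative scheme.
    from itertools import product

    ATOMS = ("p", "q", "r")

    def entails(premiss, conclusion):
        # True iff every truth assignment making `premiss` true makes `conclusion` true.
        for values in product((True, False), repeat=len(ATOMS)):
            v = dict(zip(ATOMS, values))
            if premiss(v) and not conclusion(v):
                return False
        return True

    def classify(premiss, conclusion):
        # Premisses-entail-conclusion gives deductive vs. ampliative;
        # conclusion-entails-premisses gives repletive vs. attenuative.
        return (
            "deductive" if entails(premiss, conclusion) else "ampliative",
            "repletive" if entails(conclusion, premiss) else "attenuative",
        )

    examples = {
        "p∧q ∴ p∧q (reversible deduction)": (lambda v: v["p"] and v["q"],
                                             lambda v: v["p"] and v["q"]),
        "p∧q ∴ p∧q∧r (induction)": (lambda v: v["p"] and v["q"],
                                    lambda v: v["p"] and v["q"] and v["r"]),
        "p∧q∧r ∴ p∧q (forward-only deduction)": (lambda v: v["p"] and v["q"] and v["r"],
                                                 lambda v: v["p"] and v["q"]),
        "p∧q ∴ q∧r (abductive inference)": (lambda v: v["p"] and v["q"],
                                            lambda v: v["q"] and v["r"]),
    }

    for name, (prem, concl) in examples.items():
        print(name, "->", classify(prem, concl))

Run as a script, it reports the four examples as deductive-and-repletive, ampliative-and-repletive, deductive-and-attenuative, and ampliative-and-attenuative respectively, agreeing with the table.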

Notes about the above table:

The ‘hard-core’ definitions often do little to help one decide whether an actual inference in the course of thought is rightly or fairly to be framed as inductive or abductive or otherwise; and considerations of the inference’s generic sought heuristic merit (such as plausibility), its mode of promise or fruitfulness, seem often essential. It may seem that one can have the more-formal definitions paired one-to-one with some more-functional definitions; yet, for example, deduction is not defined as explicating, bringing the implicit newly to light, since an inference in the form ‘p, ergo p’ is deductive but its conclusion extracts no new or nontrivial perspective from its premisses; on the other hand, if no deduction were ever to make explicit the merely implicit, then no mind would bother with deductive reasoning. The heuristic merits deserve attention because, in the pervasive absence of all the heuristic merits, no mind would bother with reasoning — explicit, consciously weighed inference — at all. Deduction would lose as much in general justification and rationale as any other inference mode would. Little in general would remain of inference, conscious or unconscious; mainly such activities as remembering and free-associative supposing would be left.
Inferences sapped of heuristic merits
Even these in their seasons have other merits
Deductive and repletive: ‘Reversible’ deduction, e.g., … ∴ p, ∴ p, ∴ p, ∴ … . Fixated remembrance.
Ampliative and repletive: Induction, e.g., … ∴ p∨q, ∴ q, ∴ q∧r, ∴ … . Swollen expectation.
Deductive and attenuative: ‘Forward-only’ deduction, e.g., … ∴ p∧q, ∴ q, ∴ q∨r, ∴ … . Constricted notice.
Ampliative and attenuative: Abductive inference, e.g., … ∴ p, ∴ q, ∴ r, ∴ … . Wild supposition.

Quite unpromising, and vacuous in that sense, is an ampliative reasoning without plausibility or verisimilitude, though it have ‘blackboard validity’ (a phrase I borrow from Jeff Downard) as an ampliative inference (i.e., the premisses don’t entail the conclusion). The proposed definitions of inductive and abductive inference admit pointless inferences, but the standard definitions of deduction admit pointless inferences as well. In the study of arguments, most of the theoretical interest will be in schemata and procedures that not only (A) bring ‘blackboard validity’ — conformity with the definition and concomitant virtues of the given basic mode — but also (B) ensure some modicum of the inference mode’s correlated mode of promise, such as natural simplicity (Peirce’s forms), a new aspect (the categorical syllogistic forms), etc. Together, such conformity and such promise relate inference modes to distinctive aims at explanation, prediction, etc.

Still, unless the question of whether induction’s essential form is repletive as well as ampliative is settled in the affirmative, it is best to continue defining abductive inference as inference to a (more or less plausible) explanation, but one could coin a term such as ‘aliduction’ for inference both ampliative and attenuative, so that the questions become, is all abductive inference aliductive? and vice versa? (One could likewise coin ‘pluduction’ for repletive ampliative inference and ask whether all induction, rightly framed, is pluductive, and vice versa. ‘Equiduction’ and ‘minuduction’ respectively for ‘reversible’ and ‘forward-only’ deductions might offer some convenience, too.)

Word choices.

‘Repletive’ seems better than ‘retentive’ (although maybe it’s just me), because ‘retentive’ suggests not just keeping the premisses, but restraining them or the conclusions in one sense or another. The word ‘preservative’, to convey the idea of preserving the premisses into the conclusions, would lead to confusion with the more usual use of ‘preservative’ in logic’s context to pertain to truth-preservativeness (and falsity-preservativeness). If people dislike the word ‘repletive’ for the present purpose, then I suppose that ‘transervative’ would do.

‘Attenuative’ seems much better than ‘precisive’ or ‘reductive’ for non-repletive inference. The word ‘precisive’ seems applicable only abstrusely to an apparently dis-precisive inference in the form of ‘p, ergo p or q’. More to the point, attenuative inference generally seems to involve a narrowing of logical focus but that is an effect on perspective; logically it increases vagueness and that is how it can narrow logical focus. ‘Reductive’ may be less bad than ‘precisive’ in those respects but is rendered too slippery by irrelevant senses clinging from other contexts and debates.

Fairly framing the inference.

Inductive vs. abductive.

The Peirce scholar Nathan Houser said in “The Scent of Truth” (Semiotica 153—1/4 (2005), 455–466), “But now that abduction is taken seriously, and so much attention has turned to its examination, we find that it is indeed a very slippery conception.” A gain from the ‘hard-core’ definitions based on entailment relations or, just as well, on truth/falsity-preservativeness, would be a non-slippery definition of abductive inference (as inference that is both ampliative and attenuative — the premisses neither entail, nor are entailed by, the conclusions). The very idea of inference by way of both-ways non-entailment evokes the notion of somewhat leaping, a guessing; for what it’s worth, it evoked that notion dauntingly for me before I read Peirce or heard of abductive inference.

So defined, and distinguished as attenuative from induction as repletive, abductive inference would have the autonomy that Tomis Kapitan, for example, found lacking (in “Peirce and the Autonomy of Abductive Inference” (PDF), Erkenntnis 37 (1992), pages 1–26). Ideas about natural simplicity, explanatory power, pursuit-worthiness, etc., which contribute to the current slipperiness of definitions of abductive inference, would instead be further salient issues of abductive inference, neither explicitly contemplated in its definition nor incorporated into the content of all abductive inferences (which incorporation, besides the problems that Kapitan finds, would make one abductive inference into many, just by people’s differing soever fuzzily in the amounts of plausibility, economy, pursuit-worthiness, etc., that they assert in it), just as the somewhat slippery ideas of novelty, nontriviality, predictive power, etc., are further salient issues of deduction, neither explicitly contemplated in its standard definitions nor incorporated into the content of all deductions (and such couldn’t usefully be done deductively). One could still qualify the illative relation by saying ‘therefore, abductively,’ or ‘therefore, deductively,’ or whatever. Yet, some slipperiness remains, and it is not only in the heuristic merits sought in inference but also in regard to the use and applicability of the definitions themselves. Induction as actually framed is often not only ampliative but also attenuative — the conclusions do not always entail the premisses, even though one usually thinks of induction as inferring from a part to a whole including the part.

There are differing ways to reframe the inference ‘⅗ of this actual sample is blue, so (likely) ⅗ of the total is blue’ so that its conclusion will entail its premiss, ways that are perhaps to be favored over the example if they reflect better the inquirial interest involved in induction. One could say ‘some subset’ instead of ‘this actual sample’, or characterize this actual sample in the conclusion as well as the premiss (like a concluding graph that represents the actual measurements with a darker line). However, take an inference that is very elementary and not especially statistical: ‘There are blue flowers, ergo there are no non-blue flowers.’ The conclusion expresses logicians’ conventional reading of the sentence ‘all flowers are blue’ and does not entail the premiss or even that there are any flowers (or any blue things) at all. The conclusion does not have much verisimilitude in C. S. Peirce’s sense, i.e., similarity of the concluded scenario to the premissual scenario. Does the conclusion bring to the premiss more of an inductive echo or more of an abductive simplification? Also, it is a silly toy example and could just as well have concluded ‘there are no non-flower blue things.’ Anyway, if one infers instead, ‘There are blue flowers, ergo there are flowers but no non-blue flowers’ (or instead ‘...ergo there are blue things but no non-flower blue things’), then the silly toy’s conclusion entails the premiss and has more verisimilitude (again in Peirce’s sense), at least more than before.
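(To spell the toy example out in quantifier notation, using Fx for ‘x is a flower’ and Bx for ‘x is blue’ (lettering of my own choosing): the premiss is ∃x(Fx ∧ Bx); the conclusion ‘there are no non-blue flowers’ is ¬∃x(Fx ∧ ¬Bx), equivalently ∀x(Fx → Bx), which holds even in a flowerless universe and so does not entail the premiss; the repaired conclusion ‘there are flowers but no non-blue flowers’ is ∃x Fx ∧ ∀x(Fx → Bx), which does entail ∃x(Fx ∧ Bx).)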

If, on the other hand, one concludes in a rule such as ‘there are no non-blue flowers’ without explicit reference to the all-blue non-empty set of flowers asserted in the premisses (or to a superset of it), then the logical focus has been both ampliated and attenuated to a focus on a universal rule itself, a rule extendable indefinitely or as if indefinitely, and that is an idea new enough to the premissual case to count as abductive, even if no peculiar individual condition is abduced along with the new rule to help explain the observations in the premisses. Often enough some of the content of such guessed rules — content included in order to render the guess simpler, more natural, in some respect — is not distinctively exemplified in the cases observed so far and is an idea new in that sense too. So, a definition of abductive inference as ampliative-cum-attenuative inference covers not only the idea of inferring by extension of a rule from particulars to particulars, but also the idea of abductively guessing, from existential particulars, a conclusion consisting of a hypothetical universal that (A) does not, by itself, entail positive instances (such as ‘G∧H’ where the rule is ‘(G→H)’) and is not conjoined (within the guessed conclusion) with mention of them, and (B) is not conjoined (within the guessed conclusion) with an abduced hypothetical explanatory individual circumstance. Peirce approaches somewhat an idea of this kind of abductive inference in 1903, see Essential Peirce v. 2, p. 287, passage at Commens. There Peirce writes of abductively inferring by generalization. In “Upon Logical Comprehension and Extension” (1867, see Collected Papers v. 2 ¶422 or Writings v. 2, p. 84), he wrote “Generalization is an increase of breadth and a decrease of depth, without change of information,” that is, without change in the amount of breadth (extension, a.k.a. denotation) times depth (comprehension, a.k.a. connotation). In “A Guess at the Riddle” (1877–8 draft, see Essential Peirce v. 1, p. 273), discussing evolution, he wrote, “The principle of the elimination of unfavorable characters is the principle of generalization by casting out of sporadic cases, corresponding particularly to the principle of forgetfulness in the action of the nervous system.” So we end up with the idea not only of abducing a rule but also, in so abducing, omitting some characters mentioned in the premisses; the generalization is selective. Still, in 1902 (as quoted in this post’s final paragraph), before he discussed abductive generalization in 1903, Peirce wrote that he had previously taken the doctrine of logical extension (objects) and comprehension (characters) as more fundamental than it really is for understanding abductive inference.

I think that we usually and rightly think via conjunctive compounds of logical quantities, and that the recognition that we do so can bring some clarification of the difference between abducing to a hypothetical universal and inducing to a larger or total population, superset, etc. See the post “Logical quantity & research scopes - universal, general, special, particular, individual, singular”.

Deductive vs. ampliative.

Now, the deductive validity of some schemata in logic, such as ‘∀G ∴ ∃G’, depends on whether one has stipulated that the universe of discourse is non-empty (absent that stipulation, the universal premiss is vacuously true of the empty universe while the existential conclusion is false there, so the schema is then ampliative). Stipulating that deductive validity shall exclude the empty universe amounts to saying, not merely ‘there exists something’, but ‘let every proposition entail that there exists something’ or, equivalently, ‘Let ‘truth’ (‘T’) be formally equivalent to ‘there exists something’’. Generally, I fret that specially stipulated rules of formal implication can lead to complications in distinguishing inference modes from one another. I haven’t seen such issues discussed in texts on classification of inference modes. 1. Maybe such issues are easily resolved. 2. Maybe it’s best to keep the simple things simple. 3. Maybe I’m in over my head, but I’ll continue my dive a bit further.

Abductive inference again.

Now, suppose that one says, ‘Let every proposition entail that, when it rains at night, the lawn is wet the next morning’. It would be, not a rule of strictly logical implication, but instead a rule of, say, meteorological implication, corresponding to a local natural law of weather. In that universe, the premiss that the lawn is wet this morning is entailed, deductively, formally implied, by the conclusion that it rained last night. That’s a case where a typical scenario of abductive inference looks like the reverse of (attenuative) deduction, and such a character has been ascribed, by Peirce and others, to abductive inference. That view lends itself to one’s holding a premissual rule to be not just a premiss but a kind of standing given entailed, deductively, formally implied, by every proposition in that universe. If one still calls the resulting inference abductive, then one cannot define abductive inference strictly in terms of entailment relations, but has to resort to the comparatively slippery ideas of plausibility, aim at explanation, etc., in order to distinguish it from inductive inference. Yet, it is especially on the basis of its very aim at plausible explanation, that one would argue that one should not so frame the inference and that an involved reverse of an attenuative deduction can be adequately noted instead by saying that, in such an abductive inference, the conjunction of premissual rule and conclusion entail the premissual case that the lawn is wet this morning; in other words, the conclusion switches places with one of the premisses, not with both premisses conjoined. Yet, what if the rule is a rule of, say, special relativity? It’s difficult not to think of it, at least for comparatively practical purposes, as a standing given in our actual universe; special relativity is regarded as a practical certainty. There seems little if any reason not to be flexible and willing to accommodate such thinking within theoretical models, as long as it is understood that higher-level, theoretical rules chosen or tailored to reflect lower-level (e.g., empirical) rules are not actually true to the lower-level domain by mere definition or stipulation. In that case, the definitions of inference modes by entailment relations will still work at an elementary level that gives the reasoner a kind of basic compass, and one will simply need to keep in mind that allowing much freedom with the formal givens of the universe of discourse can lead to complications for the entailment-based classification of inferences. Put that way, it sounds unsurprising. Maybe I’m making too much of these complications. After all, we already have a situation in deductive logic itself where ‘∀G ∴ ∃G’ is ampliative absent the stipulation of the universe’s non-emptiness, and deductive otherwise; nobody regards that as a deal-breaker for the ampliative-deductive distinction.
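(Schematically, with r for ‘it rained last night’, w for ‘the lawn is wet this morning’, and the rule written r → w (lettering of my own choosing): framed as an ordinary two-premiss inference, ‘r → w and w, ergo r’, the premisses do not entail the conclusion, while the conjunction of the rule with the conclusion, (r → w) ∧ r, does entail the premissual case w; whereas if the rule is made a standing given of the universe, so that every proposition entails r → w, then the conclusion r by itself entails w, and the inference comes out repletive rather than attenuative.)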

Fields that aim toward ‘reversible’ deduction, ‘forward-only’ deduction, induction, and abductive inference.

A topologist once told me that ‘These two statements are equivalent’ is itself one of the most common statements in mathematics. Millennia ago, Aristotle spoke of mathematical premisses and conclusions as tending to “reciprocation” (see Posterior Analytics, Bk. 1, Ch. 12, link to pertinent text). For example, in mathematical induction (actually a kind of deduction), one takes a thesis that is to be proved, and ‘translates’ it (a fairly simple step) into the ancestral case and the heredity, conjoined. Once they’ve been separately proved (that is the hard part, and it too, I’ve read, is often done by equivalential deduction, except, for example, when greater-than or lesser-than statements are involved), then the mathematical induction itself, the induction step, consists in ‘translating’ the conjunction of ancestral case with heredity back into the thesis, demonstrating the thesis. The reasoning in pure mathematics tends to be translative or transformative, from one proposition (or compound) to another proposition equivalent to it and already proved or postulated, or just easier to work with for the purpose at hand. Pure mathematics has far-reaching networks of such bridges between sometimes the most disparate-seeming things. When one’s scratch work proceeds through equivalences from a thesis to postulates or established theorems, then one can simply reverse the order of the scratch work for the proof of the thesis. Reverse mathematics, a project born in mathematical logic, takes up the question of just which mathematical theorems entail which postulates as premisses. This shows again the prominence of deduction through equivalences in pure mathematics; the reverse of the reasoning in pure mathematics is typically still reasoning by pure mathematics (even if with inquisitive guidance from mathematical logic).
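(Schematically, in the usual notation for a thesis ‘for every n, P(n)’ about the natural numbers: the ‘translation’ trades the thesis for the conjunction of the ancestral case P(0) with the heredity ‘for every n, if P(n) then P(n+1)’; proving those two conjuncts is the hard part; and the induction step proper trades the proven conjunction back for the thesis. With the well-ordering of the naturals as a standing given, the thesis and the conjunction entail each other, the forward direction being immediate and the backward direction being just the induction principle, which is why the step counts here as equipollential.)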

In an example contrasting to that, deduction of probabilities and statistical induction, two neighborly forms of quite different modes of inference, are seen as each other’s reverse or inverse, deduction of probabilities inferring (through ‘forward-only’ deduction) from a total population’s parameters to particular cases, and statistical induction inferring in the opposite direction (e.g., in Devore’s Probability and Statistics for Engineering and the Sciences, 8th Edition, 2011, beginning around “inverse manner” on page 5, into page 6). Now, inferential statistics should not be nicknamed ‘inverse probability’, an obsolete phrase that comes from De Morgan on Laplace and refers to a more specific idea, involving the method of Bayesian probability. On the other hand, the inverse of mathematics of optimization actually goes by such names as inverse optimization and inverse variations. Inverse problem theory seems defined more broadly than intended, so broadly as implicitly to encompass any inductive field, such as inverse optimization and inferential statistics, that is inverse to a deductive field. Such deductive fields seem to involve the development of applications of pure mathematics in order to address ‘forward problems’ in general, the problems of deducing solutions, predicting data, etc. from the given parameters of a universe of discourse, a total population, etc. — which is a description that fits the deductive mathematics of optimization, of probability (and uncertainty in Zadeh’s sense), and of information (including algebra of information), and at least some of mathematical logic.

It is in the (comparatively) concrete sciences, the sciences of motion, matter, life, and people, that abductive inference takes center stage. I’ll add some discussion here later.

Technical discussion: Ampliative inference = non-deductive inference.

The question is: does the phrase ‘ampliative inference’ mean simply inference that is non-deductive (as I’ve taken it to mean), or does it mean inference that is both repletive and non-deductive?

Here are excerpts from the Century Dictionary’s definitions of ‘ampliation’ and ‘ampliative’, of which Charles Sanders Peirce had charge:

ampliation   (am-pli-ā´ sho̤n) […] — 3. In logic, such a modification of the verb of a proposition as makes the subject denote objects which without such modification it would not denote, especially things existing in the past and future. Thus, in the proposition, “Some man may be Antichrist,” the modal auxiliary may enlarges the breadth of man, and makes it apply to future men as well as to those who now exist.

ampliative   (am´ pli-ạ̄-tiv) […]   Enlarging; increasing; synthetic. Applied — (a) In logic, to a modal expression causing an ampliation (see ampliation, 3); thus, the word may in “Some man may be Antichrist” is an ampliative term. (b) In the Kantian philosophy, to a judgment whose predicate is not contained in the definition of the subject: more commonly termed by Kant a synthetic judgment. [“Ampliative judgment” in this sense is Archbishop Thomson’s translation of Kant’s word Erweiterungsurtheil, translated by Prof. Max Müller “expanding judgment.”]

  No subject, perhaps, in modern speculation has excited an intenser interest or more vehement controversy than Kant’s famous distinction of analytic and synthetic judgments, or, as I think they might with far less of ambiguity be denominated, explicative and ampliative judgments.     Sir W. Hamilton.

Century Dictionary, p. 187, in Part 1: A – Appet., 1889, of Volume 1 of 6, and identically in Century Dictionary p. 187 in Volume 1 of 12, 1911 edition. The brackets around the sentence mentioning Archbishop Thomson are in the original.

Peirce for his own part focused on the deductiveness or ampliativeness of inference, not of ready-made judgments (he once said that a Kantian synthetic judgment is a “genuinely dyadic” judgment, see Collected Papers v. 1 ¶ 475). Peirce argued that mathematics aims at theorematic deductions that require experimentation with diagrams, a.k.a. schemata, and that it concerns purely hypothetical objects. (So much for Kant’s synthetic a priori.)

Peirce’s examples of abductive reasoning had premisses that were not only far from entailing their conclusions, but also far (too far for a fair reframing to close the gap) from being entailed by their conclusions; his “ampliative” meant simply the non-deductive, not the both repletive and non-deductive. This was both (A) during the years that he treated abductive inference as based on sampling and as a rearrangement of the Barbara syllogism, and (B) afterwards, in the 1900s. In 1883 Peirce divided “probable inference” into “deductive” and “ampliative”, the latter including hypothetical (i.e., abductive) inference (in “A Theory of Probable Inference”). In 1892, Peirce applied the term “ampliative” to inference as non-deductive as follows in “The Doctrine of Necessity Examined”, § II, 2nd paragraph:

[….] Non-deductive or ampliative inference is of three kinds: induction, hypothesis, and analogy. If there be any other modes, they must be extremely unusual and highly complicated, and may be assumed with little doubt to be of the same nature as those enumerated. For induction, hypothesis, and analogy, as far as their ampliative character goes, that is, so far as they conclude something not implied in the premisses, depend upon one principle and involve the same procedure. All are essentially inferences from sampling. [….]

(Throughout the years, he usually regarded analogy as a combination of induction and hypothetical inference.) During the 1900s, Peirce ceased holding that hypothetical (a.k.a. abductive, a.k.a. retroductive) inference aims at a likely conclusion from parts considered as samples, and argued that abductive inference aims at a plausible, naturally, instinctually simple explanation as (provisional) conclusion and introduces an idea new to the case, while induction merely extends to a larger whole of cases an idea already asserted in the premisses. This does not mean that only abductive inference is ampliative; instead at most it means that only abductive inference is ampliative with regard to ideas, while induction is ampliative of the extension of ideas. (I’m unsure whether Peirce regarded abductive ideas as being definable by comprehension a.k.a. intension (as opposed to extension a.k.a. denotation); in a 1902 draft, regarding his past treatment of abductive inference, Peirce wrote, “I was too much taken up in considering syllogistic forms and the doctrine of logical extension and comprehension, both of which I made more fundamental than they really are.” — Collected Papers v. 2, ¶ 102.)

Plausibility, verisimilitude, novelty, nontriviality, versus optima, probabilities, information, n-ary givens

December 24, 2013.

Latest significant edit: September 30, 2015.

This post needs a lot of work, but I want to get it started.

The merits in inference in Table (B) are one-to-one reminiscent of, but NOT equatable to, the deductive topics in Table (A):

(A) Quasi-modal deductive topics
1. Optima & feasibles. Differences (distances) with directions.
2. Probabilities.
3. Information, 'news'.
4. Givens (data, etc.) as n-ary complexuses. Roots, bases, for arities, allied to other relations, e.g., transitivity.

(B) Merits in inference
1. Cogency, natural simplicity in surmise.
2. Verisimilitude (in C.S. Peirce's sense) in induction.
3. New aspect in 'forward-only' deduction.
4. Nontriviality in equipollential deduction.

Both tables have the following pattern, with notable oppositions along the diagonals:

(C) Pattern in common
1. Simple, doable, compelling.
2. Likely, consistent.
3. New, distinctive.
4. Complex, structured.
(The diagonal oppositions pair 1 with 4 and 2 with 3.)

As to the ordering "optima, probabilities, information, givens [i.e., logic]," I didn't reach it by association with the striking pattern difference, ratio, logarithm, base (and anyway it's not clear to me why, from a purely arithmetical viewpoint, logarithm would precede rather than follow base there). Instead I had already reached the ordering as part of some broad correlations (see Table (D)).
(D) Some correlations
1. Optima & feasibles. — Decision processes. Motion, forces.
2. Probabilities. — Stochastic processes.
3. Information, 'news'. — Communicational & control processes.
4. Givens, data, facts, base̅s (for further conclusions). — Logic, intelligent processes.

All of the topics in Table (A) are the main topics of abstract and significant mathematically deductive areas concerned with structures of alternatives in timelike or (quasi-)modal perspectives. All the inference merits listed in Table (B) have familiar forms and some intellectual history.

1. An explanatory hypothesis's simplicity is a familiar idea. One version of it is parsimony or Ockham's Razor; C.S. Peirce held that logical simplicity is secondary and that at its extreme it would add no explanation to a surprising observation; he explored simplicity of explanation as plausibility, facility, naturalness, instinctual attunement, and Galileo's 'natural light of reason' (see the linked passage from "A Neglected Argument," 1908).

2. An inductive conclusion's verisimilitude, in Peirce's sense, is an idea familiar in the form of that which people mean in speaking of an induction as an inductive generalization, taking a sample (preferably a fair one) for a likeness of the whole.

3. New or novel aspect of conclusion of a 'forward-only' deduction such as a categorical syllogism has been noted by various people including Peirce, and they have typically seen it as pertinent to deduction generally. It is familiar in the sense that it is considered perspectivally redundant, for example, to conclude a deduction merely by restating a premiss in unchanged form.

4. Nontriviality or, in stronger form, 'depth' of a mathematical conclusion is a familiar idea among mathematicians, and has been an element in the formation of ideas of complexity. Conclusions in pure mathematics tend to be equipollent (propositionally equivalent) to their premisses, as Aristotle noted (Posterior Analytics, Bk. 1, Ch. 12, link to pertinent text).

Each fruitful aspect counterbalances, quasi-ironically, the essential structure of the entailment relations between premisses and conclusions that defines the inference mode to which the aspect pertains. So, the collectively systematic character of the definitive entailment relations is reflected in the collectively systematic character of the fruitful aspects.

1. A surmise's conclusion (with 'surmise' as defined below) is complex in the sense of both adding to, and subtracting from, what the premisses claim, and is of interest when it nonetheless brings a simple perspective.

2. An induction's conclusion is novel in the sense of adding to, but not subtracting from, what the premisses claim, and is of interest when it nonetheless brings a 'conservative' or 'frugal' (verisimilitudinous or 'likely') perspective.

3. A 'forward-only' deduction's conclusion retrenches in the sense of subtracting from, but not adding to, what the premisses claim, and is of interest when it nonetheless brings a novel perspective.

4. An equipollential deduction's conclusion is simple in the sense of neither adding to, nor subtracting from, what the premisses claim, and is of interest when it nonetheless brings a complex or nontrivial perspective.

Deduction is so defined that its conclusions are true if its premisses are true, while other modes of inference lack that character; this has been seen as a problem for the other modes. In fact any inference depends on being correctable in its premisses and procedure through larger inquiry and the testability of conclusions. Certainly surmise to an explanation is the least secure mode of inference, but also the most expeditious, as Peirce pointed out. Now a valid deduction secures its conclusion's truth if its premisses are true, while other inference modes only suggest, with more or less strength, their conclusions' truth, and do so by the perspective they bring and other considerations (methods of sampling, weakness of alternate explanations, etc.) rather than only by general definitive entailment structure. But a deduction's conditional assurance of a true conclusion is counterbalanced by the perspective of novelty or nontriviality that a worthwhile deduction brings; dubitability is natural, and right in a way, in deduction. The deduction's conclusional perspectives (novelty, nontriviality) that, with varying strength, incline one to check one's reasonings and claims for falsity, inconsistency, etc., are the same ones that make deduction worthwhile.

1. A surmise's natural plausibility, and
2. an induction's Peircean verisimilitude,
suggest that a conclusion might be true although non-deductive, and offer the hope of helping inquiry toward eventually finding (respectively) the real natural simplicities and the real reliabilities in things.

3. A 'forward-only' deduction's novelty of aspect, and
4. an equipollential deduction's nontriviality,
suggest that a conclusion might be false despite being or seeming deductive (involving an error perhaps in the reasoning or perhaps in the premisses — indeed, the main purpose of some deductions is to expose the premisses to testing), and offer the hope of helping inquiry toward eventually finding (respectively) the real surprising ramifications and the real deep structures in things.

Correlation between quasi-modal deductive topics and inference modes' valuable aspects (plausibility, verisimilitude, novelty, nontriviality)

Quasi-modal topics of so-called 'applied' but abstract and significant mathematically deductive areas (below). Aspects with merit in inference (below).
  Valuable, fruitful aspects or perspectives into which conclusions put premisses are sorted below by inference mode.
1. Optima & feasibles.
  • Differences (distances) with directions.
  • Mathematics of optimization (longer known as linear & nonlinear programming) applies extremization, calculus of variations, Morse theory.
Naturalness, simplicity, plausibility, viability, cogency.
Suggested technical name: viatility.
  • Pertains to surmise, explanatory hypothesis, that which Peirce called "abductive inference." I propose a (non-Peircean) definition of surmise as inference that automatically preserves neither truth nor falsity (the premisses don't deductively imply the conclusions, and the conclusions don't deductively imply the premisses). Even if one excludes cases where the premiss set is inconsistent with the conclusion set (e.g., "there is a horse, so there isn't a horse"), natural simplicity, cogency, etc., remain an important consideration in the surmises that remain. The inference "There is a horse here, so there is a squid here" is a surmise without natural simplicity or cogency. Now, one could restate a cogent surmise to make it seem, technically, an induction, its premisses implied by but not implying its full conclusions: "Normally after rain the lawn is wet and this morning the lawn is wet. So, normally after rain the lawn is wet, this morning the lawn is wet, and last night it rained." But this is like turning a traditional syllogism into an equipollential deduction by restating all the premisses in the conclusion. Such representation of the inference does not faithfully enough reflect the guiding inquiry interest of the inference and instead counterproductively obscures it with redundancies.
  • Studies of concrete phenomena (that is, sciences and studies physical, material, biological, and human / social) are for drawing conclusions that typically are (cogent) surmises (but they do apply the other modes of inference along the way). As Peirce points out in discussing verisimilitude in induction (Collected Papers Vol. 2, in Paragraph 633): "Strictly speaking, matters of fact never can be demonstrably proved, since it will always remain conceivable that there should be some mistake about it. [....] Indeed, I cannot specify any date on which any certain person informed me I had been born there; and it certainly would have been easy to deceive me in the matter had there been any serious reason for doing so; and how can I be so sure as I surely am that no such reason did exist? It would be a theory without plausibility; that is all." Hence, even a verisimilar and high-confidence induction, if it is from concrete individual cases, rests in the end on premisses which are affirmed because their falsity is implausible, or so taken, and which amount in the end to cogent surmises. Hence the ultimate standing of conclusions sought by studies of concrete phenomena is that of cogent surmise. Now, I've cited Peirce saying something that I believe helps imply my conclusion; but in or around the same year (1910), he held that all determination of probabilities of actual objects (such as actual dice) rests on verisimilitudes (Collected Papers Vol. 8, in Paragraph 224) and did not mention a dependence of those verisimilitudes on plausibilities (or cogencies).
2. Probabilities.
  • Ratios (such that 0 ≤ the ratio ≤ 1).
  • Mathematics of probability applies general measure theory, enumerative combinatorics.
Verisimilitude, likelihood, in C.S. Peirce's sense.
Suggested technical name: veteratility.
  • Pertains to induction, inference that automatically preserves falsity but not truth (the premisses don't deductively imply the conclusions, but the conclusions deductively imply the premisses).
      Verisimilitude in Peirce's sense consists in that, if pertinent further data were to continue, until complete, to have the same character as the data supporting the conclusion, the conclusion would be proven true. The phrase "inductive generalization" is an approximate way of saying "induction with some verisimilitude." The inference "there is a horse here, so there are a horse and a squid here" is an induction without verisimilitude. I would add that the idea of verisimilitude should also involve the idea of not being too influenced by a premiss that seems to express an outlier or random fluctuation.
  • Studies of positive phenomena in general (studies such as inverse optimization, statistics, information theory's induction-oriented areas, and maybe philosophy) are for drawing conclusions typically inductive.
3. Information, 'news'.
  • Logarithms.
  • Mathematics (in particular, algebra) of information applies abstract algebra (i.e., abstract theory of calculation). (Such algebra reveals non-Shannon-type laws of information, see tutorial by Yeung.)
New aspect, elucidativeness, significance in a sense.
Suggested technical name: novatility.
  • Pertains to 'forward-only' deduction, inference that automatically preserves truth but not falsity (the premisses deductively imply the conclusions, but the conclusions do not deductively imply the premisses). The inference "There are a horse and a squid here, so there is a horse here" is a 'forward-only' deduction without novelty of aspect. The inference "All A is B, and all B is C, so all A is C" is a 'forward-only' deduction with some novelty of aspect.
      Novelty of aspect of the kind sought in 'forward-only' deduction involves some tightening of focus, which is lacking in such a 'forward-only' deduction as "Socrates is yonder, so Socrates is here or yonder." (Still, it is important sometimes to remember how "surprisingly" broad alternatives are deductively implied — especially on those occasions when one is surprised by a subtle role of such implications in one's reasoning.)
  • 'Applied' but abstract and significant mathematically deductive areas — mathematics of optimization, of probability, of information, and of logic — are for drawing conclusions that typically are 'forward-only' deductions. I don't say that equivalences, e.g., that between p and T→p, are unimportant in those fields. But their general aim at 'forward-only' deduction is why they are correlated with areas of inductive research as their "inverses" or "reverses." Reverse pure mathematics, on the other hand, is, if not pure mathematics per se, still mathematical logic about pure mathematics; it would not be the first research program to originate in mathematical logic and then grow into a part of pure mathematics, if some old accounts that I recall about non-standard analysis are correct. My understanding is that probability theory is also often applied in pure mathematics, and not always trivially. These so-called 'applied' areas need another generic name.
4. Givens (data, etc.) as nullary, unary, binary, n-ary complexuses. (See Arity.)
  • Roots, bases, as arities (a.k.a. adicities, valences) of relations, allied with other relational properties, e.g., transitivity.
  • Mathematical logic applies order theory.
Nontriviality, depth, complexity, 'lessonfulness'.
Suggested technical name: basatility.
  • Pertains to equipollential deduction, inference that automatically preserves truth and falsity alike (the premisses deductively imply the conclusions and vice versa). The inference "There are a horse and squid here, so there are a horse and a squid here" is an equipollential deduction without nontriviality. The inference "3 × 5 = 15, so 15 ∕ 3 = 5" is an equipollential deduction with at least a jot of nontriviality. When the well-orderedness of the pertinent set is a standing given, then the mathematical-induction step, from the ancestral case conjoined with the hereditary case, to the conclusion, is an equipollential deduction.
  • 'Pure' mathematics is for drawing conclusions typically deductive through equivalences and equipollencies. 'Forward-only' deduction sometimes occurs, especially when greater-than or less-than statements are involved. I've read that, because of this, in a mathematical induction a premiss (such as the ancestral case) is not always to be proven by simply reversing the order of one's scratch work; sometimes instead one must find some other way back.

The complexity theorist Cosma Shalizi (in his notes "Complexity" and "Complexity Measures") has said that complexity is ill-defined and its proposed general measures not actually useful. Now, the idea of complexity seems based on the idea of nontriviality in mathematics ("deep" in mathematics means "very nontrivial," I've been told), and the effort at quantifying complexity seems aimed at finding an information-like quantity. Yet, the idea of information as a quantity is rather simple. Now, one might argue that the analogous idea for complexity will be complex because it's about COMPLEXITY, of course. But that is to say that it is not a quantity on a mathematical par with information in the sense that probability is. Complexity, at least in that which seems the idea's core sense of nontriviality and depth as in mathematics, seems akin in various ways, including its importance and tantalizing character, to aspectual novelty, verisimilitude (in C. S. Peirce's sense), and plausibility (natural simplicity). So classing it, one ends up with four kindred aspects in inference, aspects that resist computation-friendly formulations but are vital as forms of value or merit in inference. Data or, more generally, givens, in their character of adicity or arity, with its definabilities and its own share of importance, seem a better "complexity" counterpart to information. For logic, there are, first of all, the structures of alternatives (such as that represented by the variable x) and of compounds conjunctive, conditional, etc., and such is the turn of interest that groups mathematics of logic with those of information, probability, and optimization. Now, when each element of such a compound or relation is itself a relation with the same arity, the arity is like a root or base raised to successive powers, though its sheer size is not the only point. Anyway, logic gains stature as a serious subject, as Quine pointed out, when it comes to the study of relative terms (in polyadic quantification), which have arities of their own; relative terms are a complicating factor in logic. Now, such a relation as the dyadic '__discussing__' is itself not a compound conjunctive, disjunctive, or otherwise; nor are relations such as pure mathematical operations, functions, etc.; yet the forms of such relations are what is applied to represent the forms of alternatives and other such compounds which, for their part, are in a sense (be it literal or figurative) relations among worlds.

Logical quantity & research scopes - universal, general, special, particular, individual, singular

November 18, 2013.

Latest significant edit: November 21, 2015.

Some logical quantities, such as the general and the singular, pertain mainly to terms or their objects, rather than mainly to propositions, and are the occasion for the perennial argument between nominalists and realists over the problem of universals. My treatment of such logical quantities differs from tradition. I've discussed these matters in older posts. Here I will adopt some different terminology (and I've changed it again since writing this post). I will discuss: uniformity for logical quantities; definitions; the conjunctive compounds and correlated areas of research; philosophical tradition; the arts and still other areas of knowledge; and C. S. Peirce.

Uniformity for logical quantities

Now, a singular is usually taken as monadic, that is, Socrates is singular, but the dyadic Socrates, Aristotle are taken as singular separately, not also polyadically in the sense of singular, singular, polyadically such that one might want to call them "multi-singular" or some such (even if the polyad is just Socrates, Socrates). So, with some etymological defensibility, I coin the word "ingular" to refer alike to the monadic singular and to singulars taken polyadically. Unlike the idea of the singular, the idea of the ingular is on the same footing as the idea of the general in having and evoking polyadic versions as well as monadic versions. (An example of a polyadic general is the two-place (i.e., dyadic) general "__discussing__.") Such sameness of footing is desirable when one seeks to be systematic, and this pays off for example in the case of the consideration of a universe or total population and its more-or-less collective description (its parameter set) as ingulars that are also fully universal (in that universe of discourse). Traditional singularity conjoined with such full universality evokes merely a one-object universe, but that conjoint vista seems mostly barren only because the window is narrowed to the monadicity of the traditional singular. Broaden the singular into the less-restrictive ingular, and said conjoint vista broadens itself into at least a rudimentary version of the populous subject matter proper to deductive mathematics of optimization, of probability, of information, and of logic.


It seems best for the elementary definitions to be in a monadic and de facto perspective where formal considerations to accommodate polyads don't come into play (although I'll still use the term "ingular" which applies to monads as well as polyads). Additionally, I coin the word "omnial" to take the place of the informal and potentially misleading phrase "fully universal."

Suppose that there is something glad.
Question A: Is there something else glad? If yes, then I call 'glad' general. If no, then I call 'glad' ingular.
Question B: Is there also something non-glad? If yes, then I call 'glad' special. If no, then I call 'glad' omnial.

The above supposition that there is something glad (or whatever) is not axiomatic but merely a hypothetical condition, so such a thing as is glad (or whatever) may completely lack instances and thus be both a de facto non-general and a de facto non-ingular, even though the general and the ingular seem each the other's negative. The same goes for the special and the omnial.

Questions A and B may seem excessively simple. For example, one thinks of the general term not just as actually true of something else, but as potentially or purportively true of something else, indeed of various other things, perhaps indefinitely many. Still, it seems best to keep the elementary definitions crude but refinable to suit the occasion, and to remember them in that light.

Now, Question A ("Is there something else glad?") and Question B ("Is there also something non-glad?") do not depend on each other at all. The four answers can be conjoined without contradiction in four ways.

Simple & conjoined positive logical quantities for terms or objects
General:
  1. General-cum-omnial. Simple example: two (things among many).
  3. General-cum-special. Examples in practical contexts: blue, resilient, melodious, etc.
Ingular (monadic singular, polyadized singulars, etc.):
  2. Ingular-cum-omnial. Gamut, universe of discourse, total population, its parameters.
  4. Ingular-cum-special. Monadic, polyadized, etc., singular(s) in a larger world.

The ingular-cum-omnial, when monadic, is the object in a single-object universe, but is polyadic for a larger universe and can be much less boring then.

The conjunctive compounds and correlated areas of research

Now, I have coined some further terms for brevity. I hope that I haven't erred in making trade-offs between conventionality and evocativeness of the coinages.

1. The general-cum-omnial, or etceteral.

The general-cum-omnial is that omnial which, given a monadic or polyadic instance, is also instantiated by further monads or polyads that don't share all the same members. Roughly speaking, it's the fully-universal that is not the whole universe at once. Consider in a first-order logical sense the idea of two such that "two" is true collectively of any x⁠y such that x is not y, and consider the case where, besides x⁠y, there are also z, w,..., etc., that are not x or y and are distinct from one another. Now, this predicate version of number won't get us to Peano arithmetic but, analogously, first-order logic's singular predicate or subject doesn't get us to empirical science; I'm just trying to treat the various logical quantities on the same common elementary level. The term "two" will be true of everything in that universe, each thing not monadically but instead in some combination or other, and indeed in every dyad of distinct things and in every polyad of just a one and an other, be they mentioned soever many times under soever many designations (unless it is mentions or designations that are being counted as objects themselves). It does not depend on particular positive qualities or characters of things, or on distributions of such qualities, and it does not depend on the positive thisness or haecceities of things (e.g., it doesn't matter if one is talking about Socrates and Aristotle, only that one is talking about two distinct objects). For any given non-zero whole number, this works in any sufficiently large universe. The perspective is that of two things (or three things, etc.) such that there is still another two of things (or three of things, etc.), or indeed indefinitely many twos, threes, etc. It's an idea of universality combined with an idea of further instances, even unto "infinity, or the miraculous jar of mathematics." Hence my coinage "etceteral." Particularly natural expressions of the general-cum-omnial, a.k.a. the etceteral, are mathematical operations and (lambda) functions, as well as mathematical one-to-many relations and many-to-many relations. Now, one often thinks of The Number Two, etc., as abstract singulars rather than as etceterals. One may think of it as a collection of two units or as the class of all sets of two elements, or in some other way (not my expertise). By abstraction and imaginational machineries such as set theory one revives the logical-quantitative variegation of overall experience, and gets numbers like Zero besides.

2. The ingular-cum-omnial, or solipsular.

The ingular-cum-omnial is the logical quantity of a single object in, and only in, a one-object universe. A more populous case would be that of the universe of a plinker's notes c⁠d⁠e⁠f⁠g⁠a⁠b, with an idea of ceteris paribus, "the rest staying the same," the rest coarse-grained out, summed over, to the extreme of ceteris non existentibus, i.e., the rest (of one's world) not existing. (I got this idea of total populations and universes of discourse as just various ways of coarse-graining the same grand world from somewhere in The Quark and the Jaguar by Murray Gell-Mann. The ingular-cum-omnial generally seems to me the logical quantity with a tinge of The Twilight Zone — which in turn is a reminder that a universe of discourse can't be a mere coarse-graining of the real if that universe harbors fictional elements.) So this is the logical quantity for a total population's or universe-of-discourse's members taken polyadically and more-or-less collectively, and quite collectively when one considers probabilities. Hence my coinage "solipsular." One could consider the ingular-cum-omnial, a.k.a. the solipsular, that specifies sequence, the kind that does not (e.g., "The Three" in a three-object universe), and mixed cases. A frequential distribution (such as '30% of the total population') of a characteristic across a total population is a solipsular. Natural expressions of the solipsular include information as a quantity, probabilities, and feasibility and optimality as mathematically studied. An abstract total population's members will not usually be spelt out with singular constant designations, but, if spelt out at all, then with singular dummy letters perhaps regarded as singular veiled constants, or with variables; one could have an ingular term that looks monadic but is to be construed as polyadic, but probably people prefer to express these things with sets. The solipsular is a total population and also parameters of attribution or distribution of characters, said parameters as belonging to the total population. It lends itself to abstraction and formalization such that the particular qualities distributed do not matter, but only their samenesses and differences, as well as the samenesses and distinctnesses among individuals, and not their 'thisness' or haecceities, their individual "identities" in the sense of everyday English.

3. The general-cum-special, or transcernal.

This is the logical quantity most natural for monadic & polyadic positive qualities and characters of things, which are such that we expect further and even indefinitely many instances and also some and even indefinitely many counter-instances. Hence my coinage "transcernal," evoking an idea of sifting through. This is the perspective of inductive fields like inverse optimization, statistics, information theory's inductive areas, and (I think) philosophy, concerned with positive phenomena in general but not, except in applications, with individual positive phenomena in their thisness or haecceities.

4. The ingular-cum-special, or obstular.

This is the logical quantity most natural for concrete individuals taken monadically and polyadically, but not as a total population in the abstract with ceteris non existentibus. This is the usual sense about individuals and singulars — that they're not only individual or singular, but also not absolutely alone, as if each one or each handful were a universe unto itself. Instead they're individuals in a larger world. Hence my coinage "obstular." It's the perspective of sciences and studies of concrete phenomena, what C. S. Peirce called "idioscopy, or the special sciences." Aristotle said that there is no epistêmê (often translated as "science") of the individual, but by epistêmê he meant something deductive, or nearly so, and not including concrete experimentation. The subject matter of idioscopy is concrete individuals in the larger concrete world, but the objective is to learn about their individual connections and tapestries, their positive qualities and characters, their parameters and laws, and their applicable mathematics.

Simple & conjoined positive logical quantities for terms or objects, and researches correlated thereto by scope:

General:
  1. General-cum-omnial, i.e.: Subject matter of pure mathematics — fields for deductive conclusions typically through equivalences & equipollencies.
  3. General-cum-special, i.e.: Subject matter of studies of positive phenomena in general: inverse optimization, statistics, information theory's inductive areas, and (I think) philosophy — fields for inductive conclusions from parts or samples to larger wholes.

Ingular (monadic singular, polyadized singulars, etc.):
  2. Ingular-cum-omnial, i.e.: Subject matter of so-called "applied" yet abstract and significant maths drawing from ideas of total populations, universes of discourse, etc.: deductive mathematics of optimization, of probability, of information, and of logic — fields for deductive conclusions typically "forward-only," from wholes to parts or particular cases.
  4. Ingular-cum-special, i.e.: Subject matter of sciences & studies of concrete phenomena: sciences & studies of forces & motion, matter, life, and mind — fields for (soever cogent) hypothetical conclusions.

In discussing pure mathematics, I said, "By abstraction and imaginational machineries such as set theory one revives the logical-quantitative variegation of overall experience...." I'll go out on a limb here (I'm no mathematician) to give examples, not of how, for example, some mathematical ideas are, in their way, obviously more general than others, but of how logical-quantitative properties in mathematics are associable with systematic purposes analogous to those outside mathematics. Ordinals are like singular obstulars for systematically capturing, if not concrete thisness, still a kind of "whichness." Think of series and summability, theory of limits, structures of order, conditions for mathematical induction. A function's derivative, and an arithmetical calculation's result, are like transcernals for systematically classifying together the various functions or various sets of numbers or letters that result in them. Combinatorial enumeration and mathematical integration are like solipsulars for systematically analyzing a number or an area into its constituents. Topological forms and graph-theoretical graphs are like etceterals for systematically determining traversals and transformations. Well, that last assertion is a bit too vague, but I hope to improve it in time.

Philosophical tradition

(The sections on tradition were originally near this post's start but I reorganized the post so as better to 'cut to the chase'.)

Of the logical quantities exhibited by things as represented by terms, only two sorts have been regarded as noteworthy by most philosophers, and the two have gone under at least two pairs of labels: universals & particulars; and generals & singulars. (C. S. Peirce emphasized three such logical quantities.) By "a universal" or "a general" is usually meant, by philosophers, a thing of which there are AT LEAST TWO INSTANCES (actually or, for some philosophers, at least potentially), and often indefinitely many instances. By "a singular" is meant a term that has or is defined to have just one object, or a thing that has accidentally or oftener intrinsically JUST ONE INSTANCE (if any at all). Sometimes such a thing is called "an individual" or "a particular" (for example by E. J. Lowe, who classes as particulars not only individual substantial objects but also individual monadic and relational property-instances, a.k.a. tropes).

Despite perennial philosophical attention to the problem of universals — the question, disputed between realists and nominalists, of whether universals (a.k.a. generals) are real or merely verbal ("nominal") — philosophers have mostly ignored the structure of such logical quantities. The terminology is threadbare.

The tradition's edge and beyond

The idea of something universal to simply everything is involved in the idea of being itself, also in such tautologous ideas as known-or-unknown, and in the Scholastic idea of the transcendentals of being (unity, truth, goodness). The Aristotelian categories are sometimes regarded as summa genera, highest genera.

Now, a quality such as blue is typically regarded as a universal but not as being fully universal, universal to everything; one wouldn't expect any positive quality to belong to everything. Rudimentary ideas of one and two do seem fully universal to everything monadically or polyadically, in the sense that anything x is one, and any x⁠y such that x is not y are two, and so on. Of course "three" is not true of two things per se but it is true of them in combination with any still other thing. I think that that is a viable idea of reasonably full universality, even if it is not the fullest imaginable universality, and that it is more fruitful in that, unlike the fullest imaginable, it is populous, indeed infinitely so, with non-equivalent examples. Such ideas as two and three do not depend on things' positive qualities, much less on things' being any individual this such as Socrates or Bucephalus, but only on their selfsamenesses and distinctnesses, which are abstractibles that pour themselves into formalization in ways that individual and qualitative positive phenomena do not. So one has notions of the (reasonably) fully universal and of the not-fully universal but special like blue and Socrates.
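Putting that claim into rough formal shape (with merely illustrative predicate names, as in the earlier sketch), the reasonably full universality of one, two, three, etc., amounts to their holding of everything taken monadically or polyadically:

\[
\forall x\;\mathrm{One}(x),
\qquad
\forall x\,\forall y\,\bigl(x \neq y \rightarrow \mathrm{Two}(x,y)\bigr),
\qquad
\forall x\,\forall y\,\forall z\,\bigl(x \neq y \wedge x \neq z \wedge y \neq z \rightarrow \mathrm{Three}(x,y,z)\bigr),
\]

and so on in any sufficiently large universe, with no appeal to the things' positive qualities or haecceities, only to their selfsamenesses and distinctnesses.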

What about the affective arts (those of music, dance, sculpture, drawing, painting, language, story, theater, cinema, etc.), and still other kinds of knowledge?

Here I seem to have used up the logical quantities as perspectives, scopes of subject matter, just to map, so to speak, the main classes of more-or-less theoretical research. So maybe one should do likewise for each of the other knowledge disciplines (such as the affective arts), but that seems a daunting task. Still, I'd say that there seems:

A. across theoretically-oriented research, an overall tendency, a kind of overall urge, toward etceteralities where individual cases or facts harbor enhanced lessons.

Maybe the other disciplines of (fallibilistic) knowledge share the overall tendency toward the etceteral, yet vary insofar as they are cognitive disciplines not of cognitive bases but of decisional impetuses, competential means, and affective effects. I am unsure about this. But at any rate, there seem:

B. in the affective arts, some overall tendency toward the solipsular, the selective composing, the selective combining, of things, modes, etc., into worlds in which qualities harbor enhanced values;
C. in the productive arts/sciences (know-how — engineering, etc.), some overall tendency toward transcernalities where domains harbor enhanced reliabilities, norms; and
D. in the so-called ruling arts (design, architecture, community planning, education of personal character, etc.), some overall tendency toward obstularities where regimes (rules and constraints, in some sense) harbor enhanced optima, with the obstulars concretely (multi-)individualized and customized into the larger world or even, in some sense, imposed upon or against the larger world.

But I probably should save such talk for my Speculation Lounge blog.

Let me note a still bigger picture, where questions of correlations to logical quantities may arise. The above are areas with a kind of upper or second-order level on which the prevailing element — not the only element, but the prevailing one — is that of

A. cognitively assessing (i.e., as to lessons) — and (fallibilistic) knowledge (to wit: ruling art is knowing from what impetuses one decides things; know-how is knowing by what means one achieves things; affective art is knowing in what effects one affectively feels things; math, science, etc., is knowing on what bases one knows things; I use "knowing" vaguely here; often it's better to say and think "(cognitively) learning" instead, and "coming to feel" instead of "feeling," etc.);

but the aforementioned bigger picture also includes areas with upper prevailing (though not exclusive) elements of:

B. valuing, appraising — and sensibility, appreciation, devotion (including but not limited to religion);
C. practice — and skill; and
D. struggle / striving — and (volitional / conational) strength, dedication (in conflict, competition, rivalry, dispute).

For a big table, see "A periodic table of aspects of humanity." For associated methods of learning (cognitively and otherwise), see "Methods of active learning by basic faculties" (at The Tetrast2: Speculation Lounge).

Charles Sanders Peirce

The collective STRUCTURE of such logical quantities as the singular and the general has been barely studied in philosophy except, as far as I know, by C. S. Peirce. Yet philosophy perennially pursues the problem of universals, the question of what sort of being or reality belongs or can belong to that which is not a concrete individual object (where, again, 'universal' refers to that which characterizes more than one thing, at least two things, and, in some contexts, possibly indefinitely many things). The terminology has varied: "universals and particulars," "generals and singulars," and so on. Among major philosophers as far as I know, only Peirce has introduced a more-than-two-way distinction, for which I don't know where to send the reader for a brief sketch, so I will supply one here. He made a three-way distinction, a trichotomy, of:
  (1) the vague, the indefinite, such as a quality as contemplated without reaction or reflection,
  (2) the individual, determinate, and
  (3) the general.
Said trichotomy
(A) is based by him in his three respective phenomenological categories:
  (1) Firstness, quality of feeling (more as quality of a sensation than of an affect such as pleasure or pain), essentially monadic,
  (2) Secondness, reaction/resistance, essentially dyadic (individuals, brute facts, etc.), and
  (3) Thirdness, representation/mediation, essentially triadic (rules, habits, norms, dispositions, etc.); 
(B) reflects three traditional affirmative logical quantities for propositions, respectively:
  (1) the existential particular (Some G is H),
  (2) the singular (This G is H), and
  (3) the hypothetical universal (All G is H). This hypotheticality (as in "each thing is, IF glad, THEN hearty") is important in Peirce, since he usually treated Thirdness as involving conditional necessities, conditional rules, etc.
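In modern notation (one common rendering, not Peirce's own symbolism), the three propositional forms come out as:

\[
\text{(1)}\ \exists x\,(Gx \wedge Hx),
\qquad
\text{(2)}\ Gs \wedge Hs \ \ (\text{with } s \text{ naming the indicated individual}),
\qquad
\text{(3)}\ \forall x\,(Gx \rightarrow Hx),
\]

where the conditional in (3) is precisely the hypotheticality at issue: each thing is such that, IF it is G, THEN it is H.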

Peirce made a distinction (to which he did not always adhere terminologically):
Singular individuals, or singulars for short, "occupy neither time nor space, but can only be at one point and can only be at one date" (i.e., point-instants).
General individuals, or individuals for short, do occupy time and space and "can only be in one place at one time."
(See "Questions on Reality," 1868.)

Now, Peirce defined the real as the object (the topic or subject matter, not necessarily a concrete thing) of a true proposition (whether actually expressed or not), such that anything real and any truth are what they are irrespective of the opinions of particular minds and particular communities (fallibilism) and would be discovered by investigation if such investigation were to be pursued sufficiently (cognizabilism, opposition to radical skepticism). Thus he held that there are real generals, the objects of true general propositions. He held that this is a logical presupposition which metaphysics (based, in his view, on logic and not vice versa) in turn fleshes out as robust and nontrivial. Thus Peirce was what is called a realist in metaphysics. He was quite anti-nominalist. By "actual," on the other hand, Peirce meant the individual, the this, i.e., the concrete individual objects that nominalists take to be the only real things. Thus Peirce held that rules, qualities, and individuals can all be real, but that rules and qualities can't be actual (strictly speaking), since they are not individual things. In particular, Peirce held that indeterminacy is real and that there is spontaneity, absolute chance.

I'll probably add to this post later.
