**Sketcher of various interrelated fourfolds.**

# Deductive vs. ampliative; also, repletive vs. attenuative

**Latest significant edit: October 12, 2016. More revisions to come.**

Charles Sanders Peirce used the term ‘ampliative’ as equivalent to ‘non-deductive’ (as discussed at this post’s end). Deductive inference has sometimes been called *necessary* inference, because its conclusion (be it an obvious or non-obvious one) is a necessary consequence of its premisses. All other inference is ampliative, concluding in something extra in the sense of its not following necessarily from the premisses. The conclusions of deductive and ampliative inferences *don’t* or *do*, respectively, *add* something beyond that which the premisses give. But I couldn’t find generic terms for inferences wherein the conclusions *don’t* or *do*, respectively, *omit* something given by the premisses.

So, I picked out a couple of words — *repletive* and *attenuative* — that people may find handy. ‘Repletive’ ought to be pronounced re-PLEE-tiv, to rhyme with ‘depletive’ and ‘completive’ (as they ought to be pronounced). I first discussed the words in a post “Inference terminology” to peirce-l 2015-04-07. (Under “Word choices” below, I discuss why the words seem better choices than others.) The repletive-attenuative distinction mirrors the deductive-ampliative distinction and adds its own share of systematic light; it provides, I think, a single, simple way both (A) to distinguish between induction and abductive inference and (B) obviously to distinguish between reversible deduction (typical in pure mathematics) and ‘forward-only’ deduction (typical in deducing optimal and feasible solutions, probabilities, information as a quantity (newsiness, so to speak), categorical syllogistic conclusions, etc.).

Every inference is deductive or ampliative (but not both) and is repletive or attenuative (but not both). Those two alternatives (deductive versus ampliative, and repletive versus attenuative) do not depend on each other at all.

- In *deductive* inference, the conclusion does not go beyond the premisses. Toy examples: *p* ∴ *p*. *pq* ∴ *p*.
- In *ampliative* inference, the conclusion goes beyond the premisses. Toy examples: *p* ∴ *q*. *p* ∴ *pq*.
- In *repletive* inference, the premisses do not go beyond the conclusion. Toy examples: *p* ∴ *p*. *p* ∴ *pq*.
- In *attenuative* inference, the premisses go beyond the conclusion. Toy examples: *p* ∴ *q*. *pq* ∴ *p*.

INFERENCES ↓ | PROOF-THEORETICALLY: | MODEL-THEORETICALLY:
---|---|---
Deductive: | The premisses entail the conclusion. | Automatically preserves truth.
Ampliative (i.e., non-deductive): | The premisses do not entail the conclusion. | Does not automatically preserve truth.
Repletive: | The premisses are entailed by the conclusion. | Automatically preserves falsity.
Attenuative (i.e., non-repletive): | The premisses are not entailed by the conclusion. | Does not automatically preserve falsity.
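
For propositional toy examples like those above, both distinctions are mechanically checkable. The following is a minimal Python sketch (my own illustration, not part of the original post): entailment is tested by brute force over truth assignments, so 'deductive' comes out as automatically truth-preserving and 'repletive' as automatically falsity-preserving.

```python
from itertools import product

def entails(premiss, conclusion, atoms=("p", "q", "r")):
    """True iff every truth assignment making `premiss` true makes `conclusion` true."""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if eval(premiss, {}, env) and not eval(conclusion, {}, env):
            return False
    return True

def classify(premiss, conclusion):
    deductive = entails(premiss, conclusion)   # automatically truth-preserving
    repletive = entails(conclusion, premiss)   # automatically falsity-preserving
    return (("deductive" if deductive else "ampliative")
            + " and "
            + ("repletive" if repletive else "attenuative"))

# The toy examples from the text ('pq' read as the conjunction p-and-q):
print(classify("p", "p"))        # → deductive and repletive
print(classify("p and q", "p"))  # → deductive and attenuative
print(classify("p", "p and q"))  # → ampliative and repletive
print(classify("p", "q"))        # → ampliative and attenuative
```

Note that, on these definitions, an inference is repletive exactly when its reverse (conclusion as premiss, premiss as conclusion) is deductive.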

The entailment-related properties provide general rationales for kinds of reasoning only in conjunction with certain heuristic properties discussed further below. This is as true of deductive reasoning as of any other kind. (By ‘reasoning’ I mean more-or-less conscious, deliberately weighed inference.)

Each of the entailment-related properties has its merits or virtues, as well as drawbacks, in regard to the prospect of concluding in a truth or a falsehood:

- **Deductive** inference **does not decrease** security / futility — i.e., **does not increase** opportunity / risk.
- **Ampliative** inference **decreases** security / futility — i.e., **increases** opportunity / risk — in some way.
- **Repletive** inference **does not increase** security / futility — i.e., **does not decrease** opportunity / risk.
- **Attenuative** inference **increases** security / futility — i.e., **decreases** opportunity / risk — in some way.

I use the phrase “in some way” above in order to allude to the fact that an inference can be both ampliative and attenuative, i.e., it can both increase risk (or whatever) in one way and decrease it in another way. (Of course elementary inference, as a topic, does not exhaust the topic of security, opportunity, etc., and their increase and interplay in inquiry generally.)

Each virtue comes with a diametrically opposed drawback. Risk managers sometimes say, “opportunity equals risk.” In that sense security, safeness, equals futility — “nothing ventured, nothing gained.” Freud made much from the fact that one tends to have less choice between pleasure and pain than between both and neither. Still, four conjunctive combinations of the above properties are possible in inference:

Inferences | Deductive: | Ampliative (i.e., non-deductive):
---|---|---
Repletive: | Reversible (i.e., equipollential or, if you like, equivalential) deduction. | Induction, as one often thinks of it (but often not as it is actually framed*).
Attenuative (i.e., non-repletive): | ‘Forward-only’ deduction. | Surmise, conjecture, abductive inference (and often induction as actually framed*).

***** Note on how induction is framed or expressed: For example, ‘⅗ of this actual sample is blue, so ⅗ of the total is blue’ would usually be considered inductive. Still, it’s not only ampliative, it’s also attenuative. The conclusion that ⅗ of the total population is blue does not entail the premiss that ⅗ of this actual sample is blue, even though one usually thinks of induction as inferring from a part to a whole including the part. See below, under “Fairly framing the inference”.

### Building a systematic view: entailment-related properties and heuristic properties.

Inferences may be worth classifying in the above four-fold manner because, if the classification works (in particular, if all induction ‘rightly framed’ is repletive as well as ampliative), then four major inference modes can be defined in a uniform ‘hard-core’ formal manner that exhausts the possibilities, by their basic internal entailment relations (or preservativeness or otherwise of truth and of falsity); meanwhile their various attempted heuristic merits — abductive plausibility (natural simplicity), inductive verisimilitude / likelihood (in C. S. Peirce’s sense: resemblance of conclusion to premisses), ‘forward-only’-deductive novelty, and equivalential-deductive nontriviality / depth — can be treated as forming a systematic class of aspects of fruitfulness or promisingness of inference, with each of them related (as the *compensatory opposite*, in a sense) to its respective inference mode’s definitive internal entailment relations. Those heuristic merits are difficult to quantify usefully or even to define exactly; yet, together with the entailment relations, they illuminatingly form a regular system in which each heuristic merit helps to overcome, so to speak, the limitations of its inference mode’s definitive entailment relations. At any rate there is a fruitful tension between the heuristic merit and the entailment relations in each inference mode.

Inferences | Deductive: | Ampliative:
---|---|---
Repletive: | ‘Reversible’ deduction, e.g.: pq ∴ pq. Logically simple. Compensate with the nontrivial, complex, deep. | Induction, e.g.: pq ∴ pqr. Newly adds claim(s). Compensate with verisimilitude (conclusion’s likeness to the old claims).
Attenuative: | ‘Forward-only’ deduction, e.g.: pqr ∴ pq. Claims less, vaguer. Compensate with novelty, by concision, of aspect. | Abductive inference, e.g.: pq ∴ qr. Logically complicated. Compensate with natural simplicity (abductive plausibility).

Notes about the above table:

- Notice the systematic oppositions along the diagonals.
- All the heuristic merits considered here are those of aspects that conclusions give to premisses, not those of the inferring or reasoning itself. The Pythagorean theorem is considered quite deep but its proof is not considered particularly deep or nontrivial, especially in the sense of ‘difficult’ that is often enough (and understandably) allied to the idea of the nontrivial.
- Natural simplicity and verisimilitude contribute, in variable degree, to inclining the reasoner to believe or suspect that a conclusion is true, at least until it is well disconfirmed, while, in systematic contrast to them, novelty and nontriviality contribute, in variable degree, to inclining the reasoner to disbelieve or doubt that a conclusion is true, at least until it becomes well established. See the post “Plausibility, verisimilitude, novelty, nontriviality versus optima, probabilities, information, n-ary givens”.

An inference actually arising in the course of thought does not always present its premisses or form clearly. Its seeming heuristic merit (such as plausibility), its seeming mode of promise or fruitfulness, may help one decide what mode of inference it ought to be framed as instancing. It may even seem that one can have the definitions in terms of formal implication paired one-to-one with definitions in terms of heuristic function; yet, for example, deduction is not *defined* as explicating, bringing the implicit newly to light, since an inference in the form ‘*p*, ergo *p*’ is deductive but its conclusion extracts no new or nontrivial perspective from its premisses; and, again, the heuristic merits themselves resist exact definition. On the other hand, if no deduction were ever to make explicit the merely implicit, then no mind would bother with deductive reasoning. The heuristic merits deserve attention because, **in the pervasive absence of all the heuristic merits, no mind would bother with reasoning — explicit, consciously weighed inference — at all.** Deduction would lose as much in general justification and rationale as any other inference mode would. Little in general would remain of inference, conscious or unconscious, mainly such activities as remembering, and free-associative supposing, which are degenerate inferences analogously as straight lines are degenerate conics.

Inferences | Deductive: | Ampliative:
---|---|---
Repletive: | Reversible deduction, e.g.: … ∴ p, ∴ p, ∴ p, ∴ … . Fixated remembrance. | Induction, e.g.: … ∴ p∨q, ∴ q, ∴ qr, ∴ … . Swelling expectation.
Attenuative: | ‘Forward-only’ deduction, e.g.: … ∴ pq, ∴ q, ∴ q∨r, ∴ … . Shrinking notice. | Abductive inference, e.g.: … ∴ p, ∴ q, ∴ r, ∴ … . Wild supposition.

Still, unless the question of whether induction’s essential form is repletive as well as ampliative is settled in the affirmative, it is best to continue *defining* abductive inference as inference to a (more or less plausible) explanation, but one could coin a term such as ‘aliduction’ for inference both ampliative and attenuative, so that the questions become, is all abductive inference aliductive? and vice versa? (One could likewise coin ‘pluduction’ for repletive ampliative inference and ask whether all induction, rightly framed, is pluductive, and vice versa. ‘Equiduction’ and ‘minuduction’ respectively for ‘reversible’ and ‘forward-only’ deductions might offer some convenience, too.)

The Peirce scholar Nathan Houser said in “The Scent of Truth” (*Semiotica* 153–1/4 (2005), 455–466):

But now that abduction is taken seriously, and so much attention has turned to its examination, we find that it is indeed a very slippery conception.

A gain from the ‘hard-core’ definitions based on entailment relations or, just as well, on truth/falsity-preservativeness, would be a non-slippery definition of abductive inference (as inference that is both ampliative and attenuative — the premisses neither entail, nor are entailed by, the conclusions). The very idea of inference by way of both-ways non-entailment evokes, appropriately enough, the notion of somewhat leaping, a guessing; for what it’s worth, it evoked that notion (dauntingly) for me before I (gratefully) read more than a few lines by Peirce on anything or heard of abductive inference. Still, the idea of abductive inference, howsoever defined, daunts or dissatisfies quite a few even when they do read Peirce.

Still, a guess — in the sense of a conjecture or surmise — is an inference insofar as it consists in acceptance of a proposition, even if but tentatively, on the basis of some proposition(s). Now, a guess *ought* to be a bit of a leap, out of a box so to speak, just as a deductive conclusion *ought* to be technically redundant, staying in a box. They are simply different trade-offs between opportunity and security.

Ergo, let guessing seem guessing. Let the definition of abductive inference plainly represent the potential wildness of abductive conclusions, ANALOGOUSLY as the definition of deduction represents the technical redundancy and potential vacuity of deductive conclusions.

Let the potential wildness of abductive conclusions be seen as counterbalanced by the practice, discussed richly by Peirce and exemplifiable in various particular forms, of finding plausibility (natural simplicity), along with conceivably testable implications, *analogously* as the technical redundancy and potential vacuity of deductive conclusions are seen as counterbalanced by the practice, exemplifiable in various particular forms, of finding a new or nontrivial aspect, also conceivable further testability. Analogous remarks can be made about induction, verisimilitude, and testability.

So defined as both ampliative and attenuative, and distinguished as attenuative from induction as repletive, abductive inference would plainly have the *autonomy* that Tomis Kapitan found lacking (in “Peirce and the Autonomy of Abductive Inference” (PDF), Erkenntnis 37 (1992), pages 1–26). In other words, abductive inference would *not* boil down, amid such analysis, to some specialization of deduction or induction. Ideas about natural simplicity, explanatory power, pursuit-worthiness, etc., which contribute to the current slipperiness of *definitions* of abductive inference, would instead be *further* salient issues of abductive inference, neither explicitly contemplated in its definition nor incorporated into the content of all abductive inferences (which incorporation, besides the problems that Kapitan finds, would make one abductive inference into many, just by people’s differing soever fuzzily in the amounts of plausibility, economy, pursuit-worthiness, etc., that they assert in it), just as the somewhat slippery ideas of novelty, nontriviality, predictive power, etc., are *further* salient issues of deduction, neither explicitly contemplated in its standard definitions nor incorporated into the content of all deductions (and such couldn’t usefully be done deductively). Such spartanism at the elementary level need not and ought not go so far as to forbid qualifying the illative relation by saying ‘therefore, abductively,’ or ‘therefore, deductively,’ or the like.

Yet, some slipperiness remains, which the proposed definitions of the inference modes do not entirely remedy. I will take this up in the section “Fairly framing the inference”.

### Fields that aim toward reversible deduction, ‘forward-only’ deduction, induction, and abductive inference.

The highest order of the imaginative intellect is always pre-eminently mathematical, or analytical; and the converse of this proposition is equally true.

— E. A. Poe, “American Poetry”, 1845, first paragraph.

Reciprocation of premisses and conclusion is more frequent in mathematics, because mathematics takes definitions, but never an accident, for its premisses — a second characteristic distinguishing mathematical reasoning from dialectical disputations.

— Aristotle, *Posterior Analytics*, Bk. 1, Ch. 12.

Pure mathematics is marked by far-reaching networks of bridges of equivalences between sometimes the most disparate-seeming things. With good reason, popularizations often focus on examples of the metamorphosic power of mathematics. A topologist once told me that the statement ‘These two statements are equivalent’ is itself one of the most common statements in mathematics. In a mundane example of a bridge by equivalence, in mathematical induction (actually a kind of deduction), one takes a thesis that is to be proved, and transforms it (in a fairly simple step) into the ancestral case and the heredity, conjoined. Once they’ve been separately proved (such is the hard part, also, I’ve read, often done by equivalential deductions), then the mathematical induction itself, the induction step, consists in transforming (again, in a simple step) the conjunction of ancestral case with heredity back into the thesis, demonstrating the thesis. The reasoning in pure mathematics tends to be transformative, from one proposition (or compound) to another proposition equivalent to it and already proved or postulated, or just easier or more promising to work with for the purpose at hand. When one’s scratch work proceeds through equivalences from a thesis to postulates or established theorems, then one can simply reverse the order of the scratch work for the proof of the thesis. Reverse mathematics, a project born in mathematical logic, takes up the question of just which mathematical theorems entail which sets of postulates as premisses. This shows again the prominence of deduction through equivalences in pure mathematics; the **reverse** of the reasoning in pure mathematics is typically still reasoning by pure mathematics (even if with inquisitive guidance from mathematical logic).
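
The equivalential shape of mathematical induction described above can be put schematically (my own rendering in LaTeX, with *P* standing for the thesis's predicate, not a formula quoted from any source):

```latex
% Thesis to be proved: every natural number n has property P.
%   \forall n\, P(n)
% Equivalential transformation into ancestral case and heredity, conjoined
% (right-to-left is the induction principle; left-to-right is immediate,
% since the thesis entails each of its own instances):
\forall n\, P(n)
  \;\Longleftrightarrow\;
  \underbrace{P(0)}_{\text{ancestral case}}
  \;\wedge\;
  \underbrace{\forall k\,\bigl(P(k) \Rightarrow P(k+1)\bigr)}_{\text{heredity}}
```

The hard part, as the text says, is proving the two conjuncts on the right separately; the transformations to and from the thesis are the simple, reversible steps.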

In an example contrasting to that, deduction of probabilities and statistical induction, two neighborly forms of quite different modes of inference, are seen as **each other’s** reverse or inverse, deduction of probabilities inferring (through ‘forward-only’ deduction) from a total population’s parameters to particular cases, and statistical induction inferring in the opposite direction (e.g., in Devore’s Probability and Statistics for Engineering and the Sciences, 8th Edition, 2011, beginning around “inverse manner” on page 5, into page 6). Such deductive fields as probability theory seem to involve the development of applications of pure mathematics in order to address ‘forward problems’ in general, the problems of deducing solutions, predicting data, etc. from the given parameters of a universe of discourse, a total population, etc., with special attention to structures of alternatives and of implications. That description fits the deductive mathematics of optimization, probability (and uncertainty in Zadeh’s sense), and information (including algebra of information), and at least some of mathematical logic.
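
That forward/inverse contrast can be put in miniature. Here is a Python sketch (my own, under a simple binomial model with illustrative numbers, not an example taken from Devore): the forward direction deduces a sample probability from a population parameter, and the inverse direction estimates the parameter from an observed sample by maximizing that same deduced probability.

```python
from math import comb

def sample_prob(pop_frac, n, k):
    """'Forward-only' deduction: from a total population's parameter (the
    fraction that is blue), deduce the probability that exactly k of an
    n-member sample are blue, under a binomial model."""
    return comb(n, k) * pop_frac**k * (1 - pop_frac)**(n - k)

# Forward: the population is 3/5 blue; probability that a sample of 5
# contains exactly 3 blue:
p_forward = sample_prob(3/5, 5, 3)

# Inverse (statistical induction): observe 3 blue in a sample of 5, and
# pick the population fraction that maximizes the probability of that
# observation (a grid-search maximum-likelihood estimate).
candidates = [i / 100 for i in range(101)]
estimate = max(candidates, key=lambda f: sample_prob(f, 5, 3))

print(round(p_forward, 4), estimate)  # → 0.3456 0.6
```

The inverse step lands back on 3/5, mirroring the text's point that the two inferences run in opposite directions over the same population–sample relation.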

Now, inferential statistics should not be nicknamed ‘inverse probability’, an obsolete phrase that comes from De Morgan’s discussion of Laplace and refers to a more specific idea, involving the method of Bayesian probability. On the other hand, the inverse of mathematics of optimization actually goes by such names as inverse optimization and inverse variations. On a third hand, inverse problem theory seems to concern inferring from observed effects to unobserved causes governed by known rules, and this seems a kind of abductive inference, albeit with a special emphasis on knowing the governing rules pretty comprehensively.

It is in the (comparatively) concrete sciences, the sciences of motion, matter, life, and people, that abductive inference takes center stage. I’ll add some discussion here later.

### Fairly framing the inference.

#### Abductive inference and statistical syllogism.

‘It has rained every day for a week, ergo tomorrow it will rain.’ So framed as an argument, that inference is both ampliative and attenuative, hence abductive. But it is just as natural to frame the thought as being, that it has rained every day for a week and that ergo tomorrow it will rain *again*, for the eighth consecutive day, etc., at which point the inference is framed as inductive; that seems much of its spirit. It is also easily restated as a kind of statistical syllogism, that is, as a statistical induction to a premiss for an attenuative deduction — in this case, an induction from seven consecutive rainy days as of today to eight consecutive rainy days as of tomorrow, followed by an attenuative deduction (from that inductive conclusion) to a rainy day tomorrow, period. The restatement does justice to both the expansiveness and the narrowing of focus of the original inference, by framing them separately in component inferences. When an inference, seemingly in a given mode, is so easily analyzed, reduced, into worthwhile component inferences in other modes, then it seems fairer to regard it as basically such a composition. In this case, although the deduction is but weakly elucidative, merely repeating an explicit claim from among a crowd of others in the induction’s conclusion, the induction itself is more intellectually worthwhile, and the statistical syllogism depends for its promising aspect largely on the component induction’s verisimilitude (resemblance of conclusion to premisses).

Now, the famous example of abductive inference — ‘Whenever it rains at night, the lawn is wet the next morning; the lawn is wet this morning; ergo it rained last night’ — could likewise be reframed as an induction (I mean an ampliative repletive inference) followed by an attenuative deduction:

Whenever it rains at night, the lawn is wet the next morning; the lawn is wet this morning;

ergo (inductively) whenever it rains at night, the lawn is wet the next morning; the lawn is wet this morning; and it rained last night;

ergo (attenuatively-deductively) it rained last night.

But the induction would have weak likelihood (verisimilitude) since the premisses state how often a morning’s wet lawn follows a night’s rain, but not how often a night’s rain precedes a morning’s wet lawn. It is weak because it is doing mainly the work of the abductive jump. Such a weak induction, followed by a deduction that merely repeats an explicit claim from the induction’s conclusion, is not worth stating except in order to expose its weak likelihood; except for that, it seems better to cut to the chase, as they say, and go straight to the conclusion in the usual abductive form.

Yet, suppose that the abductive inference is instead:

Whenever the lawn is wet in the morning, it has rained at some time the previous night. [Suppose ignorance of how often, when it rains at some time at night, the lawn is wet the next morning.]

It rained at some time last night.

Ergo (hypothetically), the lawn is wet this morning.

This is not a usual abductive inference to a cause or reason, because it is hard to see how the hypothesized circumstance that the lawn is wet this morning would explain the fact that it rained last night; instead it would confirm the observation of last night’s rain if the observation were not only odd but in doubt (which suggests reframing the hypothesis somehow in terms of a seeming and possibly mistaken observation). The removal of such doubt might or might not be the motivation of the inference. Also it has some natural simplicity in supposing the recent operation of the connection between night rain and morning lawn wetness. Reframing it as involving an induction with weak but non-negligible likelihood seems, again, a weak solution. It is a prediction, so perhaps one ought to reframe it as the deduction of a non-zero probability of a wet lawn in the morning, but one could make the previous example into a deductive retrodiction by the same means. I confess that I don’t know what to do here. Another option is to enrich the picture of how rain leads to lawn (and more generally, land) wetness, where we can see a hydrological cycle. Let’s take a better example of that kind of thing:

Whenever the volcano erupts in the morning, it has rumbled the previous night.

The volcano rumbled last night.

Ergo (hypothetically), the volcano erupts this morning.

This works somewhat better because the volcano’s eruption this morning would explain, in a sense, its rumbling last night; i.e., that rumble, if maybe not some other rumbles in the past or future, is part of a process strongly leading to an eruption as a natural end, a natural culminal stage — not an actual function or purpose, but still an end, a final cause (and, in particular, a cause for concern). Also, this abductive inference seems no less hindered than the previous one by being reframed as an induction followed by an attenuative deduction.

#### Inductive vs. abductive.

Induction as actually framed is often not only ampliative but also attenuative — the conclusions do not always entail the premisses, even though one usually thinks of induction as inferring from a part to a whole including the part.

There are differing ways to reframe the inference ‘⅗ of this actual sample is blue, so ⅗ of the total is blue’ so that its conclusion will entail its premiss, ways that are perhaps to be favored over the example if they reflect better the inquirial interest involved in induction. One could say ‘some subset’ instead of ‘this actual sample’, or characterize this actual sample in the conclusion as well as the premiss (like a concluding graph that represents the actual measurements with a darker line). Such perspectives suit checking consistency by the deducibility of the premiss from the conclusion and, applying probability calculations, deducing what would be the probability of drawing a given subset as a sample given alternate possible sets of parameters of the total population.

Arguably some inferences are framed so simplistically that they are hardly worth discussing, for example those in the form ‘Some *G* is *H*, ergo any *G* is *H*.’ It seems inductive but its conclusion does not entail its premiss; and the conclusion, among possible conclusions, is sufficiently non-superior in both verisimilitude and natural simplicity as to render it hard to decide whether it should be reframed in one way or another by including more data, so as to treat it at least as an over-expansive induction in some direction or an excessively wild abductive inference.

But one does wonder about abducing, as it seems to do, to a rule. Feynman said that we find new laws first by guessing them. C. S. Peirce does at least once discuss a kind of abductive inference that concludes in a “generalization” to a new law (1903, see Essential Peirce v. 2, p. 287, passage at Commens). Peirce in earlier years described generalization as selective of the characters generalized (decreasing the comprehension while increasing the extension, see “Upon Logical Comprehension and Extension”, 1867, Collected Papers v. 2 ¶422, also in Writings v. 2, p. 84), and as casting out “sporadic” cases (“A Guess at the Riddle”, 1877–8 draft, see Essential Peirce v. 1, p. 273). I don’t think that he is merely discussing the removal of outliers, although such removal is arguably an abductive move (but separable from an ensuing induction). If, in the 1903 passage, he is discussing abductive inference by selective generalization quite generically, then any crude induction from the mass of experience seems to count as abductive instead.

Perhaps the abductive generalization involves an explanation by some special hidden circumstance that needs to be generalized in order to make sense, e.g., the hypothesis of a mechanism that would need to be a law in particle physics in order to make sense at all; but it’s not clear how such a generalization automatically involves casting out some seemingly salient aspects of the surprising phenomenon to be explained. On the other hand, it does seem abductive, since it involves a new idea, not to mention a choice from among conflicting possible new ideas.

Now, recall that ‘forward-only’ deduction involves a contraction of the focus of interest; one does not re-state the premisses as part of a categorical syllogism's conclusion, even though the deduction would remain valid; and recall that induction involves an expansion of focus. Abductive inference involves both an expansion and a contraction in such a way as to exchange one focus for another, as by a bit, at least, of a leap. If the focus is not merely on what *happens* to be the case for a given larger population, but on a new rule itself, hypothetical-universal in form, decidedly not asserting positive examples but denying the existence of counter-examples, then a both ampliative and attenuative form such as ‘⅗ of this (or some) actual belt of asteroids is blue, so ⅗ of any belt of asteroids is blue (or *will* or *would be* blue even if asteroid belts were merely possible, not actual)’ might be appropriate, inferring (if not very plausibly) from the instance to the new rule — really, a new *law* — as the new *object* of interest for the time being.

#### Deductive vs. ampliative.

Now, the deductive validity of some schemata in logic, such as ‘∀*G* ∴ ∃*G*’, depends on whether one has stipulated that the universe of discourse is non-empty. Stipulating that deductive validity shall exclude the empty universe amounts to saying, not merely ‘there exists something’, but ‘let every proposition entail that there exists something’ or, equivalently, ‘let “truth” (“T”) be formally equivalent to “there exists something” ’. In other words, the universe’s non-emptiness is taken as a matter of definition, not accident. Generally, I fret that specially stipulated rules of formal implication can lead to complications in distinguishing inference modes from one another. I haven’t seen such issues discussed in texts on classification of inference modes.

1. Maybe such issues are easily resolved.
2. Maybe it’s best to keep the elementary things elementary.
3. Maybe I’m in over my head, but I’ll continue my dive a bit further.
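
The dependence of ‘∀*G* ∴ ∃*G*’ on the universe's non-emptiness is easy to exhibit concretely. A small Python sketch (my own illustration), evaluating both quantifiers over explicit domains:

```python
def forall(domain, pred):
    """∀x G(x): vacuously true over the empty domain."""
    return all(pred(x) for x in domain)

def exists(domain, pred):
    """∃x G(x): false over the empty domain."""
    return any(pred(x) for x in domain)

G = lambda x: True  # any predicate serves to make the schema's point

# Empty universe: the premiss ∀G holds, the conclusion ∃G fails, so
# '∀G ∴ ∃G' is ampliative absent a stipulation of non-emptiness.
print(forall([], G), exists([], G))    # → True False
# Non-empty universe: the inference preserves truth.
print(forall([1], G), exists([1], G))  # → True True
```

With the non-emptiness stipulation in force, the empty domain is excluded from consideration and the schema becomes deductively valid, as the text says.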

#### Abductive inference again.

Now, suppose that one says, ‘Let every proposition entail that, when it rains at night, the lawn is wet the next morning’. It would be, not a rule of strictly logical implication, but instead a rule of, say, meteorological implication, corresponding to a local natural law of weather. In that universe, the premiss that the lawn is wet this morning is entailed, deductively, *formally* implied, by the conclusion that it rained last night. That’s a case where a typical scenario of abductive inference looks like the reverse of (attenuative) deduction, and such a character has been ascribed, by Peirce and others, to abductive inference. That view lends itself to one’s holding a premissual rule to be not just a premiss but a kind of *standing given* entailed, deductively, formally implied, by every proposition in that universe. If one still calls the resulting inference abductive, then one cannot define abductive inference strictly in terms of entailment relations, but has to resort to the comparatively slippery ideas of plausibility, aim at explanation, etc., in order to distinguish it from inductive inference. Yet, it is especially on the basis of its very aim at plausible explanation, that one would argue that one should not so frame the inference and that an involved reverse of an attenuative deduction can be adequately noted instead by saying that, in such an abductive inference, the conjunction of premissual rule and conclusion entails the premissual case that the lawn is wet this morning; in other words, the conclusion switches places with one of the premisses, not with both premisses conjoined. Yet, what if the rule is a rule of, say, special relativity? It’s difficult *not* to think of it, at least for comparatively practical purposes, as a standing given in our actual universe; special relativity is regarded as a practical certainty. 
There seems little if any reason not to be flexible and willing to accommodate such thinking within theoretical models, as long as it is understood that higher-level, theoretical rules chosen or tailored to reflect lower-level (e.g., empirical) rules are not actually true to the lower-level domain by mere definition or stipulation. In that case, the definitions of inference modes by entailment relations will still work at an elementary level that gives the reasoner a kind of basic compass, and one will simply need to keep in mind that allowing much freedom with the formal givens of the universe of discourse can lead to complications for the entailment-based classification of inferences. Put that way, it sounds unsurprising. Maybe I’m making too much of these complications. After all, we already have a situation in deductive logic itself where ‘∀*G*∴∃*G*’ is ampliative absent the stipulation of the universe’s non-emptiness, and deductive otherwise; nobody regards that as a deal-breaker for the ampliative-deductive distinction.

### Word choices.

**‘Repletive’** seems better than ‘retentive’ (although maybe it’s just me), because ‘retentive’ suggests not just keeping the premisses, but restraining them or the conclusions in one sense or another. The word ‘preservative’, to convey the idea of preserving the premisses into the conclusions, would lead to confusion with the more usual use of ‘preservative’ in logic’s context to pertain to truth-preservativeness (and falsity-preservativeness). If people dislike the word ‘repletive’ for the present purpose, then I suppose that ‘transervative’ would do.

**‘Attenuative’** seems much better than ‘precisive’ or ‘reductive’ for non-repletive inference. The word ‘precisive’ seems applicable only abstrusely to an apparently dis-precisive inference of the form ‘*p*, ergo *p* or *q*’. Attenuative inference generally narrows *logical* focus, but in doing so it renders vague (i.e., omits) some of that which had been in focus. ‘Reductive’ may be less bad than ‘precisive’ in those respects, but it is rendered too slippery by irrelevant senses that cling to it from other contexts and debates.

### Semantic discussion: ‘ampliative inference’ ≡ ‘non-deductive inference’.

The question is: does the phrase ‘ampliative inference’ mean simply inference that is non-deductive (as I’ve taken it to mean), or does it mean inference that is both repletive and non-deductive?

Here are excerpts from the Century Dictionary’s definitions of ‘ampliation’ and ‘ampliative’, of which Charles Sanders Peirce had charge:

**ampliation** (am-pli-ā´sho̤n) […] — 3. In *logic*, such a modification of the verb of a proposition as makes the subject denote objects which without such modification it would not denote, especially things existing in the past and future. Thus, in the proposition, “Some man may be Antichrist,” the modal auxiliary *may* enlarges the breadth of *man*, and makes it apply to future men as well as to those who now exist.

**ampliative** (am´pli-ạ̄-tiv) […] Enlarging; increasing; synthetic. Applied — (a) In *logic*, to a modal expression causing an ampliation (see *ampliation*, 3); thus, the word *may* in “Some man may be Antichrist” is an *ampliative* term. (b) In the *Kantian philosophy*, to a judgment whose predicate is not contained in the definition of the subject: more commonly termed by Kant a *synthetic* judgment. [“Ampliative judgment” in this sense is Archbishop Thomson’s translation of Kant’s word *Erweiterungsurtheil*, translated by Prof. Max Müller “expanding judgment.”]

No subject, perhaps, in modern speculation has excited an intenser interest or more vehement controversy than Kant’s famous distinction of analytic and synthetic judgments, or, as I think they might with far less of ambiguity be denominated, explicative and *ampliative* judgments. — *Sir W. Hamilton.*

— Century Dictionary, p. 187, in Part 1: A – Appet., 1889, of Volume 1 of 6, and identically in Century Dictionary p. 187 in Volume 1 of 12, 1911 edition. The brackets around the sentence mentioning Archbishop Thomson are in the original.

Peirce for his own part focused on the deductiveness or ampliativeness of inference, not of ready-made judgments (he once said that a Kantian synthetic judgment is a “genuinely dyadic” judgment, see Collected Papers v. 1 ¶ 475). Peirce argued that mathematics aims at theorematic deductions that require experimentation with diagrams, a.k.a. schemata, and that it concerns purely hypothetical objects. (So much for Kant’s synthetic *a priori*.)

Peirce’s examples of abductive reasoning had premisses that were not only far from entailing their conclusions, but also far (too far for a fair reframing to close the gap) from being entailed by their conclusions; his “ampliative” meant simply the non-deductive, not that which is both repletive and non-deductive. This was so both (A) during the years when he treated abductive inference as based on sampling and as a rearrangement of the Barbara syllogism, and (B) afterwards, in the 1900s. In 1883, Peirce divided “probable inference” into “deductive” and “ampliative”, the latter including hypothetical (i.e., abductive) inference (in “A Theory of Probable Inference”). In 1892, in “The Doctrine of Necessity Examined”, § II, 2nd paragraph, he applied the term “ampliative” to non-deductive inference as follows:

[….] Non-deductive or ampliative inference is of three kinds: induction, hypothesis, and analogy. If there be any other modes, they must be extremely unusual and highly complicated, and may be assumed with little doubt to be of the same nature as those enumerated. For induction, hypothesis, and analogy, as far as their ampliative character goes, that is, so far as they conclude something not implied in the premisses, depend upon one principle and involve the same procedure. All are essentially inferences from sampling. [….]

(Throughout the years, he usually regarded analogy as a combination of induction and hypothetical inference.) During the 1900s, Peirce ceased holding that hypothetical (a.k.a. abductive, a.k.a. retroductive) inference aims at a *likely* conclusion from parts considered as *samples*, and argued that abductive inference aims at a *plausible*, naturally, instinctually simple explanation as (provisional) conclusion and introduces an idea new to the case, while induction merely extends to a larger whole of cases an idea already asserted in the premisses. This does not mean that only abductive inference is ampliative; instead at most it means that only abductive inference is ampliative with regard to ideas, while induction is ampliative of the extension of ideas. (I’m unsure whether Peirce regarded abductive ideas as being definable by *comprehension* a.k.a. *intension* (as opposed to *extension* a.k.a. *denotation*); in a 1902 draft, regarding his past treatment of abductive inference, Peirce wrote, “I was too much taken up in considering syllogistic forms and the doctrine of logical extension and comprehension, both of which I made more fundamental than they really are.” — Collected Papers v. 2, ¶ 102.)
