Consider the statement “this statement is false.” Is it true or false? Some reflection will show that if it’s true, then it’s false, and if it’s false, then it’s true. What should we make of that? Is it both true and false? Or else what’s going on?

This is a classic logical problem known as the “liar paradox.” The liar paradox, and paradoxes like it, have been subject to immense philosophical attention stretching back thousands of years.

In this post I’m going to present a solution to this classic problem. In a nutshell, I say: ignore it, like almost everybody does. The bulk of the post is spent making this solution rigorous and explaining the basis of my opinion that this is the best way of solving the liar paradox and similar paradoxes, for most purposes.

The workings of the solution lead to a picture of logic which may be new and surprising. In this picture, the rules of logic have exceptions, they are subject to legitimate subjective differences of opinion, and they are not completely rigorous and formal, but are made up as we go along, at least to some extent. In the course of the work I hope to convey a deeper explanation of this picture.

I’m not trying to provide a full picture of how the practice of logic should work under the assumption that the rules of logic are not completely rigorous and formal. I’m providing a solution to paradoxes which exploits that assumption. I hope for other work which builds out a bigger picture of how logic can work when it’s not completely rigorous and formal. I have sketched only a little fragment of what I imagine to be a big space of potentially interesting problems and solutions.

**The problem**

In this post, by “paradox” I will mean, “an argument which proceeds from apparently true premises to a contradictory conclusion.” A contradictory conclusion is a statement of the form (P and (not P)), asserting that a statement P is both true and false.

Here is an example of a paradox: the liar paradox. Consider the statement “this statement is false.” I’ll refer to this statement as L. If L is true, then L is false. If L is false, then L is true. L is either true or false. Therefore L is true and L is false, or in other words, (L and (not L)) is true. The preceding argument, which I call the liar paradox, has a contradictory conclusion, so it is a paradox.

Other examples of paradoxes include Russell’s paradox and the Sorites paradox. In this post I will focus mainly on the liar paradox as a chosen example of a paradox.

This post explains a general method of solving paradoxes.

What does it mean to solve a paradox? Every paradox presents a problem: it appears to imply that something false is true, which is impossible, at least on a traditional understanding of logic. Nothing impossible can occur, but paradoxes appear to be cases where something impossible occurs. In that sense, paradoxes are intellectual, theoretical problems. To solve a paradox is to solve this problem, for example by pointing out which of the paradox’s premises are not true, or by explaining why its conclusion does not follow from its premises.

**This theory, this post, and their history**

I don’t claim the theory I’m giving is fundamentally new. I think it is basically a technical extrapolation of the common way of responding to paradoxes like the liar paradox, according to which these paradoxes can safely be ignored.

I didn’t base this theory on anybody else’s work.

I first described this post’s method of solving paradoxes in my book Winning Arguments. This post aims to describe the method in a quicker and more surveyable way, putting together thoughts that are scattered throughout a wide span of text in Winning Arguments.

This post is aimed at a general philosophical audience. It is not very technical and it doesn’t engage deeply with the scholarly literature on paradoxes. There are over 2,000 scholarly works about the liar paradox cited on PhilPapers. This post talks about one solution to the liar paradox, which is in competition with a huge number of solutions discussed in the weighty corpus of paradox research. I have kept the comparison with competing solutions brief and superficial, relative to the extent of the literature.

I studied the topic of paradoxes in the University of Connecticut Philosophy PhD program, where I learned about paradoxes among lovely and intelligent philosophers including, for example, world paradox experts Jc Beall and David Ripley. I published research related to paradoxes in two of the top journals related to the topic: the Journal of Philosophical Logic and the Review of Symbolic Logic.

The solution to paradoxes which I’m describing in this post is something I began conceiving around 2011, during my undergraduate studies at Arizona State University. I have been developing the idea since then. By the time I left UConn in 2015, I had all the pieces of the theory conceived. It took me until 2017 to put the pieces together, to believe the resulting theory, and to put it down in writing.

This post does not provide my full theory of paradoxes. It only provides the first piece of the theory. This piece of theory is called *the ad hoc method of rejecting incorrigible paradoxes*, or *the ad hoc method* for short.

The ad hoc method deals only with the technical aspect of solving paradoxes. The other part of the theory, to come in a later post, deals with the more philosophical question of why paradoxes happen and how we can explain them.

**Summary of the ad hoc method of rejecting incorrigible paradoxes**

Some arguments have apparently true premises and contradictory conclusions, but some of their premises, though apparently true, are false. An appropriate method of solving such paradoxes is to identify which of the premises are false.

In contrast, this post is mainly concerned with *incorrigible paradoxes*. An incorrigible paradox, by definition, is an argument which leads from true premises to a contradictory conclusion via correct rules of deductive logical inference.

The notion of an incorrigible paradox is implicitly relative to: (a) some language in which arguments can be formulated, (b) some opinion about what rules of deductive logical inference are correct, and (c) some opinion about what statements are true. You will get different, sometimes irreconcilable notions of what constitutes an incorrigible paradox by starting from different choices of (a), (b), and (c).

If a paradox is not incorrigible, then either it has at least one false premise, or its conclusion does not follow logically from its premises, and solving the paradox is a matter of finding a false premise or using logical methods to demonstrate that the conclusion doesn’t follow from the premises.

If a paradox is incorrigible, then the ad hoc method of rejecting incorrigible paradoxes recommends rejecting the conclusion of the paradox and all further inferences from it, understanding this rejection to be an *ad hoc* modification/exception to the rules of logic one employs.

**Assumptions**

The notion of an incorrigible paradox is implicitly relative to some language, some opinion about what rules of logical inference are correct, and some opinion about what statements are true.

In this post, I’m going to choose English as the language in which arguments can be formulated.

I’m going to assume that the rules of classical logic, as described for example by the system LK, are correct rules of deductive logical inference.

I’m going to assume a lot of common sense about what’s true. I will assume that the following statements, which I construe as premises of the liar paradox, are true:

- The statement L = “This statement is false” is an English statement.
- If L is true, then L is false.
- If L is false, then L is true.

From these premises it follows by the rules of classical logic that L is true and L is false, and that (L and (not L)) is true.

The rules of classical logic are always described relative to some formal language, such as the language of first-order logic used in the system LK. In this post I’m assuming the rules of classical logic can be construed as rules governing use of the English language.

Since English isn’t based on a set of precise rules that cover all cases, it is practically impossible to give a precise description of the rules of classical logic as they apply to English.

When logicians apply the rules of classical logic to English, they generally rely on a precise understanding of the rules of classical logic as understood through some formal description such as the system LK, and on a more artistic understanding of how to translate between English and a formal language such as first-order logic. This mapping between English and the formal language yields an imprecise method for applying the rules of classical logic to English. This imprecise method is precise enough to be unproblematic for typical purposes, and for our purposes.

According to the assumptions of this post, an incorrigible paradox is an English-language argument whose premises are true and whose conclusion follows from the premises by the rules of classical logic. By assumptions, the liar paradox is an example of an incorrigible paradox.

The solution to paradoxes of this post will work for basically any choice of language, rules of logic, and concept of truth. The assumptions of this post, as described in this section, are primarily for the purpose of making the discussion concrete. If you dislike the assumptions chosen in this section, you can still use the solution to paradoxes I’m describing, replacing English with your language of choice, replacing classical logic with your logic of choice, and using your choice of assumptions about what statements are true.

**The ad hoc method of rejecting incorrigible paradoxes**

To solve any paradox by the method this theory prescribes, you start by determining whether or not the paradox is incorrigible. This requires answering two questions: whether the premises are true, and whether the conclusion follows logically from the premises. Answering whether the conclusion of an argument follows from the premises by the rules of classical logic is basically a routine process if you are familiar with appropriate techniques. Answering whether the premises are true is in general more complex, and I don’t provide any general method for doing that.

If you show that a paradox is not incorrigible, then you’ve solved the paradox. Either you’ve shown that its premises are not all true, or you’ve shown that the conclusion doesn’t follow logically from the premises. Either way, you’ve shown that there is no problem.

If your analysis concludes that a paradox is incorrigible, then the next step of the method is to reject the conclusion of the argument. One makes the stipulation that the paradoxical argument is to be rejected, even though its premises are true and its logical inferences are valid. This is an *ad hoc* modification/exception to the rules of logic which one employs.

One ends up having not only positive rules of logic, according to which certain inferences are correct, but also negative rules of logic, which supersede the positive rules by stating that inferences which are valid in general are invalid in some particular cases (the cases of incorrigible paradoxes).
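The layering of negative rules over positive rules can be pictured in a small sketch. This is purely illustrative and not part of the theory itself: the positive rules are stubbed out as a lookup table of inferences taken to be valid (a real treatment would use a proof system), and every name and example inference below is hypothetical.

```python
# Illustrative sketch: negative rules of logic superseding positive rules.
# The "positive rules" are stubbed as a lookup table of inferences taken
# to be valid. All names and example inferences are hypothetical.

POSITIVE_RULES_SAY_VALID = {
    # an ordinary modus ponens instance
    (("it rains", "if it rains then the streets are wet"),
     "the streets are wet"),
    # the liar paradox, construed as a classically valid inference
    (("if L is true then L is false", "if L is false then L is true",
      "L is true or L is false"),
     "L is true and L is not true"),
}

# Negative rules: conclusions rejected ad hoc, overriding the positive rules.
NEGATIVE_RULES_REJECT = {
    "L is true and L is not true",
}

def inference_accepted(premises, conclusion):
    """The negative rules are consulted first and supersede the positive rules."""
    if conclusion in NEGATIVE_RULES_REJECT:
        return False
    return (tuple(premises), conclusion) in POSITIVE_RULES_SAY_VALID
```

On this sketch, the modus ponens inference is accepted, while the liar inference, though licensed by the positive rules, is rejected by the negative rule added for it.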

This method leaves the rules of logic one employs perpetually incomplete. If you employ this method, then the rules of logic you follow are unlike (for example) the rules of the system LK in that you lack a precise and complete description of the rules of logic you follow. If you employ this method, then you have the expectation that if you encounter new incorrigible paradoxes in the future, then you will need to extend the rules of logic you employ, by adopting new negative rules to block the new incorrigible paradoxes.

In classical logic, it is possible to infer any statement from a contradiction. If you can infer a statement of the form (P and (not P)) from a given set of premises, then you can infer any statement whatsoever from those premises. This is called the principle of explosion.
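The principle of explosion can be checked semantically with a small truth-table checker: classically, an argument is valid just in case no assignment of truth values makes all premises true and the conclusion false, and no assignment makes a contradiction true, so any conclusion follows vacuously. The helper names in this sketch are my own.

```python
from itertools import product

# A formula is a variable name, or a tuple ("not", f), ("and", f, g), ("or", f, g).
def eval_formula(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not eval_formula(f[1], v)
    if f[0] == "and":
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    if f[0] == "or":
        return eval_formula(f[1], v) or eval_formula(f[2], v)
    raise ValueError(f"unknown connective: {f[0]}")

def entails(premises, conclusion, names):
    """Classical entailment: no assignment of truth values makes all
    premises true and the conclusion false."""
    for values in product([True, False], repeat=len(names)):
        v = dict(zip(names, values))
        if all(eval_formula(p, v) for p in premises) and not eval_formula(conclusion, v):
            return False
    return True

# Explosion: Q follows from (P and (not P)), vacuously, because no
# assignment makes the contradictory premise true.
print(entails([("and", "P", ("not", "P"))], "Q", ["P", "Q"]))  # True
# For comparison, Q does not follow from P alone.
print(entails(["P"], "Q", ["P", "Q"]))  # False
```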

Because of the principle of explosion, the ad hoc method of rejecting incorrigible paradoxes also involves rejecting any variants on a paradoxical argument which can (in other variations) be used to prove any statement. For every argument whose conclusion is a contradiction, there are many variations with conclusions including all statements. All such variants are rejected, in the method.

People may arrive at differing results by this method. Differences on what constitutes an incorrigible paradox may arise from different opinions on what rules of logic are correct, different opinions on what statements are true, and different judgments about what constitutes a variation on an incorrigible paradox, among other possible sources of differences. Different judgments about what constitutes a variation on an incorrigible paradox may arise because I don’t offer any general method for making such judgments; rather, I assume there to be some ineliminable element of artistry involved in making such judgments. This assumption is justified later.

These sources of potential variability in people’s results from this method may be viewed as a consideration against the use of this method. Of course, people arrive at different results in reasoning due to all kinds of causes. I view the mentioned consideration against this method as a valid one, to be weighed against other considerations (and of course I believe this method has considerations in its favor that outweigh the considerations against, for general purposes).

According to the ad hoc method of rejecting incorrigible paradoxes, one augments the rules of logic one follows on an *ad hoc* basis by adding negative rules which exclude the inferences to the conclusions of incorrigible paradoxes. One does this as needed. One also rejects all variations on incorrigible paradoxes which can be used (in further variations) to prove arbitrary statements. I don’t provide strict rules on how to draw the line between such variations on incorrigible paradoxes, and other arguments; I assume there is an ineliminable element of artistry in drawing the line. In other words, I assume that the technique of drawing the line can’t be made completely formal.

Some appropriate questions:

1. How does the ad hoc method of rejecting incorrigible paradoxes compare to other ways of solving paradoxes? Why use this method over competing methods?
2. What’s the reason for the assumption that there is an ineliminable element of artistry in identifying variations on incorrigible paradoxes which can be used (in further variations) to prove arbitrary statements?
3. What are the risks involved in using the ad hoc method of rejecting incorrigible paradoxes?
4. Is there any intuitive picture that explains why this method is correct?

Subsequent sections will speak to these questions. Here is a summary of the subsequent sections.

**Non-classical logic and its successes and limitations with paradoxes** reviews some of the successes and limitations of some competing approaches to solving paradoxes, which are based on restricting the rules of classical logic in some formally defined way, in the form of a so-called non-classical logic. This section looks (in a very high level way) at the pros and cons of approaches using non-classical logic, as compared to the ad hoc method of rejecting incorrigible paradoxes. This section speaks to questions 1, 2, and 3.

**Implications for reasoning** talks at a high level about how this approach to paradoxes works in the context of everyday practical reasoning. This section speaks to question 3.

**Implications for math** talks at a high level about how this approach to paradoxes works in the context of math. This section speaks to question 3.

I will speak to question 4 in a subsequent post.

**Non-classical logic and its successes and limitations with paradoxes**

Classical logic is the prevailing perspective on logic. Classical logic is a system of rules of inference, which exists in many variants, and which is used, taught, and studied far more often than any other paradigm of rules of logic.

A non-classical logic is any system of rules for logical reasoning which is not a form of classical logic. Non-classical logics are a huge and diverse category. PhilPapers cites over 6,000 works on non-classical logics.

An early and important type of non-classical logic is intuitionistic logic. Roughly, this is the logic you get by removing from classical logic the principle that every statement is either true or false.

Another important type of non-classical logic is relevance logic. Systems of logic known as relevance logics aim to disallow inferences known as “paradoxes of relevance,” such as the inference (valid in classical logic) from the premise “the sky is blue” to the conclusion “if the sky is red then the sky is blue.”

Neither intuitionistic logic nor most relevance logics solve the liar paradox and similar paradoxes.

I’ll focus on two categories of non-classical logics which get traction in solving paradoxes such as the liar paradox. These are *paraconsistent/paracomplete logics* and *substructural logics*.

As far as I’m aware, these are the best studied categories of paradox-solving logics, with no competing categories of comparable prominence in the academic literature. Each of these categories includes many different formal systems of logic, and the literature around each of them is extensive and technical. I’ll only provide a very high-level survey of what is going on in these subfields of logic, without going into technical details about any of the systems.

For a more detailed survey of the topic of non-classical logics, I would recommend Graham Priest’s *An Introduction to Non-Classical Logic*. That book does not discuss substructural logics. For an entry point into the field of substructural logics, I would recommend Greg Restall’s article in the Stanford Encyclopedia of Philosophy. Now I’ll proceed to my nutshell survey of these fields.

A *paraconsistent logic* is a system of logic in which some statements can be both true and false, without every statement being true, and without every statement being false.

An example of a paraconsistent logic is LP, the Logic of Paradox. LP is a system of logic in which there are three possible “truth values”: true, false, and paradoxical (meaning both true and false). Every statement is either true, false, or paradoxical.

LP provides simple rules for determining the truth value of any statement as a function of the truth values of the sub-statements that make it up, provided one knows the truth values of the smallest sub-statements. For example, a statement of the form “A or B” has the following truth value:

- “True,” if at least one of A or B is true.
- “Paradoxical,” if neither A nor B is true and at least one of them is paradoxical.
- “False,” if A is false and B is false.

Notice that these three possibilities cover all the possibilities without overlapping, so that the truth value of “A or B” is always determined by the truth value of A, the truth value of B, and the given rules.

LP is similar in design to classical logic, but in LP, you can let some statements be both true and false, without all statements being so. LP allows this by having weaker rules of inference than classical logic: they let you infer less from the same premises, so that in particular the principle of explosion is not valid in LP.
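The failure of explosion in LP can be checked concretely. A common device, which I use in this sketch (the encoding is my own choice), is to represent false, paradoxical, and true as 0, 1/2, and 1, with negation as 1 − x, “and” as minimum, and “or” as maximum; a statement “holds” when its value is true or paradoxical.

```python
from itertools import product

T, B, F = 1.0, 0.5, 0.0   # true, paradoxical (both true and false), false

def lp_not(x):    return 1.0 - x
def lp_and(x, y): return min(x, y)
def lp_or(x, y):  return max(x, y)

DESIGNATED = {T, B}   # the values that count as "holding" in LP

def lp_entails(premises, conclusion, nvars):
    """LP entailment: every assignment of the three values that makes all
    premises hold also makes the conclusion hold."""
    for vals in product([T, B, F], repeat=nvars):
        if all(p(vals) in DESIGNATED for p in premises) \
                and conclusion(vals) not in DESIGNATED:
            return False
    return True

# Explosion fails in LP: Q does not follow from P and not-P, because the
# assignment P = paradoxical, Q = false makes both premises hold but not Q.
print(lp_entails([lambda v: v[0], lambda v: lp_not(v[0])],
                 lambda v: v[1], nvars=2))  # False
```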

A major problem with LP is that its rules are too weak to allow the development of math in the usual manner of proving theorems from a limited set of axioms. I have proven facts about LP which illustrate this general phenomenon. That LP is too weak to allow the development of math in the usual manner is such a broad and imprecise statement that I doubt it can be stated completely and precisely, much less proven.

LP is an example of a paraconsistent logic, meaning it allows for true contradictions without every statement being true.

Moving on to another type of non-classical logic: a *paracomplete logic* is a logic in which some statements can be neither true nor false.

K3, or Kleene 3-valued logic, is an example of a paracomplete logic. In K3, there are three possible truth values: true, false, and neither. K3 has simple rules, exactly parallel to LP’s, for determining the truth value of any statement from the truth value of the statements making it up, provided one is given the truth values of the smallest statements.
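K3 can be sketched in the same style as LP: the truth tables are identical, and only the designated values differ, since in K3 only “true” counts as holding. This is why, for example, the law of excluded middle fails in K3, as the following sketch (names my own) shows.

```python
from itertools import product

T, N, F = 1.0, 0.5, 0.0   # true, neither, false

def k3_not(x):   return 1.0 - x
def k3_or(x, y): return max(x, y)

def k3_valid(formula, nvars):
    """K3 validity: the formula is true (the only designated value)
    under every assignment of the three values."""
    return all(formula(v) == T for v in product([T, N, F], repeat=nvars))

# The law of excluded middle fails in K3: when P is "neither",
# "P or not P" is also "neither", hence not true.
print(k3_valid(lambda v: k3_or(v[0], k3_not(v[0])), nvars=1))  # False
```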

Kripke’s “Outline of a theory of truth” is an example of applying K3 to solve paradoxes like the liar paradox. Kripke’s solution works as far as it goes, but it doesn’t go as far as giving an operational explanation of how to reason in the presence of paradoxes, and it’s not clear how to extend it to work for paradoxes of set theory such as Russell’s paradox.

In any case, K3 is an example of a paracomplete logic, one which allows for statements which are neither true nor false.

Some logics are both paraconsistent and paracomplete. FDE, or First-Degree Entailment, is a simple example. It has four truth values: true, false, both, and neither. It works like a fusion of LP and K3.

Substructural logics are another class of non-classical logics, usually defined in the form of a sequent calculus. A sequent calculus is a manner of defining a system of rules of logical inference. It is based on patterns called “sequents.”

A sequent is a pattern of the form “A1,…,An entails B1,…,Bm,” where A1,…,An is a sequence of zero or more statements, and B1,…,Bm is another sequence of zero or more statements. Usually the word “entails” is written as a turnstile. A basic way of reading the sequent “A1,…,An entails B1,…,Bm” is as saying “if A1,…,An are all true, then at least one of B1,…,Bm is true.”

A sequent calculus tells you how to produce all sequents that are “valid,” by repeatedly applying a small set of rules. The system LK is an example of a sequent calculus, one which defines the rules of classical logic.

The system LK tells you what arguments are valid according to classical logic, if you can translate the arguments into the language of first-order logic. Take an argument to consist of one or more premises A1,…,An, and a single conclusion B, with the claim being that the premises are true and B follows from them. Suppose A1,…,An and B are written in the language of first-order logic. Then the argument in question is valid according to classical logic, meaning its conclusion follows logically from its premises, if and only if “A1,…,An entails B” is a valid sequent in the sense that it can be produced according to the rules of the system LK.

Sequent calculi have rules known as “structural rules.” In the system LK, there are the following structural rules:

- *Weakening:* If you add statements to the premise or conclusion side of a valid sequent, then you get another valid sequent.
- *Contraction:* If a statement appears more than once on the premise or conclusion side of a valid sequent, then merging the duplicate occurrences into one gives another valid sequent.
- *Permutation:* If you reorder the statements on the premise or conclusion side of a valid sequent, then you get another valid sequent.
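The semantic reading of sequents, and the harmlessness of the structural rules under that reading, can be sketched in a toy checker. Here (all names my own) formulas are just propositional variables, and a sequent is checked by the reading given above: every assignment making all of the A’s true makes at least one of the B’s true.

```python
from itertools import product

def sequent_valid(antecedent, succedent, names):
    """'A1,...,An entails B1,...,Bm' holds when every assignment making
    all of the A's true makes at least one of the B's true. Formulas are
    just variable names, enough to illustrate the structural rules."""
    for values in product([True, False], repeat=len(names)):
        v = dict(zip(names, values))
        if all(v[a] for a in antecedent) and not any(v[b] for b in succedent):
            return False
    return True

names = ["P", "Q", "R"]
print(sequent_valid(("P",), ("P",), names))          # True: identity sequent
# Weakening: adding statements to either side preserves validity.
print(sequent_valid(("P", "Q"), ("P", "R"), names))  # True
# Permutation: reordering either side preserves validity.
print(sequent_valid(("Q", "P"), ("R", "P"), names))  # True
# Contraction: merging duplicate occurrences preserves validity.
print(sequent_valid(("P", "P"), ("P",), names))      # True
```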

There is also the rule of **cut**, which is not always classified as a structural rule. Roughly, cut says that if a statement A follows from some premises, and a conclusion follows from A together with further premises, then that conclusion follows from the combined premises, with A “cut out” of the middle. You can look at the precise rule in Wikipedia’s presentation of LK.

Substructural logics are logics which relax/weaken some of the structural rules of a sequent calculus, with the system LK being a prototypical starting point.

One reason some people are interested in substructural logics is that some people see them as a promising avenue for solving paradoxes.

One example of an approach to paradoxes based on substructural logic is an approach advanced by Dave Ripley and others, articulated for example in Ripley’s paper “Paradoxes and failures of cut.” This approach starts with an LK-like sequent calculus, and removes the rule of cut.

This approach avoids the problem seen in the case of LP, that the weakening of logic makes too many things unprovable, so you can’t get things like math off the ground. It avoids the problem because the rule of cut is not necessary for the completeness of the system LK. One can remove the rule of cut from the system LK and exactly the same sequents are valid (derivable) in the resulting system. This is Gentzen’s cut elimination theorem. The catch is that sequent derivations not using the cut rule may need to be much longer than derivations of the same sequent using the cut rule.

The interesting observation is that by starting with this cut-free sequent calculus, one can add a concept of truth which gives rise to the liar paradox, without causing the system to explode the way that classical logic does. Ripley’s theory shows how to reason in the presence of the liar paradox, in a way that works and is operationally understandable.

Ripley has also applied his approach to other types of paradox, including paradoxes of vagueness and paradoxes of set theory. Perhaps it can be deemed a generally successful approach to paradoxes. In my opinion, the only part of Ripley’s approach that is not evidently successful is the approach to the paradoxes of set theory. It remains to be seen how capable Ripley’s set theory is of proving the usual theorems of math. Many failures and difficulties have been encountered in this area. Similarities between Ripley’s set theory and LP set theory may be a warning sign of limitations similar to those found in LP set theory. However, as far as I’m aware, it remains to be seen either way whether or not this set theory can be used effectively as a foundation for math. Resolving this uncertainty would shed light on how generally Ripley’s cut-free approach can be applied to solve paradoxes. Does it work on all the hardest paradoxes, or does it fail to apply to some of the hardest paradoxes?

I have discussed only a small and biased selection of approaches to solving paradoxes using non-classical logic. I have discussed each approach only to a superficial extent. We have only skimmed the surface of this subject. I, personally, have not read most of the academic literature on paradoxes. I am limited by the desire to keep this post’s length reasonable and this section not overly laborious or time-consuming for the reader.

Now I will make some general comments on how the ad hoc method of rejecting incorrigible paradoxes compares to approaches to solving paradoxes based on non-classical logic.

I am not aware of any approach to paradoxes based on non-classical logic which solves all of the hardest paradoxes, including the liar paradox, the Russell paradox, etc., which approach is uniform across such paradoxes, and which approach results in a method that is practically usable. I do not rule out the possibility that such an approach can be constructed. However, my own suspicion is that it won’t happen. My own suspicion is that the paradoxes are an intellectual trick of God which no finite system can completely contain or address. I have no proof of that opinion. My opinion is that the opinion can’t be proven, it can’t be proven that it can’t be proven, and so forth.

All currently existing non-classical logic solutions to paradoxes that I’m aware of, insofar as they are successful, are successful in a limited area. On the other hand, the method I’ve proposed can be applied uniformly to all paradoxes. I’m not aware of a non-classical logic approach to paradoxes which can claim the same.

All non-classical logic approaches to paradoxes involve the complexity inherent in non-classical logic. Learning how to apply the full formal rules of classical logic, as expressed e.g. through the system LK, is a lot to ask of people. Non-classical logics provide more stuff to learn, and they are usually more complex and/or less intuitive and/or harder to apply than classical logic. In most non-classical logic solutions to paradoxes I have seen, there is no clear idea of how to operationalize and apply the ideas. In the cases I’ve seen and can recall, when there is an idea of how to operationalize the solution in principle, there is still not a fleshed out idea of how to do it practically.

In contrast, the method I’ve proposed is easy to apply. Indeed, I think the ad hoc method of rejecting incorrigible paradoxes is the method which most people instinctively apply when presented with the liar paradox. The method is so easy to execute that it doesn’t even need to be taught.

Considering these things, I think it’s fair to say that from a practical perspective, the method I’ve proposed has distinctive advantages over non-classical logic approaches to paradoxes. I think most people will probably be sympathetic to my assessment that at least at this time, the method I’ve proposed is more suitable for most practical purposes, compared to non-classical logic approaches.

Where the non-classical logic approaches are better than the ad hoc method of rejecting incorrigible paradoxes is mainly in the theoretical qualities of exactness, formality, and certainty. The non-classical logic approaches provide systems of rules that are not *ad hoc,* but in principle handle all cases, in their domains of applicability. Meta-logical theorems can provide assurances that certain undesired outcomes will not result from applying the rules.

On the other hand, the method I’ve proposed is inherently inexact, and there is the possibility that applying it will fail to protect one from falsehoods that can be inferred from paradoxes, if one somehow unintentionally invokes a paradox in one’s reasoning. This is an area where non-classical logic approaches have the advantages.

In summary, the ad hoc method of rejecting incorrigible paradoxes has the advantages that it is easy to learn and apply and it works uniformly across all cases, whereas the non-classical logic approaches have the advantages of exactness, formality, and certainty.

The question becomes, which considerations are more important? This depends what is important to you.

If your interest in paradoxes is theoretical, and an exact solution to the problems is simply what you want, then you’ll deem that more important than ease of learning and having a general, easily usable solution on the table.

If you’re interested in this topic for some extrinsic reason, e.g. because you want to understand how to reason in general for all purposes, then you will likely favor the ad hoc approach I’ve presented, because you can apply it today to all your paradoxes. Naturally, it’s up to you how you approach these problems, and how much time you spend thinking about them. I would not discourage you from taking whatever approach to paradoxes seems to you to suit your goals.

In my view, the approaches to paradoxes based on non-classical logic have had such tough going because they are going against a general principle, one I call the law of no perfect system.

In general, the law of no perfect system is that there is no perfect system for doing anything. Whatever your problem is, if it’s above a certain level of complexity, then there is no finite and exact set of rules which solves the problem optimally in all cases.

As applied to this case, there is no perfect set of rules for logic which has everything it can correctly have and nothing else, defined formally and finitely.

Gödel’s first incompleteness theorem can be construed as the law of no perfect system as applied to axioms for math. It implies that there is no set of axioms for math, listable by a computer program, which proves everything true and nothing false about math.

The law of no perfect system, as a general rule of thumb, tells you that it is proper for rules to have exceptions. Because no system of rules is able to handle everything, each particular situation should be considered as a particular situation where exceptions to rules might properly apply.

I think the general law of no perfect system is fairly common sense, but as applied to logic, it goes against the grain of the academic literature. I’m not aware of other defenses, besides the present one, of the law of no perfect system as applied to logic. Please let me know of any you’re aware of.

**Implications for reasoning**

What does the ad hoc method of rejecting incorrigible paradoxes have to tell us about general reasoning for practical purposes?

As far as I can tell, the theory has virtually no implications in this area. For practical purposes, people already reason as if they were following the ad hoc method of rejecting incorrigible paradoxes.

By using the ad hoc method, one accepts the possibility that one will be led from truth to falsehood by following the rules of logic, by stumbling across some unseen paradox. I’m not aware of cases like this, but it is a theoretical possibility that this will happen to somebody and that it will matter for their purposes.

One accepts a certain level of risk and uncertainty around one’s use of logic by using the ad hoc method. To put it in perspective, most people carry a much higher level of risk and uncertainty around their use of logic, because they probably haven’t learned any set of rules for logic, and they probably don’t subscribe to any theory about how to solve paradoxes.

**Implications for math**

The ad hoc method of rejecting incorrigible paradoxes, or for short the ad hoc method, has some practical implications for some theoretical math.

Using the ad hoc method has consequences for set theory. The method lets you do math using naive set theory, in which every set you can describe exists, avoiding the need to use a set theory like ZFC whose paradox-avoiding mechanisms introduce complications and limitations.
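The classic danger of naive set theory is Russell's paradox: unrestricted comprehension lets us form the set of all sets that are not members of themselves, and asking whether that set contains itself yields a contradiction. As a sketch:

```latex
R = \{\, x \mid x \notin x \,\} \qquad \Longrightarrow \qquad R \in R \iff R \notin R
```

Under the ad hoc method, one rejects this particular argument as an incorrigible paradox while continuing to use unrestricted comprehension everywhere else.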

Using the ad hoc method has consequences for category theory. Rigorous presentations of category theory have conventionally used paradox-avoiding mechanisms of varying levels of complexity. The ad hoc method tells us that there is no need for this. Paradoxes arise naturally from category theory, but they can be handled by the usual ad hoc method, without paradox-avoiding restrictions. Those restrictions usually limit what categories you can talk about, preventing you from talking about sensible categories such as the category of all sets, the category of all groups, or the category of all categories. It's nice to do away with them.

Using the ad hoc method has consequences for type theory. For example, the original impredicative formulation of intuitionistic type theory gave rise to a paradox, Girard’s paradox. Subsequent systems introduced paradox-avoiding restrictions which reduced the expressive flexibility of the systems. As in other cases, these paradox-avoiding restrictions can be done away with if one is comfortable with the ad hoc method of rejecting incorrigible paradoxes.

This method can even be applied to dependent type theory as used in software development. People already use logical methods in this context which bear some resemblance to the ad hoc method of rejecting incorrigible paradoxes.

For example, Idris is a computer programming language based on dependent type theory. In Idris, you can prove theorems in your code, and in particular you can apply this capability to prove that your software has desired properties. In Idris, you can also prove any statement, because general recursion is allowed by default, so a non-terminating definition can inhabit any type. Yet this is unlikely to happen by accident. Arguably, Idris’ ability to prove any statement is an outcome of overall-desirable design tradeoffs. Normally you wouldn’t use that ability, and the fact that it’s there does not prevent you from drawing confidence from the proofs which you write and Idris verifies. From such a proof, formulated in a normal fashion free of prove-anything back doors, you can draw practical certainty that the statement proven is true.

Although it can prove any statement, Idris contains paradox-avoiding restrictions (a hierarchy of nested type universes). As far as I can see, those restrictions are not necessary. They’re not necessary to keep the system from proving arbitrary statements, because Idris developers have built the capability to prove any statement into the system anyway. Probably there is approximately nothing to lose from dropping the paradox-avoiding restrictions and including the axiom Type : Type, which allows paradoxes to be proven. This modification would (arguably) make the system simpler and nicer to use.
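To illustrate the point about provability (a sketch of my own, not drawn from any particular Idris codebase): because totality checking in Idris is opt-in, a definition using general recursion typechecks and can inhabit any type.

```idris
-- Sketch: general recursion lets us "prove" any proposition.
-- Idris rejects this only if totality checking is enabled (e.g. %default total).
anything : (p : Type) -> p
anything p = anything p

-- For example, a "proof" of a false equation:
bogus : 0 = 1
bogus = anything (0 = 1)
```

This is the sense in which the prove-anything capability is already present; the universe hierarchy guards against a different route (Girard's paradox) to the same destination.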

Please comment if you’re aware of paradox theories similar to the one laid out here, or to share any paradox research that you think I might find interesting.
