**Quantifiers with Restricted Domains**

This is a cool little section that just shows you how to construct bigger statements. The example they give is “∀x<0 (x² > 0).”

I read that as “for all x less than zero, x squared is greater than 0.”

Or, in more human terms, “if you square a negative number, you get a positive number.” Here, we’re also assuming that we’re talking about “real numbers.” More on that later.

It should be pretty clear how to do any version of the above type of statement with any quantifier. All you’re saying is “for some portion of a set of x values, the following is true.”
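One way to see a restricted quantifier in action is to test it over a small finite sample. Here's a quick Python sketch (the sample values are just illustrative, not part of the book's example) checking “for all x < 0, x² > 0”:

```python
# Checking "for all x < 0, x squared > 0" over a finite sample of reals.
# A restricted universal quantifier is just a filter on the domain:
# we only test the values where the restriction (x < 0) holds.
sample = [-3.5, -2, -0.5, 0, 1, 2.7]

holds = all(x * x > 0 for x in sample if x < 0)
print(holds)  # True
```

Of course, a finite check isn't a proof over all the reals, but it's a handy way to see that the restriction is really just a filter on the domain.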

**Precedence of Quantifiers**

Simple here – the quantifiers always come first. This makes sense because you’re defining the set on which you’re operating. Pragmatically, just remember that the quantifiers all go on the left.

**Binding Variables**

We’ve got some terms here, so let’s define them:

1) Bound – If you have a statement with a variable, and on the left of it there’s a quantifier for that variable, that variable is bound.

A variable is also bound if it’s simply given a value.

2) Free – If the variable isn’t bound, it’s free.

3) Scope – The chunk of a statement to which a quantifier applies.

The book gives a nice opening example:

∃x (x+y=1)

I read that as “there is at least one x for which x+y=1.”

By (1), we say that x is bound. So, “bound” just means that we stuck a quantifier on it. It could also have been universal or it could’ve been bound to have exactly 2 cases or 3 or whatever.

By (2), we say that y is free. There is no quantifier on the left that says anything about y, and it isn’t assigned a value either.

By (3), we can say x is inside the scope of the quantifier and y is outside its scope.
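The bound/free distinction shows up naturally in code: a bound variable lives only inside the quantified expression, while a free variable has to come from outside, like a function parameter. Here's a sketch over a small stand-in domain (the function name `exists_x` and the domain are my own choices, not from the book):

```python
# In "there exists x such that x + y = 1", x is bound by the quantifier
# and y is free. In code: x is local to the any() expression, while
# y must be supplied from outside as a parameter.
domain = range(-10, 11)  # a small finite stand-in for the integers

def exists_x(y):
    # "there is at least one x for which x + y = 1"
    return any(x + y == 1 for x in domain)

print(exists_x(0))  # True  (x = 1 works)
print(exists_x(5))  # True  (x = -4 works)
```

Notice that the statement's truth value depends on what you plug in for the free variable y, which is exactly why a formula with a free variable isn't yet a proposition.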

**Logical Equivalences Involving Quantifiers**

This section just explains that different arrangements involving logical quantifiers can be shown to be logically equivalent. They note that the following two statements are equivalent:

1) For all x, P(x) and Q(x) are true.

2) For all x, P(x) is true and for all x, Q(x) is true.

So, for example, if the domain of x is all people, we can define P(x) as “x is a saint” and Q(x) as “x is a sinner.”

The first statement ∀x(P(x)∧Q(x)) simply says “all people are saints and sinners.”

The second statement ∀xP(x)∧∀xQ(x) says “all people are saints and all people are sinners.”

It’s pretty straightforward that those are the same, I think.
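You can sanity-check the equivalence over a finite domain in Python. The domain and predicates below are arbitrary placeholders I picked for illustration:

```python
# Checking that "for all x, P(x) and Q(x)" agrees with
# "for all x, P(x)" AND "for all x, Q(x)" on a finite domain.
domain = range(5)
P = lambda x: x % 2 == 0   # arbitrary example predicate
Q = lambda x: x < 10       # arbitrary example predicate

lhs = all(P(x) and Q(x) for x in domain)
rhs = all(P(x) for x in domain) and all(Q(x) for x in domain)
print(lhs == rhs)  # True (here both sides are False, since P fails at x = 1)
```

Note that the same trick does not work if you swap the “and” for an “or” — ∀x(P(x)∨Q(x)) is weaker than ∀xP(x)∨∀xQ(x) — which is part of why the book singles out this particular pairing.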

**Negating Quantified Expressions**

This one can be a liiiittle hard on the brain, so I’ll go through it intuitively, then give you how I visualize it for mnemonic purposes.

Here’s the basic principle:

1) ¬∀xP(x) ≡ ∃x¬P(x)

2) ¬∃xP(x) ≡ ∀x¬P(x)

For the sake of this discussion, x ranges over the set of all people and P() means the person in question is a saint. So, P(x) means “person x is a saint.”

In human terms, the left side of (1) says “It is not the case that all people are saints.” The right side of (1) says “There is at least 1 person who is not a saint.”

Hopefully you can see readily why these are the same. One clearly implies the other – if less than everyone is a saint, we don’t know how many people are not saints, but we know that at least one person must not be a saint.

Similarly, the left part of (2) says “It is not the case that at least one person is a saint.” The right says “For all people, none are saints.”

Again, it should be clear why these are the same. The first says “not even one person is a saint.” The second says “none are saints.”

These are called De Morgan’s Laws for quantifiers. You can hopefully see how they relate to De Morgan’s laws from earlier, with the universal quantifier being like the conjunction and the existential quantifier being like the disjunction.

Here, I use the same visual mnemonic to remember the rules. If you have a NOT sign, you can punch it through the quantifier, but only by flipping it from universal to existential or vice versa.

The neat thing is, once the not sign is inside the quantifier, it can wreak even more havoc by punching through your statements via De Morgan’s laws.

A way that is perhaps deeper than my “not sign is a battering ram” to think about it is to remember that “NOT” is sort of like a switch. Not-on is off. Not-off is on. Not-happy is sad. Not-sad is happy. Not-both is one or the other or neither. Not-one or the other or neither is both.

So, when you run that NOT sign through your statements, you’re basically having it flip switches as it goes.

You may say to yourself, “why do I need all these fancy signs for something so intuitive?” The thing is, it won’t always be intuitive, at least not at first. Your human brain can’t very well deal with giant statements with tons of variables. But your human brain (and your human-made computer) can readily deal with simple rules such as De Morgan’s laws. By knowing this small set of operations, you can deal with big scary statements, which may allow you to arrive at surprising conclusions.
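In Python terms, De Morgan's laws for quantifiers are exactly the relationship between `all()` and `any()`. A finite-domain sanity check (the predicate below is just an arbitrary example):

```python
# De Morgan's laws for quantifiers on a finite domain:
#   (1) not-forall P  <=>  exists not-P
#   (2) not-exists P  <=>  forall not-P
domain = range(10)
P = lambda x: x > 3  # arbitrary example predicate

law1 = (not all(P(x) for x in domain)) == any(not P(x) for x in domain)
law2 = (not any(P(x) for x in domain)) == all(not P(x) for x in domain)
print(law1, law2)  # True True
```

This is the “battering ram” in executable form: pushing `not` through `all` turns it into `any`, and vice versa.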

**Next up: Translating from English into Logical Expressions.**
