Interesting Problems in Section 1.2
One of the neat things in this book is that they put new information in the problems. This is nice because it really encourages you to go through the problems to find all the cool extra tidbits.
So, I’ll basically go through for the tidbits and leave the rest to you. Essentially, that’ll mean me going through for bold words in the problem set.
Problem 34: Duals
This is a pretty straightforward notion. The “dual” of a proposition applies when the proposition contains only the basic moves: ∧, ∨, ¬, T, F. You just flip each of them to its opposite: AND becomes OR, OR becomes AND, T becomes F, and F becomes T. Note, the NOT sign stays the same.
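Here’s a tiny sketch of that flipping in Python. The `dual` helper is my own illustration (not from the book): it swaps each ∧/∨ and T/F symbol in a proposition written as a string, leaving variables, ¬, and parentheses alone.

```python
# Hypothetical helper, not from the book: take the dual of a proposition
# written with the symbols ∧, ∨, T, F (¬ and variables are left alone).
DUAL = {"∧": "∨", "∨": "∧", "T": "F", "F": "T"}

def dual(prop: str) -> str:
    """Swap each ∧/∨ and T/F character; everything else stays as-is."""
    return "".join(DUAL.get(ch, ch) for ch in prop)

print(dual("(p ∧ q) ∨ T"))  # → (p ∨ q) ∧ F
```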
I honestly don’t know enough discrete to talk with authority about why duals are useful, so if someone out there has some insight, I’d love to hear it.
The problems on this topic are pretty straightforward. I have to admit though, I was expecting them to toss in a mild curveball and have a → thrown in, forcing you to convert it to a disjunction and THEN take the dual.
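A quick sketch (my own, not the book’s) of that conversion: the conditional a → b is logically equivalent to the disjunction ¬a ∨ b, which you can confirm row by row.

```python
# Check that a → b agrees with ¬a ∨ b on every row of the truth table,
# so you could rewrite the conditional as a disjunction before dualizing.
for a in (True, False):
    for b in (True, False):
        conditional = b if a else True      # truth-table definition of a → b
        assert conditional == ((not a) or b)  # matches ¬a ∨ b
print("a → b ≡ ¬a ∨ b on all four rows")
```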
Disjunctive Normal Form
Don’t read this problem. It’ll confuse you. Here’s all it’s saying:
Say you have some propositions and their negations.
Then, let’s say you write a bunch of logical statements, each of which is one of these types: a single proposition, the negation of a single proposition, or propositions and/or their negations connected by AND signs.
Then, let’s say you take those statements and throw out every one that isn’t true according to your truth table.
Last, you draw OR signs between the remaining statements. Now, you’re in disjunctive normal form.
In other words, each statement is a group whose members must all be true for that statement to be true, and then you connect those statements with OR signs.
So, here’s an example: (A∧B∧C)∨(D∧¬F)
Here’s another: A∨B
Got it? It took me a little while to get it, but it’s a fairly simple idea. You’re basically assembling a big list of every possible way to get a true proposition, then separating each member by an OR sign. This makes a giant compound proposition, which is true so long as any one of its members is true. Got it?
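That “big list of every possible way to get true” can be built mechanically. Here’s a sketch of my own (the function name and setup are mine, not the book’s): for each row of the truth table where the proposition is true, make one AND-group of the variables and negations, then join the groups with OR signs.

```python
from itertools import product

def dnf(variables, truth_fn):
    """Build the DNF of truth_fn: one conjunction per true row
    of the truth table, joined together by OR signs."""
    terms = []
    for values in product([True, False], repeat=len(variables)):
        if truth_fn(*values):
            literals = [v if val else "¬" + v
                        for v, val in zip(variables, values)]
            terms.append("(" + " ∧ ".join(literals) + ")")
    return " ∨ ".join(terms)

# Example: p → q is true everywhere except p true, q false.
print(dnf(["p", "q"], lambda p, q: (not p) or q))
# → (p ∧ q) ∨ (¬p ∧ q) ∨ (¬p ∧ ¬q)
```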
Problems 43–45: Functional Completeness
Here, we are introduced to the idea of functional completeness. Like DNF, functional completeness can be hard on your brain because it’s a very simple idea. In fact, I kinda feel like that’s Discrete in a nutshell.
Functionally complete basically means this: If you have a set of logical moves, and by using some combination of these logical moves you can describe any possible proposition, then your set of moves is functionally complete.
Here’s an analogy: Think of a rook alone on a chessboard. The rook has one legal move – to go in any vertical or horizontal direction as far as you want it to. By using only this move over and over, the rook can reach every square on the board.
Interestingly enough, the same is true of a knight.
The knight has one very specific move it can make, but that move allows it to reach every square. Reaching a given square may take a long, specific sequence of moves (unlike the rook), but you can still hit every position.
On the other hand, the bishop cannot hit every square. No matter how cleverly you push the piece around, it’ll only ever hit squares that are the same color as the one it started on.
So, in this analogy, the first two are functionally complete and the last is not.
In discrete, as Problems 43, 44, and 45 show, you can construct very simple functionally complete sets. NOT, AND, OR together are complete. But just having OR, for example, is not complete, because I can construct propositions (such as if a then b) which cannot be expressed using OR alone.
Interestingly, since De Morgan’s Laws allow you to move from AND to OR and back using a NOT sign, the sets (NOT, AND) and (NOT, OR) are both functionally complete.
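To see one half of that concretely, here’s a small sketch (my own, not from the book) that builds OR out of NOT and AND alone using De Morgan’s law, then checks that it agrees with the real OR on every row of the truth table.

```python
# De Morgan: a ∨ b ≡ ¬(¬a ∧ ¬b), so {NOT, AND} can express OR.
def or_from_not_and(a: bool, b: bool) -> bool:
    return not ((not a) and (not b))

for a in (True, False):
    for b in (True, False):
        assert or_from_not_and(a, b) == (a or b)
print("¬(¬a ∧ ¬b) matches a ∨ b on all four rows")
```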
I was hoping to do this all as one update, but there are still two big concepts remaining, and I’d like to give them their due.
NEXT STOP: NAND AND NOR