Here is a recursive definition version of the same thing.
X is in Power({}) iff X = {}
X is in Power(S union {a}) iff (X is in Power(S)) or (X - {a} is in Power(S))
It should be clear how to prove that Power(S union {a}) has 2^(n+1) elements if you assume that Power(S) has 2^n elements (and that a is not in S). That is, each element X in Power(S) contributes precisely 2 distinct elements to Power(S union {a}), namely X and X union {a}.
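Here is a quick Python sketch of that recursive definition (the function name power is mine, not from the text): peel off an element a, build Power(S) for the rest, and let each X contribute X and X union {a}.

```python
def power(s):
    """Power set via the recursive definition: Power({}) = { {} },
    and Power(S | {a}) pairs each X in Power(S) with X | {a}."""
    s = set(s)
    if not s:
        return [set()]  # Power({}) contains only the empty set
    a = next(iter(s))
    rest = power(s - {a})  # Power(S), where S = s - {a}
    # each X in Power(S) contributes precisely 2 elements: X and X | {a}
    return rest + [x | {a} for x in rest]
```

For example, power({1, 2, 3}) has 2^3 = 8 elements, matching the count above.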
Well, let S denote the recursively defined set:
-- 4,5 are both in S
-- if x and y are in S, then so is x+y.
It should be clear that a number is in S iff it is an amount that can be obtained using 4 and 5 cent stamps.
Strong or second principle of induction: [P(1) ^ ( [P(1) ^ P(2) ^ ... ^ P(n)] --> P(n+1) )] implies P(n) holds for all n >= 1.
Sometimes with this second principle of induction we have to establish more than just the base step n=1. Not because the principle says you have to, but because we can only carry out the proof of ( [P(1) ^ P(2) ^ ... ^ P(n)] --> P(n+1) ) for values of n larger than some fixed number. Then we need to go back and do the smaller values of n individually. Let's start with the inductive step and see what happens. We take some n >= 12 and assume that 11 < k <= n implies that k is in S, i.e. we are assuming P(k) for each such k. Well, if we knew that n-3 was in S, then, knowing that 4 is in S, we could deduce that (n-3)+4 = n+1 is in S. Now n-3 lies in our assumed range exactly when n-3 >= 12, so this step is valid for all n >= 15. To finish the proof we just have to check that 12, 13, 14, and 15 are in S some other way. Well, just use inspection:
12=4+4+4, 13=4+4+5, 14=4+5+5, 15=5+5+5.
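A small Python check of all this (the helper in_S is my name; the seed 0 is just bookkeeping so that 4 = 0 + 4 and 5 = 0 + 5 count as reachable):

```python
def in_S(n):
    """Is n obtainable as a sum of 4's and 5's?  Builds up the set of
    reachable amounts exactly as the recursive definition of S does."""
    reachable = {0}  # bookkeeping seed: the empty sum
    for amount in range(1, n + 1):
        # amount is reachable if removing one 4-cent or 5-cent stamp
        # leaves a reachable amount
        if (amount - 4 in reachable) or (amount - 5 in reachable):
            reachable.add(amount)
    return n in reachable and n > 0
```

It reports that 11 is not in S but that every n from 12 on (as far as you care to test) is, matching the induction above.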
1+nh <= (1+h)^n .
Check this for n=0: 1 <= (1+h)^0 = 1.
Now assume it is true for some fixed n, and let's prove it for
n+1. We must prove that 1+(n+1)h <= (1+h)^(n+1)
and we can assume that 1+nh <= (1+h)^n. Don't forget we also
know that n>= 0, and h>-1.
Well, 1+(n+1)h = 1+nh + h <= (1+h)^n + h by the inductive
hypothesis.
Now, look at (1+h)^(n+1) = (1+h)^n * (1+h) = (1+h)^n + h*(1+h)^n.
So by the above two lines, it is enough to prove that
h <= h*(1+h)^n.
If h > 0, then dividing both sides by h gives the equivalent
inequality
1 <= (1+h)^n, which is true because 1+h > 1 (in this
case).
Now assume that h=0, and just check that h = h*(1+h)^n (both sides are 0).
Finally assume that -1 < h < 0, and again divide both sides
by h, but since h is negative this reverses the inequality:
1 >= (1+h)^n, which is true because 0 < 1+h < 1.
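As a sanity check (not a proof), here is a small numerical test of the inequality; the tolerance 1e-12 is my own allowance for floating-point error:

```python
def bernoulli_holds(h, n):
    """Check 1 + n*h <= (1+h)**n for a single h > -1 and integer n >= 0."""
    return 1 + n * h <= (1 + h) ** n + 1e-12  # small float tolerance
```

Running it over a grid of sample values of h > -1 and n >= 0 never finds a counterexample, as the induction predicts.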
If n=1, then n^3+2n=3, so 3|3.
Assume that 3 divides n^3+2n and now think about (n+1)^3+ 2(n+1).
Well let's multiply it out to see if we can get the inductive
assumption to help:
(n+1)^3 + 2(n+1) = n^3 + 3n^2 + 3n + 1 + 2n + 2 = (n^3+2n) +
(3n^2+3n+3)
Inductive assumption says that 3|(n^3+2n), so we just have to
prove that 3|(3n^2+3n+3), which is obvious.
Here's a nice proof without induction. It is equivalent to asking whether n^3+2n is congruent to 0 mod 3. Since 2 == -1 (mod 3), we can say that n^3+2n == n^3 - n == n*(n^2 - 1) == n*(n-1)*(n+1) (mod 3). But this last expression is the product of 3 consecutive integers, so one of them is a multiple of 3.
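Both arguments are easy to spot-check in Python (the function name is mine):

```python
def divisible_by_3(n):
    """Check both that 3 | n^3 + 2n and that n^3 + 2n agrees with
    n(n-1)(n+1) mod 3, as in the proof without induction."""
    same_mod_3 = (n**3 + 2 * n) % 3 == (n * (n - 1) * (n + 1)) % 3
    return same_mod_3 and (n**3 + 2 * n) % 3 == 0
```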
It is very clear for n=1.
Assume this is true for any list of numbers a_1, ..., a_k for any
k less than or equal some fixed n. I.e. we are assuming P(1),
... P(n). Now consider n+1 numbers.
We can write a_1 * ... * a_(n+1) as c * a_(n+1), where
c = a_1 * ... * a_n. By the inductive hypothesis we have that
gcd(b, c) = 1 and by assumption we have that gcd(b, a_(n+1)) = 1.
Now c * a_(n+1) is the product of 2 numbers and b is relatively
prime to both of them. If n > 1, then we can just use P(2) to
deduce that gcd(b, c * a_(n+1)) = 1. So this handles every case
except n=1; it remains to check P(2) directly.
You don't often get
away without having to actually prove something!
This case, P(2), follows easily from the following result in the book: if d|(a*c) and gcd(d,a)=1, then d|c. So, assume for contradiction that d = gcd(b, a_1 * a_2) > 1. Now clearly d divides a_1 * a_2 and also b. Since gcd(b,a_1)=1 and every divisor of d also divides b, no divisor of d greater than 1 can divide a_1. Therefore gcd(d,a_1)=1, and since d | a_1*a_2, the result above allows us to deduce that d|a_2, contradicting gcd(b,a_2)=1.
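As a sanity check of P(n) itself, math.gcd makes this easy to test (math.prod needs Python 3.8+; the precondition assert mirrors the hypothesis that b is relatively prime to each a_i):

```python
from math import gcd, prod

def coprime_to_product(b, nums):
    """If gcd(b, a_i) = 1 for every a_i, then gcd(b, a_1*...*a_n) = 1."""
    assert all(gcd(b, a) == 1 for a in nums), "hypothesis of P(n)"
    return gcd(b, prod(nums)) == 1
```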
Question 6 P209. Give a recursive definition of a_n where
But isn't that interesting: keep adding 2n+1 (twice the n'th number, plus one) and the expression for a_n results.
Another summation formula for free: notice that by the recursive definition of a_n, a_n is equal to the sum 1+3+...+(2n-1).
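Assuming the recursion is a_1 = 1, a_(n+1) = a_n + (2n + 1) (my reading; the closed form n^2 below is my addition), a quick Python check confirms that it agrees with the sum 1+3+...+(2n-1):

```python
def a(n):
    """Assumed recursion: a_1 = 1, a_(n+1) = a_n + (2n + 1)."""
    return 1 if n == 1 else a(n - 1) + 2 * (n - 1) + 1
```

For each n, a(n) equals both the sum of the first n odd numbers and n^2.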
Easy induction. Check it for n=1
directly. Ok, did you do it?
Now assume it is true for all k <= n. The sum of the squares
up to f_(n+1) is equal (by the induction hypothesis) to
f_n * f_(n+1) + (f_(n+1))^2. Factor out f_(n+1) to get
f_(n+1) * (f_n + f_(n+1)) and, hey!, that's the definition of
f_(n+2).
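The identity f_1^2 + ... + f_n^2 = f_n * f_(n+1) is also easy to verify numerically (fib below uses the convention f_1 = f_2 = 1):

```python
def fib(n):
    """f_1 = f_2 = 1, f_(n+2) = f_(n+1) + f_n; valid for n >= 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def sum_of_squares_identity(n):
    """Check f_1^2 + ... + f_n^2 == f_n * f_(n+1)."""
    return sum(fib(k) ** 2 for k in range(1, n + 1)) == fib(n) * fib(n + 1)
```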
Check it for n=1 and why not do it for n=2 also just in case we need it. I'll wait a minute while you do that.
But maybe maxbinary(a_1,...,a_n) might be more efficient:
maxbinary(a_1,a_2) := max(a_1,a_2) (as above), and
maxbinary(a_1,...,a_n) := maxbinary( maxbinary(a_1,...,a_m),
maxbinary(a_(m+1), ..., a_n) ), where m = [(1+n)/2] just like in
the binary search algorithm.
Did you notice the glitch? I had better put in max(a_1)=a_1 or else put in a test for the case that n or m is just 1.
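Here is that sketch in Python with the glitch fixed by a base case for a single element (maxbinary here takes one list rather than separate arguments; nonempty input is assumed):

```python
def maxbinary(a):
    """Divide-and-conquer maximum of a nonempty list."""
    if len(a) == 1:
        return a[0]  # the missing case: maxbinary(a_1) = a_1
    m = (1 + len(a)) // 2  # split point, just like in binary search
    return max(maxbinary(a[:m]), maxbinary(a[m:]))
```

With len(a) == 2 this reduces to max(a_1, a_2), matching the two-element case above.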
It is clear that everything in S is a polynomial with integer coefficients, but is it clear that every such polynomial is in S? Prove it by induction.
For k=n+1, i.e. f_(n+2) * f_n - (f_(n+1))^2,
we have to try to reduce it to a previous case. Let's
try replacing f_(n+2) by f_(n+1) + f_n and see what happens:
(f_(n+1) + f_n) * f_n - (f_(n+1))^2 =
f_(n+1) * f_n + (f_n)^2 - (f_(n+1))^2.
Now, after trying several different things, I discovered that I
should replace just one of the f_(n+1)'s in the square:
f_(n+1) * f_n + (f_n)^2 - f_(n+1)*(f_n + f_(n-1)) =
(f_n)^2 - f_(n+1)*f_(n-1) (the other two terms cancel).
But this is the negative of the n-th term, hence it equals
(-1)^(n+1), which is what we needed.
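A quick numerical check of the identity f_(k+1) * f_(k-1) - f_k^2 = (-1)^k (this is Cassini's identity; k >= 2 so that f_(k-1) is defined with the convention f_1 = f_2 = 1):

```python
def fib(n):
    """f_1 = f_2 = 1, f_(n+2) = f_(n+1) + f_n; valid for n >= 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def cassini(k):
    """Check f_(k+1) * f_(k-1) - f_k^2 == (-1)^k for k >= 2."""
    return fib(k + 1) * fib(k - 1) - fib(k) ** 2 == (-1) ** k
```

Taking k = n+1 gives exactly the expression handled in the inductive step above.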
Do the general case similarly. By the previous case we may assume that w starts with a 0 and ends with a 0. Check that if w had at least two more 0's than 1's, then it would be easy (i.e. the induction hypothesis and w = 0x, both in S). Starting from the left, count 0's and 1's of w (it starts with a 0) until you come to the first place where the number of 0's and 1's up to that point are equal. There is such a place before the last position because we know that w ends with a 0 and only has one extra 0. Go back one position and call the string up to there x. Now w = x1y for some y (which ends in 0, but that doesn't matter). You must now just argue that both x and y are in S.