x is an **interior point** of a set S if you can make a ball of some size centered at x and the ball is entirely included in S. Then what? x is an interior point, and the set of points like x is the **interior of S.**

- boundary point: if you draw a ball centered at the point y, and every radius you try gives a ball that is somehow half in and half out of S, we call y a boundary point of the set, and the set of points like y we call the **boundary of S.**
- Open Set: A set is called open if it contains none of its boundary points, e.g., {x|2<x<3} (i.e., if every point in the set is an interior point).
- Closed Set: A set is called closed if it contains all of its boundary points, e.g., {x|2≤x≤3}.
- Closure: The closure is the union of the set and its boundary. Essentially, it is just the closed version of a set. E.g., a water ball: it's filled with water inside and has an outside skin so we're able to grasp it.
- neighborhood: It's really the same idea as an interior point, just a different perspective: the perspective of the dots. What I mean is, for a dot, a set is a neighborhood of the dot if the set contains the dot as an interior point.
- 1.4 proposition (a): S is open ↔ every point in S is an interior point. (No boundary points!)
- 1.4 proposition (b): S is closed ↔ S^c (hey, this is my S complement) is open. (E.g., S = {x|2≤x≤3} is closed, and S^c = {x|x<2 or x>3} is open.)
- Some Weird Examples of open and closed sets [http://science.kennesaw.edu/~plaval/math4381/openclosed.pdf]

1) R is both open and closed.

– The reason: every point in R is an interior point of R (i.e., we can draw balls of any size, which is fu..n?), so R is open by the definition of an open set. Then the complement of R, Ø, should be closed. However, Ø is also open, because it has no boundary points (i.e., it's empty), so its complement R is closed too. Hence R is both open and closed. Doesn't that contradict proposition 1.4? Hey, you silly. Proposition 1.4 never told you that a set is only either open or closed (i.e., it can be both!)

2) Ø is also both open and closed.

– The reason: because R is both open and closed, and Ø is its complement.

3) [a, b) is neither open nor closed.

– The reason: firstly, it is not open, because a is a boundary point belonging to the set (i.e., not every point in the set is an interior point). Secondly, it is also not closed, because it doesn't include all of its boundary points (i.e., it is missing b). Hence, it's neither open nor closed. ** One TA gave me a good insight about openness: if every point you pick is contained in an open interval inside the set, the set is open! Take a look at PS3!

- Convexity and Open Balls: an open ball is convex. (Careful: not every open set is convex, but balls are.) By a convex set we mean: every point in the set and all the line segments connecting such points should be in the set.

An open ball is, by definition, the set of all points x such that |x - a| < r. Let b and c be in the open ball; then it's like this:

Since |b - a|, |c - a| < r, any point y on the segment connecting b and c satisfies |y - a| < r (a convex combination of two things smaller than r is still smaller than r). So y is in the open ball.
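If you like, here's a quick numerical sanity check of the convexity claim (a ball and two points of my own choosing, not from the textbook):

```python
import math

def in_open_ball(p, center, r):
    """True when |p - center| < r in the Euclidean norm."""
    return math.dist(p, center) < r

# A made-up open ball in R^2: center a = (0, 0), radius r = 1.
center, r = (0.0, 0.0), 1.0
b, c = (0.5, 0.3), (-0.4, 0.6)      # two points inside the ball

# Sample the segment y(t) = (1 - t)*b + t*c and check it stays inside.
all_inside = all(
    in_open_ball(((1 - t) * b[0] + t * c[0], (1 - t) * b[1] + t * c[1]), center, r)
    for t in (i / 100 for i in range(101))
)
print(all_inside)  # True: every sampled segment point lies in the ball
```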

- limit of a function: a real number that the values of the function approach within a restricted neighborhood. (I.e., take an interval, and you can see there is a cluster, and the cluster is centered at a real number; then that real number is the king = the limit. One person named f(k) from the interval country: "Hey, let me tell you a secret. There is a funny story that OUR KING might not be the same species as us." I.e., the King, the limit, doesn't need to be a function value.)

Like at x = 1 below, the limit is 2 but it's not a function value. OUR KING is some other species. (Alien!!)

- continuity: if OUR KING is our own species (the limit equals the function value), then the function is continuous.

Challenge Problem: f(x, y) = xy/(x^2 + y^2) (for (x, y) ≠ (0, 0)) doesn't have a limit at the origin. Why?

– The reason: well, the numerator and denominator both go to 0, so you might be tempted to apply l'Hôpital's rule. However, I haven't learned anything about multivariable calculus yet, so we need to find some other way to prove this, which is great. Let's think of a line in R^2 that goes through the origin, "y = cx". Then we can substitute, and xy/(x^2 + y^2) becomes cx^2/((c^2 + 1)x^2), which goes to c/(c^2 + 1) as x approaches 0. Since this value depends on c, different lines give different answers, so the limit doesn't exist. (I.e., there is no single cluster, no followers, hence no King.)

You think you can't imagine the limit in R^k for k > 2? Well, I couldn't either, but eventually I understood. Just think of the carpet on your floor and lift it with your thumb and index finger so it's kind of a pointy hat. Now look what you got! You grasp the king (the limit) of our limit world between your fingers. Imagine the weird R^3-looking function as your carpet: you just lifted it a bit, and the pointy part is OUR KING. Capturing a king was so easy.
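Here's a tiny numerical version of the line trick (my own sketch, not from the book): approaching the origin along different lines y = cx gives different values, so no single limit can exist.

```python
def f(x, y):
    return x * y / (x ** 2 + y ** 2)

# Evaluate very close to the origin along the line y = c*x; algebraically
# the value is c / (1 + c**2), which depends on the slope c.
limits = {c: f(1e-8, c * 1e-8) for c in (0.0, 1.0, 2.0)}
print(limits)  # values settle at c/(1+c**2): 0.0, 0.5, 0.4 (up to rounding)
```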

- Continuity of composite functions: if f is continuous and g is continuous, then f∘g is continuous.

Q: Let f1(x,y) = x + y, f2(x,y) = xy, f3(x) = 1/x, f4(x,y) = x - y, f5(x,y) = x/y. Let's prove f4 and f5 are continuous on {(x,y) | y ≠ 0} using f1–f3.

A: Blah, blah stuff. Well, it sounds like we need to use the composite function property. They are all functions on either R^1 or R^2, and f4 can be made as a(x,y) + b(x,y), i.e., f1(a(x,y), b(x,y)). Why? Well, there is no "-" in any other function, so we can't get subtraction from anywhere else; we need "+", and the only "+" is in f1. It is just my instinct, so we will see whether it's right or not. (If it doesn't work out, I could just delete it, so I wouldn't be too embarrassed.) Then how can we make a(x,y) = x and b(x,y) = -y? Let's see. Oh! f3(x)·(-1) = -1/x and f2(x,y)·(-1/x) = -y. So b(x,y) = f2(f2(f3(x), -1), f2(x,y)). Sigh. On to a(x,y) = x = xy·(1/y) = f2(x,y)·f3(y) = f2(f2(x,y), f3(y))! Yay. Hence f4(x,y) = x - y = f1(f2(f2(x,y), f3(y)), f2(f2(f3(x), -1), f2(x,y))).

You know what? That was the best proof that I'm super silly! Haha, my textbook gives this answer: **Textbook answer >> f1(x, f2(-1, y))!** My answer would be **THE PERFECT ANSWER** only if you wanted to mock your teacher. There is a saying in Computer Science, **"KISS" (Keep It Simple, Silly!)**. I know computer science is a bit of a b**ch subject. Anyway, I got a good lesson, and I hope you also learn something from this silly action of mine. (You can at least learn that I'm super silly.)

Oh, and we almost lost the key point! So, is f4 continuous? The answer is... YES. Because it's simply a composition of continuous functions! You might ask: why? Why the hell is a composition of continuous functions continuous? Well, there is a proof, which seems complicated to me (it would've been good if I could have taught the KISS principle to the author of my textbook before he wrote any proof), so I'll just explain it.

So if f and g are continuous, then f∘g is continuous. What does that mean? Well, f and g being continuous is the same as f and g being nice bridges without any holes in the middle. What does making a composite function from them mean? It means we just connect the two bridges. So now you tell me: can I cross the bridge called f∘g or g∘f? Please be responsible for your answer! We're trying to cross a bridge that connects two dangerous cliffs. The answer is... YES! Easy, huh?
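A quick machine check of the textbook's composition (just a sketch; the lambda names are mine):

```python
# The building blocks from the exercise.
f1 = lambda x, y: x + y      # addition
f2 = lambda x, y: x * y      # multiplication
f3 = lambda x: 1 / x         # reciprocal

# Textbook's f4(x, y) = x - y: add x and (-1)*y.
f4 = lambda x, y: f1(x, f2(-1, y))
# Same trick for f5(x, y) = x / y: multiply x by 1/y.
f5 = lambda x, y: f2(x, f3(y))

print(f4(7, 3), f5(8, 2))  # 4 4.0
```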

- Sequence: “An ordered collection of mathematical objects. It can be finite or infinite, and can be written as {Xn}. We say the sequence is in a set if and only if every element of the sequence belongs to the set.”

What? What is a ‘mathematical object’? I have never eaten such a thing in my 21 years of life! So I looked it up and saw a lot of blah blah things. To sum it up in one line, a mathematical object is anything that is related to mathematics, which means it's the worst collection you can have! If someone said, "Tara, here's your Christmas gift: a mathematical object! It's the most practical collection in this world", "BANG!" I think I'd have enough reason to shoot him with my water gun. It's like a bad poison collection to me! Just listen to what they have in the collection, and you will nod your head: "Vectors, Sets, Points, Functions, Triangles, Topological Spaces... etc." PLEASE STOP. Just listing them gave me a headache. "BAM! BAM!" I reloaded my water gun for safety reasons.

- Sequence and Set: We said that a sequence is a collection of mathematical objects. But does anything come to mind when you hear "collection"? Well, it starts with 'S'... Yes, it's SET! Do you think a sequence can be called a set? I thought so, one minute ago. Then I looked at the textbook and felt let down. Yup, we can't call it that. Well, that's why we have different terms for them. The reason can be easily explained by this example.

E.g., Xk = (-1)^k then the sequence is like this,

-1, 1,-1, 1, -1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1, 1,-1…….

But, the set will be like this,

{-1, 1}

Now, tell me. Do they look the same? Nope. The sequence keeps its order and its repeats; the set doesn't. That's why I felt let down: a sequence is not a set.
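The same comparison in code (just an illustration):

```python
# First ten terms of x_k = (-1)**k, k = 1..10: order and repeats are kept.
seq = [(-1) ** k for k in range(1, 11)]
# The set of values the sequence takes: unordered, duplicates collapse.
values = set(seq)

print(seq)     # [-1, 1, -1, 1, -1, 1, -1, 1, -1, 1]
print(values)  # only two elements survive
```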

- But well, what about a sequence in a set?

Well, that may be a different argument from the one above. Saying the sequence is in a set just means every term belongs to the set (the terms can still repeat). Then we can view the sequence as a 'rule' that assigns to each index a member of the set, i.e., a function from the natural numbers into the set.

- Convergence of a sequence: What is a convergence of a sequence?

Well, check this out.

8, 4, 2, 1, 0.5, 0.25, 0.125………

What do you think?

It will eventually converge to ‘0’, since the formula for the sequence is 16/2^k.

So if a sequence **approaches** a unique number, we say the sequence converges. (Note that I used the term ‘approach’: the sequence doesn't need to contain 0 for us to talk about convergence to 0.)

The definition of convergence of sequence is like this.

A sequence Xn converges to “A” if and only if, for every ε > 0, there is a natural number ‘K’ such that for all ‘n’ > K, |Xn – A| < ε. Otherwise Xn diverges.

Does it sound “blah” to you? Well, I can give you a simple example. Let's say I claim that my test scores converge to 100 by the time I graduate from U of T. By saying that, I mean my scores approach 100 more closely than any other number; they're not approaching 99 or 99.99999! The number they approach MOST is 100. So give me any positive number (aka ε, epsilon)! Say you give me 0.00000000001 and ask, "Well, is there any point in your school life after which every score x satisfies 100 − x < 0.00000000001? Then I'd admit your scores approach 100." Then I'll just laugh and say, "Come on! Give me any number! Even if you gave me 0.000000000000000000000000000000000000000……000001, I can find some point in my school life after which every score X satisfies 100 − X less than that. So NOW tell me: is the sequence of my grades approaching 100?"

Can you say No? Think about it and compare my example to the definition if you want.
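The grade story is exactly the ε–K game from the definition. Here's a small sketch with the sequence 16/2^k from before (limit A = 0; the helper name is mine):

```python
def find_K(eps):
    """Smallest K with |16 / 2**k - 0| < eps for every k >= K.
    Works here because the sequence is positive and decreasing."""
    k = 0
    while 16 / 2 ** k >= eps:
        k += 1
    return k

# Whatever eps the challenger hands over, a cutoff exists past which
# every term is within eps of the limit 0.
for eps in (1.0, 0.01, 1e-9):
    K = find_K(eps)
    assert all(16 / 2 ** k < eps for k in range(K, K + 50))

print(find_K(0.01))  # 11, since 16/2**11 = 0.0078125 < 0.01
```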

- So we talked about the convergence of a sequence, but what about divergence? Divergence has a bit of a spectrum.

Yes, actually there are at least 2 interesting kinds of divergence. One is a sequence that goes to -∞ or +∞, and the other is oscillation (the graph looks like it's vibrating). What's the precise definition of Xn going to infinity? I'll show only the one for positive infinity. We say a sequence Xn goes to positive infinity if for every c there is a certain point, say K, such that Xn > c for all n > K. So give me any number, 10000000…..0000000000…00; I can find a point after which all the terms I have are bigger than that. Yup, undeniable. That's the charming point of mathematics.

- Do you remember that I said a sequence is a collection of any mathematical object? Let’s think of Xn as vector. Then, what is mean by Xn converges to L?

Before thinking about that, take a look at this cool inequality (1.3 in the textbook). Let's say we have |X - A| (I just stole it from the definition!) where X and A are both vectors. By the definition of convergence, |X - A| is eventually less than ε. Now expand the two vectors: |(x1-a1, x2-a2,….)|. Then by the 1.3 inequality, max(|x1-a1|, |x2-a2|, |x3-a3|, |x4-a4|,….) ≤ |(x1-a1, x2-a2,….)| < ε. Hence every component of X approaches the corresponding component of A.
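A numeric look at that inequality (with toy vectors of my own):

```python
import math

# Toy vectors: X is close to A componentwise.
X, A = (1.001, 2.0005, 2.9999), (1.0, 2.0, 3.0)

diffs = [abs(x - a) for x, a in zip(X, A)]
euclid = math.sqrt(sum(d * d for d in diffs))

# Inequality 1.3 in action: the largest componentwise gap is at most |X - A|,
# so squeezing |X - A| under eps squeezes every component too.
print(max(diffs) <= euclid)  # True
```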

Theorem 1.13

- Do you remember the closure of a set? From now on, we talk about what it means for a point to be in the closure of a set.

Sounds random? Yes, but it is related to what we just learned (i.e., sequences). The closure is like an entire house, including the walls (the boundary). What does it mean for something to belong to the house? Imagine a diamond in your wall. It belongs to your house, and how would you prove it? You'd probably get a drill and make a hole toward the diamond, and if you can reach the diamond from the inside rather than the outside, you can say the diamond belongs to your house. That's the same idea: if there is a sequence inside the set that approaches the point, then the point belongs to the closure of your set. If this example is too complicated for you, I can explain it in a different way. We need to start with what it means for x to belong to the closure of a set S. There are 2 cases: first, x can be an interior point of S; second, x can be in the boundary.

Let’s think about the first case.

So, suppose 3 is in the set S. You can find a sequence in S with limit 3: the constant sequence 3, 3, 3, … works, or anything like 2.9, 2.99, 2.999, … as long as it stays in S and goes to 3. So 3 is in the closure of the set S. Then, let's think about when 3 is in the boundary of S.

For the boundary case, I thought it's easier for both of us to think in R. So let the set S be (0, 3); 3 is obviously in the wall (the boundary). Saying 3 is in the boundary means you can draw balls of any size (look at the violet circles I drew) and all of them are half out of the set and half in the set. But look closely at the half containing the yellow dots. The yellow dots are interior points, and can you see that those yellow points approach 3? Yes, so we can take all the yellow boys and call them a sequence. So if 3 is in the closure of the set S, then we can find a sequence in S approaching 3.

Clear?

That’s the theorem 1.14 in the text book.

**My Proof for 1.14:**

Let's look at 1.15. It's a bit more developed than 1.14, but it's easy if you understand what continuity means.

Let's review what it means for a function to be continuous (picture the graph in R^2).

I think this picture captures what a continuous function looks like. It's saying: when x is fairly close to a certain point c, if f(x) comes close to f(c), then we can say f is continuous at c.

This is what theorem 1.15 says.

Can you see the similarity between them?

We can actually match the two statements piece by piece.

1. What does "x approaches c" mean?

↔ That's equivalent to: {Xn} converges to c. But why does the textbook say any {Xn}?

In the one-variable case (the graph lives in R^2), there is just the x-axis, and there are only 2 ways to approach a point: from the right side and from the left side. To check the continuity of the function we need to check both sides. But in the R^n case, we can approach a point from every side. (Think about the R^3 case, which represents our world: for a dot on a paper, how many ways do we have to approach it? A lot.) Hence they said any sequence {Xn}: we need the condition to hold for an arbitrary sequence {Xn}.

2. What does "f(x) approaches f(c)" mean?

↔ For every such {Xn}, f(Xn) converges to f(c).

It's essentially the same, right?

It's intuitively obvious that 1.15 is true, but can you prove it?

The way to prove it is fairly easy.

(A) For every sequence {Xn}, {Xn} -> c implies f(Xn) -> f(c), if and only if (B) f is continuous at c.

Assume f is continuous at c; then (A) follows from the definition of continuity. Since we're proving an equivalence, we need the other direction too. So take the negation of (B) and prove the negation of (A): if f is not continuous at c, you can build a sequence Xn -> c whose images f(Xn) don't converge to f(c). It's fairly easy to see, but for the readers and my mark, I'll post the picture of the proof. (Soon)
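To see the sequential test catch a discontinuity, here's a throwaway example (mine, not the book's): a step function and a sequence creeping up to 0 from the left.

```python
def step(x):
    """Jumps at 0: value 1 for x >= 0, value 0 for x < 0."""
    return 1.0 if x >= 0 else 0.0

xs = [-1 / n for n in range(1, 200)]   # a sequence converging to c = 0
fvals = [step(x) for x in xs]          # the images stay at 0...

# ...so f(Xn) does not go to f(0) = 1: the sequential test says "not continuous at 0".
print(fvals[-1], step(0.0))  # 0.0 1.0
```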

I just finished the proofs for 1.14 and 1.15 but they are a page for each.

Well. Let’s move on to the next section “Completeness”.

Completeness. Have you ever seen such a thing? Well, I haven't, myself included. (In fact, I'm one of the most incomplete beings in this world, I think.)

I have a roommate who works at a restaurant in the dessert section. He's such a nice roommate. Anyway, he sometimes burns his arms or cuts his hand by mistake, like most people who work in restaurants do. So one day, according to him, he was searching for a bandage to stop the bleeding from his hand. He saw the executive chef of the restaurant and asked him if he knew where the bandages were. The chef simply answered, "I don't know, because I don't cut myself." Isn't that cool? I think something like completeness or perfection is the limit value of your skills as your practice goes to infinity.

Anyway, that was my definition of completeness. But what's completeness in calculus?

Completeness in calculus roughly means there is no "hole" between the elements. Compared to my example, it would be "no mistakes". To me, it sounds like "continuity".

So a first guess: if you have a and b with b > a (i.e., any possibility of a hole), there should be a c in the set such that b > c > a (something to cover the hole). (Careful, though: that property alone is just density; the rationals have it and still have holes, like the one at √2. The actual axiom is about least upper bounds, below.)

There is a property expressing the completeness of R. We call it "The Completeness Axiom".

**The Completeness Axiom**

E.g., S = (2, 3). Then the supremum of S is 3, and the infimum of S is 2.

Sup S: the least upper bound of S (the smallest number that is ≥ every element of S).

Inf S: the greatest lower bound of S (the largest number that is ≤ every element of S).

Due to this completeness, we have a few theorems to study. Hate it? Well, you're the one who stepped into this hell. Welcome! (It's actually how I feel too. I'm in CS... I don't need to take this hard course for any reason. I just wanted to study deeper…)

The completeness axiom sometimes shows up together with compact sets.

It implies that a bounded set S has a sup(S) and an inf(S), and if we use the definition of sup(S), there must be sequences {x_n} in S converging to inf(S) and to sup(S). So if S is compact, then sup(S) and inf(S) are included in S.

The first theorem is 1.16, the Monotone Sequence Theorem.

What do **"the sequence is bounded"** and **"the sequence is monotone"** mean?

If every x belonging to {xn} is included in some interval (a, b), then we say {xn} is bounded. A monotone sequence is one that is either monotonically increasing or monotonically decreasing.

Let’s prove 1.16.
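While the written proof waits, here's a quick illustration of the theorem in action (my own example): x_1 = 0, x_{n+1} = sqrt(2 + x_n) is increasing and bounded above by 2, so the theorem promises a limit (which solves L = sqrt(2 + L), i.e., L = 2).

```python
import math

x = 0.0
terms = [x]
for _ in range(60):
    x = math.sqrt(2.0 + x)   # x_{n+1} = sqrt(2 + x_n)
    terms.append(x)

increasing = all(a <= b for a, b in zip(terms, terms[1:]))
bounded = all(t <= 2.0 for t in terms)
print(increasing, bounded, terms[-1])  # True, True, and a value essentially equal to 2
```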

There is another interesting theorem, called the Nested Interval Theorem.

My prof compared this theorem to killing a fly: you trap a fly between your two hands with some space between them, like this...

So if you keep bringing your hands closer until they meet, the fly will die…

Such a nice example right?

Let me prove this here.

The main point of this theorem is that the intersection of all the intervals is not empty, and (when the lengths of the intervals shrink to 0) it contains exactly one element.

Let’s take another theorem

This theorem actually uses 1.17 (The Nested Interval Theorem) in its proof.

Let’s take a look at 1.19

1.19 is the generalization of 1.18. But now I'm a bit confused by the part about putting together all the components via 1.18.

To get the last theorem for completeness, we need to know what a Cauchy sequence is.

The definition of Cauchy sequence in real number is like below.

So, it's a sequence whose terms approach each other, but we don't know whether it's decreasing or not. The general form looks like the picture above. (Can you see that all the terms are getting closer to each other?)

So Theorem 1.20 claims that a sequence of real numbers is convergent if and only if it is Cauchy.
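One subtlety worth a sketch (my example, not the book's): Cauchy means ALL far-out pairs are close, not just neighbors. Partial sums of 1/2^k pass that test; partial sums of 1/k have close neighbors but still fail it.

```python
def partial_sums(terms):
    s, out = 0.0, []
    for t in terms:
        s += t
        out.append(s)
    return out

geo = partial_sums([1 / 2 ** k for k in range(1, 200)])  # converges, so Cauchy
har = partial_sums([1 / k for k in range(1, 200)])       # diverges, so not Cauchy

eps = 0.01
geo_close = abs(geo[150] - geo[100]) < eps   # far-apart terms, still within eps
har_close = abs(har[150] - har[100]) < eps   # about ln(151/101) ~ 0.4: too far
print(geo_close, har_close)  # True False
```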

Cauchy sequences become important when we prove that lub(S) and glb(S) are in S if S is compact.

I’ll prove it later.

So, we're done with completeness. Completeness is all about "convergence" and "bounds".

So let’s go to the Chapter 1.6(Compactness)

We call a set compact if it's closed and bounded, like [2, 3]. As the book says, compactness is important because it yields important facts about limits. I don't know what that means yet, so why not take a look?

1.21 Theorem (the Bolzano-Weierstrass Theorem)

We call this theorem BW1 in my class because the full name is very inefficient to use.

Well, if S is compact, it's closed and bounded. Since it's bounded, every sequence in S has a subsequence that converges to some limit, and that limit lies in the closure of the set. Lastly, since S is closed, it contains all its closure points. Hence, if S is compact, then every sequence in S has a convergent subsequence whose limit lies in S. That was BW1.

I’ll post the proof later.

This is a different version of the explanation of the Bolzano-Weierstrass Theorem, from Wikipedia.

The interesting thing here was part A. I knew it states B, from the textbook, but the book didn't tell me that A can also be inferred from the theorem.

But just think: let S = {Xn} be the set of the terms; if the sequence Xn is bounded, then S is bounded.

If S is bounded, every sequence in it has a convergent subsequence.

The only sequence around is {Xn} itself, so Xn should have a convergent subsequence.

That's how I understood it, but I'm actually a bit uncomfortable with it.

******** we will think about it later

Do you remember Theorem 1.15? If you do, this one is rather easy to prove.

In 1.15, we got the sequential characterization of continuity in R^n.

In R^n, f is continuous at a iff whenever a sequence Xn converges to a, f(Xn) converges to f(a). Now take any sequence in f(S); it looks like f(Xn) for some sequence Xn in S. Since S is compact, Xn has a convergent subsequence whose limit a is in S. Since f is continuous, the images along that subsequence converge to f(a), which is in f(S). So every sequence in f(S) has a convergent subsequence whose limit is in f(S). Hence f(S) is compact.

1.23 Corollary

Since f(S) is compact, it is closed and bounded, so sup(f(S)) and inf(f(S)) exist by the completeness axiom. They are closure points of f(S) (there are sequences in f(S) approaching them), and since f(S) is closed, sup(f(S)) and inf(f(S)) belong to f(S). In other words, a continuous function on a compact set attains its max and min.

Let's go to 1.7. It's about "Connectedness". (Finally! Now I can solve problem set 3.)

The idea of connectedness is pretty easy. My textbook says the set should be all in one piece: if you put the parts together, they act as one. Like this.

How do we know it's one piece when we put the parts together?

Well, it's easier the other way around: we can tell that this can't be one piece when we put it together like this.

Let's say our set is the union of the man and the god, but they're separate (and that was also the point of this picture). So our set is not connected.

So now that we've got the basic idea, let's crash into the theorems!

To prove 1.25, we need to know what an interval in R is.

The idea of an interval is pretty simple: if you have points "a" and "b" in the interval S with a < b, then every c with a < c < b is also in S. So that means there's no hole in the interval. "No hole"... that reminds me of something. Yes, "Completeness"! Well, I don't know whether it's related or not, but we will see.

I just looked at the proof, and for 1.25 I think the idea of completeness isn't actually involved. I'll post it later.

It can be proved by using 1.14, which talks about limits and the boundary. I'll post the proof soon. (Before I have a quiz for MAT237.)

Did you forget what the Intermediate Value Theorem was? Oh well, not me this time! Yay. Hope you didn't, but I'll post it anyway for later use, for me or you.

So it means: if you have a continuous function f on [a, b] and f(a) < f(b), then for every M with f(a) < M < f(b) there should be a c with f(c) = M. For me, it looks like an extended version of completeness..

There is another important notion of connectedness. It's called "arcwise connected" or "pathwise connected".

"Arcwise connected" (or pathwise connected) is a much stronger version of connectedness.

The definition of “Arcwise connected” is like below.

So if you can always find a path (a continuous function connecting any two dots) inside the set, then it's a pathwise connected set.

We can see that arcwise connected is a stronger version of connected from theorem 1.28.

I'll post the proof soon, but before that: is the converse possible? The answer is "No". The example given in the textbook is like this.

You can't find a path (a continuous function) from S1 to S2, but the union is still connected, because there is a point of S2 that lies in the boundary of S1.

I found a really nice explanation for that and added my own explanation on top of it.

*http://planetmath.org/encyclopedia/TopologistsSineCurve.html*

You can find the detail from the link that I posted above.

We saw the relationship between arcwise connected and plain connected, and found out that not every connected set is arcwise connected. But here's the interesting part: every open and connected set IS arcwise connected. That's the last theorem we will look at about connectedness.

It works because an open set is a set of bubbles! You can make a bubble around any point inside and move around within the bubbles. But why not a closed set?

Well, we already saw the counterexample for 1.29. The reason is that one subset can contain boundary points of another subset, and if you can't reach such a boundary point with a continuous path, then the set can't be arcwise connected.

I mean, there is some chance that you can't include the boundary point in your bubble. But in the open case, by definition everything is inside a bubble, and you can make as many bubbles as you want, so you can travel everywhere. Let me prove this. (Post it later.)

**Side kick: Quotient rule **

**1.8 Uniform continuity(Not included in course material)**

The idea of uniform continuity is very simple. It means plain continuity + a sort of uniform control (so the function doesn't increase or decrease so rapidly that we lose control of it).

So for plain continuity, it's okay as long as you can draw the graph in one stroke with your pen, without stopping.

But for uniform continuity, the function can't go crazy anywhere.

Usually, when we talk about plain continuity, we talk about one point at a time, but this is about every pair of points of the function at once.

So like this 1) simple continuity 2) uniform continuity

So the given formula for this is like below.

So if you give me the length, i.e., how close the two points should be (δ), then I should be able to bound the difference of the two values (by ε), no matter where the two points sit. So, for example, x^2 is not uniformly continuous on R: keep the gap between x and y fixed, and as x, y -> infinity, the difference between f(x) and f(y) -> infinity too, so no single δ works for a given ε.
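A quick numeric version of that argument (sketch):

```python
# Fix the gap delta between two inputs and slide them to the right:
# |f(x + delta) - f(x)| = 2*x*delta + delta**2 for f(x) = x**2,
# which grows without bound, so no single delta can serve every x.
delta = 0.1
gaps = [abs((x + delta) ** 2 - x ** 2) for x in (1.0, 10.0, 100.0, 1000.0)]
print(gaps)  # roughly [0.21, 2.01, 20.01, 200.01]: blows up with x
```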

Let's go to **2.1 differentiability of one variable**, since 1.8 doesn't give you any marks!

Now our journey has begun. Bail out if you're scared! We're going to learn about differentiability (the real stuff of calculus).

Before jumping into R^n, let's first talk about R.

What was differentiability in R? (Something we did in the introductory calculus class.)

The formal definition is like below.

So let's call the red line l(x) (with l(x) = mx + b) and the blue curve f(x).

Then we require l(a) = f(a) (they agree at the point a).

So ma + b = l(a) = f(a).

Then b = f(a) - ma.

Then we can rewrite l(x) in terms of f(a) by a simple trick.

l(x) = m(x-a) +f(a)

So the gap between l(x) and f(x) at each point is

f(x) – l(x) = f(x) – m(x-a) – f(a)

'x - a' measures how far x is from a. So if it's far, the gap (we call this the Error) will be big.

So we want to focus on 'x - a': when we move some amount away from a, what's the error?

So let x - a = h (since it's our focus) and let the total gap be a function of h, E(h) (the error function): the outcome you get by plugging in h. It will be huge if the gap h is huge.

So E(h) = f(a + h) - l(a + h) (the total gap, or total error) = f(a + h) - mh - f(a).

So E(h) + f(a) + mh = f(a + h).

Yes, it’s like that above.

Saying f is "differentiable" means that if we make h small enough, we can approximate the values near a using a linear line. That means: when h -> 0, (f(a+h) - f(a))/h -> m for some constant m. And that means E(h)/h should go to 0.

The reason why is f(a+h) = f(a) + mh + E(h)

f(a+h) – f(a) – E(h) = mh

{f(a+h) – f(a) – E(h) }/h = m

For {f(a+h) – f(a)}/h to go to m, E(h)/h must go to 0.
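A quick sanity check of this with f(x) = x^2 at a = 1, where m = f'(1) = 2 and E(h) works out to h^2 exactly (my own sketch):

```python
f = lambda x: x ** 2
a, m = 1.0, 2.0

# E(h) = f(a + h) - f(a) - m*h; for this f it equals h**2, so E(h)/h = h -> 0.
ratios = [(f(a + h) - f(a) - m * h) / h for h in (0.1, 0.01, 0.001)]
print(ratios)  # about [0.1, 0.01, 0.001]: shrinking like h itself
```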

m is the slope of the linear line, but we call it "the derivative of f at the point a", or f'(a).

For E(h)/h -> 0 as h -> 0, E(h) should be o(h), "little o of h", as h -> 0.

The formal definition of “little o” is like this.

The explanation of E(h) from the text book is like this.

So now we've talked about the derivative; what can we do with it?

We can get the local maximum or local minimum!

And proposition 2.5 is related to Rolle's theorem.

So in the given situation (continuous on [a, b], differentiable on (a, b), and f(a) = f(b)), there should be a c such that f'(c) = 0, and by proposition 2.5 it sits at either a local maximum or a local minimum.

The reason why the continuity interval is closed rather than open is that the case below can happen otherwise.

Another theorem is “Mean Value Theorem”

It's sometimes called the "Police theorem", as I remember, according to my MAT137 prof.

Because if you drove your car 10 km in 6 minutes, your average speed was 100 km/h, and that means there was a moment when your speed was exactly 100 km/h! So if the police know this theorem, you will be caught!

What other information can we derive from the derivative?

Theorem 2.8 gives 3 things we can derive..

b is simple enough, but 'a' and 'c' involve some proof work. So I might post those proofs later.

Okay, Okay. We’ve done this in MAT137( the introductory calculus course)!

Any new stuff in 2.1?

There is! It is called "vector-valued functions".

What does that mean?

I’m talking about a function which looks like this!

f= (f_1, f_2,…………….)

we can define the derivative of this in the same manner for each component in f

so f ‘ = (f ‘_1, f ‘_2,…………….)

So all the component should be differentiable.

So what’s the difference between “Vector Valued functions and just a simple function in R?”

There are theorems that we can't generalize from functions in R to vector-valued functions.

Like Rolle's Theorem: it claims there is a c with f'(c) = 0, but come on, f'(c) is now a vector, and asking it to be the zero vector can fail even when the endpoints agree.

So when we're dealing with vector-valued functions, it's better to be careful about this.
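A concrete counterexample (a standard one, though not necessarily the book's): f(t) = (cos t, sin t) on [0, 2π] has f(0) = f(2π), but f'(t) = (-sin t, cos t) always has length 1, so it is never the zero vector and Rolle's conclusion fails.

```python
import math

f = lambda t: (math.cos(t), math.sin(t))          # a loop around the unit circle
fprime = lambda t: (-math.sin(t), math.cos(t))    # its derivative, computed by hand

endpoints_agree = all(abs(u - v) < 1e-12 for u, v in zip(f(0.0), f(2 * math.pi)))
speeds = [math.hypot(*fprime(t)) for t in (0.0, 1.0, 2.0, 3.0, 6.0)]

print(endpoints_agree, min(speeds))  # True and ~1.0: f' never vanishes
```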

2.2 Differentiability in Several Variables.

So we've talked about functions R -> R and a very little bit about functions R -> R^n (vector-valued functions). In this chapter, we will talk about R^n -> R.

So, like this: f(x, y) = x^2 + y^2. It makes a bowl, like this!

But how would you differentiate the bowl at one point? There are too many ways to approach the point. Also, going from one point to another, the graph doesn't have to approach in an efficient way; it can take a detour in some cases. My prof actually acted like a drunken person to explain the ways of approaching a point on this kind of graph.

So from point A to B, if we keep increasing x and y to approach B, the path might look like the blue line, but if we only consider the points approaching B, it will be the red dots.

This all happens because there are too many uncontrolled variables. So what we do is take partial derivatives.

The way to take partial derivative is like this.

You just take the derivative with respect to the j-th variable only, treating the others as constants.

The common notation is like above.

The last two are the most commonly used.

But what does it mean for f (R^n -> R) to be differentiable?

If all the partial derivatives exist, then it's differentiable?

Actually, No.

Take some example.

So if you take the partial derivatives at 0, f_x(0,0) = 0 and f_y(0,0) = 0.

So does that mean the gradient of f at 0 is (0,0)?

No, it’s impossible to have a derivative at 0 for this function because it’s not continuous.

The graph of this function for 0<x<1 and 0<y<1 looks like this.

So taking the derivative with respect to x or y gives you 0, but that doesn't capture all the information about the derivative of f at 0 (because the behavior differs by direction). Note: G stands for the gradient.
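The example function itself is in a figure I can't reproduce; a standard function with exactly this behavior (and likely the one meant) is f(x, y) = xy/(x^2 + y^2) with f(0, 0) = 0. A small check:

```python
# A standard example (assumed here; the original figure is missing) of a
# function whose partial derivatives at the origin are both 0 even though
# the function is not continuous there:
#     f(x, y) = x*y / (x^2 + y^2)  for (x, y) != (0, 0),   f(0, 0) = 0.

def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

# Along the axes f is identically 0, so both partials at the origin are 0 ...
for h in (0.1, 0.01, 0.001):
    assert f(h, 0) == 0.0 and f(0, h) == 0.0

# ... but along the diagonal y = x the value is constantly 1/2, so f has
# no limit at the origin: the value you approach depends on the direction.
for h in (0.1, 0.01, 0.001):
    assert abs(f(h, h) - 0.5) < 1e-12
```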

So what condition should hold for f to be differentiable at a point a?

The partial derivatives should exist on some neighborhood of a and be continuous at a.

Why..?

It's because we need to use the Mean Value Theorem on some neighborhood of a, and then use continuity at the point, to prove f is differentiable at a.

Unless you see the proof (at least for me), it's hard to grasp.

I’ll post the proof soon. (Some proof are done with my hands but hard to find some time to scan those things and post it, actually)

We call the assumption part of 2.19 the class C^1 (or just C^1).

C^1: A function f whose partial derivatives all exist and are continuous on an open set S is said to be of class C^1 on S. For short, "f is C^1 on S."

and 2.17 and 2.19 say:

If f is C^1 => f is differentiable => the partial derivatives exist.

The implications don't go the other way.

We checked that even if the partial derivatives exist, f can fail to be differentiable.

But is there an example where f is differentiable and C^1 doesn't hold?

A first guess might be

f(x) = 1/2*x^2 (x >= 0), f(x) = -1/2*x^2 (x < 0),

but that doesn't work: its derivative is f'(x) = |x|, which is continuous, so this f is actually C^1 (it just isn't C^2). A correct example is

f(x) = x^2 * sin(1/x) for x != 0, f(0) = 0.

Here f'(0) = 0 exists, but f'(x) = 2x*sin(1/x) - cos(1/x) has no limit as x -> 0, so f' is not continuous at 0 and f is not C^1.
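A classic function that is differentiable everywhere but not C^1 is f(x) = x^2 sin(1/x) with f(0) = 0: the difference quotients at 0 converge to 0, but f' keeps hitting values near -1 and +1 arbitrarily close to 0. A numeric sketch (the probe points are my own):

```python
import math

# f(x) = x^2 * sin(1/x) (with f(0) = 0) is differentiable at 0,
# but f'(x) = 2x*sin(1/x) - cos(1/x) has no limit as x -> 0,
# so f is differentiable without being C^1.

def f(x):
    return 0.0 if x == 0 else x * x * math.sin(1 / x)

def f_prime(x):                     # valid for x != 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Difference quotients at 0 are bounded by |h|, so f'(0) = 0 exists:
for h in (1e-2, 1e-4, 1e-6):
    assert abs((f(h) - f(0)) / h) <= abs(h)

# But f' keeps hitting values near -1 and +1 arbitrarily close to 0:
x_near_minus1 = 1 / (2 * math.pi * 100)           # cos(1/x) = 1  -> f' ~ -1
x_near_plus1 = 1 / (2 * math.pi * 100 + math.pi)  # cos(1/x) = -1 -> f' ~ +1
assert f_prime(x_near_minus1) < -0.99
assert f_prime(x_near_plus1) > 0.99
```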

Okay, let’s move on.

So, up to now we talked about differentiability of f.

When f is differentiable at a, if we zoom in on the function at a a lot, it should look like a straight line (or a flat plane, in higher dimensions).

But, have you heard of differential?

The differential is (a linear approximation of) the difference of the two values f(a+h) and f(a).

We said that f(a+h) - f(a) = mh + E(h), where m is the gradient of f, when f is differentiable at a.

E(h) -> 0 faster than |h| (i.e., E(h)/|h| -> 0), so we can neglect it.

mh would be exactly the difference between f(a+h) and f(a) if the graph were a hyperplane;

realistically it's the linear approximation given by the tangent plane.

So we talked about partial derivatives and all the differentiability stuff.

Differentiability takes all directions into account, but what if we want the derivative in a certain direction that is not an axis?

So look at a point a: when we take the partial derivative, we fix y at some number and take the derivative in x. But what if we want the derivative in direction 1, 2, 3, or 4?

Then we now use the “Directional Derivatives”

Let’s look at the definition of it.

So it's essentially the same as lim_{h -> 0} (f(a + h) - f(a))/h, but

instead of h we have a unit vector u that carries the direction we want to go, and t is just a parameter that makes the difference go to 0, like h -> 0.

So if you put u = (1, 0, 0, …, all zeros), then you're getting the partial derivative d_1 f,

because a + tu = (a_1 + t, a_2, a_3, …) and |tu| = |t||u| = |t| * 1 = |t|.

So now we know how to take the derivative in any direction we want.

But what’s the relationship between differentiability and directional derivative?

There is theorem 2.23 about it.

Isn't this an amazing theorem? It says that if f is differentiable at a, then you can get any directional derivative by a simple calculation!
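Here is a numeric sketch of theorem 2.23 on an example of my own choosing, f(x, y) = x^2 * y: the difference quotient in the direction u agrees with the dot product grad f(a) . u.

```python
import math

# Theorem 2.23, numerically: if f is differentiable at a, the directional
# derivative in direction u equals grad_f(a) . u.
# Example function (my choice, not the book's): f(x, y) = x^2 * y.

def f(x, y):
    return x**2 * y

def grad_f(x, y):                # exact gradient: (2xy, x^2)
    return (2 * x * y, x**2)

def directional(f, a, u, t=1e-6):
    """Difference quotient (f(a + t*u) - f(a)) / t for small t."""
    return (f(a[0] + t * u[0], a[1] + t * u[1]) - f(*a)) / t

a = (1.0, 2.0)
u = (1 / math.sqrt(2), 1 / math.sqrt(2))   # a unit vector

lhs = directional(f, a, u)                 # directional derivative, numerically
rhs = grad_f(*a)[0] * u[0] + grad_f(*a)[1] * u[1]   # grad . u
assert abs(lhs - rhs) < 1e-4
```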

The reason is pretty simple, I’ll post the proof later.

The interesting stuff for this night is this,

Since by 2.24,

So by this, the gradient of f at a points in the direction of steepest ascent: the directional derivative is largest in the gradient's own direction.
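A brute-force check of the steepest-ascent claim (my own sketch): scanning unit directions u = (cos θ, sin θ), the dot product grad . u is maximized when u is parallel to the gradient, and the maximum equals |grad| (this is just Cauchy-Schwarz).

```python
import math

# Among all unit vectors u, the directional derivative grad . u is largest
# when u points along the gradient, and that maximum equals |grad|.

grad = (3.0, 4.0)                      # pretend this is grad_f(a)
grad_norm = math.hypot(*grad)          # = 5.0

best_value = -float("inf")
best_angle = None
for k in range(3600):                  # scan unit vectors u = (cos t, sin t)
    theta = 2 * math.pi * k / 3600
    u = (math.cos(theta), math.sin(theta))
    value = grad[0] * u[0] + grad[1] * u[1]
    if value > best_value:
        best_value, best_angle = value, theta

# The maximum is |grad| and it occurs where u is parallel to grad:
assert abs(best_value - grad_norm) < 1e-3
assert abs(math.cos(best_angle) - grad[0] / grad_norm) < 1e-2
```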

So Now, I think it’s time to start our journey to 2.3

It’s about Chain Rule

Chain rule is such an easy idea. If you understand how differentiation works, then you'll see it's all the same.

You just have a long “chain – like” input.

There is only one version for single-variable functions R -> R, but now we need to know several versions of it.

1. z = f(x,y) and, in turn, x = g(t) and y = h(t).

This is very similar to the differential formula,

because the differential is just the difference, and we want to see the ratio of that difference from the perspective of t.

Can you hear the formula talking to you?

(For me, Not really.)

I was just imagining, my prof would ask like that kind of the question.

The proof is all about substitution. Today my prof said that the proof is just a translation, done three times.

Let’s take some example of this.

For this kind of question, you need to figure out which type it falls under.

x = g(t), y = h(t), and z = f(x,y).

So It’s the first case that we just saw.

Did you get the answer..?

It’s 6.
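The worked example lives in an image I can't reproduce, so here is a stand-in of my own for chain-rule version 1: with z = xy, x = t^2, y = t^3 we have z = t^5, and the chain rule reproduces dz/dt = 5t^4.

```python
# Chain-rule case 1: z = f(x, y) with x = g(t), y = h(t).
# Stand-in example (my own numbers, not the book's):
#   z = x*y, x = t^2, y = t^3, so z = t^5 and dz/dt = 5*t^4.

def dz_dt_chain(t):
    x, y = t**2, t**3          # intermediate variables
    dz_dx, dz_dy = y, x        # partials of z = x*y
    dx_dt, dy_dt = 2 * t, 3 * t**2
    # dz/dt = (dz/dx)(dx/dt) + (dz/dy)(dy/dt)
    return dz_dx * dx_dt + dz_dy * dy_dt

def dz_dt_direct(t):           # z(t) = t^5, so dz/dt = 5*t^4
    return 5 * t**4

for t in (0.5, 1.0, 2.0):
    assert abs(dz_dt_chain(t) - dz_dt_direct(t)) < 1e-12
```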

Let’s take a look at the chain rule #2.

The best way to understand this is like below.

So just think of it like cogwheels that work together.

We call s and t the independent variables.

We call x and y the intermediate variables.

And at last, z is the dependent variable.

We just looked at where there are 2 independent variables and 2 intermediate variables.

But, what if there are n independent variables and m intermediate variables?

It’s just generalization of the 2nd version.

(C)

x_1 is a function of (t_1, t_2, …),

hence dx_1 equals the expression below.

(A)

and w is a function of (x_1, x_2, x_3, …), hence dw is like this:

we can substitute each dx_j with an expression in the dt's, as in (A).

Then we got (C).

So let's go back to (C).

If we take the partial derivative of w with respect to t_1, then dt_n for every n != 1 is 0.

So only the terms with dt_1 survive, and dt_1 cancels because we're dividing by it,

and it will be like this below.

and the thing in the box looks exactly like
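The formulas (A) and (C) are images in the original post; reconstructed from the surrounding derivation (my best reading, with n independent variables t_j and m intermediate variables x_i), they should be:

```latex
% (A): each intermediate variable, e.g. x_1, depends on the t_j's
dx_1 = \frac{\partial x_1}{\partial t_1}\,dt_1
     + \frac{\partial x_1}{\partial t_2}\,dt_2
     + \cdots
     + \frac{\partial x_1}{\partial t_n}\,dt_n

% dw in terms of the intermediate variables x_i
dw = \sum_{i=1}^{m} \frac{\partial w}{\partial x_i}\,dx_i

% (C): substitute (A) (and its analogues for x_2, \dots, x_m) into dw,
% then read off the coefficient of dt_1:
\frac{\partial w}{\partial t_1}
  = \sum_{i=1}^{m} \frac{\partial w}{\partial x_i}\,
    \frac{\partial x_i}{\partial t_1}
```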

Let’s take a look some example.

This is the tree for the thing below

And this is the tree for below.

Derivative of a function with mixed levels

So how can we handle this?

We can handle it in this way.

Just think t as **“u(t) = t”**

then it’s like this!

But this is problematic if there is one more independent variable, like the case below.

It's a valid way to get a partial derivative, but we need to come up with a different notation to distinguish the green circle from the red circle.

For the red circle we can actually write dw/du where u(t) = t, but..

Yeah. The textbook introduces a new notation, which sucks.

So let’s take a look at…

That looks super ugly, I thought…

and the textbook also agreed..

So what if a super messy and complex function comes up and you have to solve it?

Well, there is a process that might help you.

So it’s the same process that we did.

In the text book, there is a theorem (2.36) called Euler’s Theorem.

It's really easy to prove, but I don't think it's much related to the subject of this chapter.. But I guess we can still use this idea with what we've learned about multivariable calculus.
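Euler's theorem says that if f is homogeneous of degree k, i.e. f(tx, ty) = t^k f(x, y) for t > 0, then x*f_x + y*f_y = k*f. A sanity check on a degree-3 example of my own choosing:

```python
# Euler's theorem for homogeneous functions: if f(t*x, t*y) = t^k * f(x, y)
# for all t > 0, then x*df/dx + y*df/dy = k*f(x, y).
# Sanity check with f(x, y) = x^3 + x*y^2, homogeneous of degree k = 3.

def f(x, y):
    return x**3 + x * y**2

def df_dx(x, y):
    return 3 * x**2 + y**2

def df_dy(x, y):
    return 2 * x * y

k = 3
for (x, y) in [(1.0, 2.0), (-0.5, 3.0), (2.0, -1.0)]:
    # homogeneity: f(t*x, t*y) = t^3 * f(x, y)
    t = 1.7
    assert abs(f(t * x, t * y) - t**k * f(x, y)) < 1e-9
    # Euler's identity: x*f_x + y*f_y = 3*f
    assert abs(x * df_dx(x, y) + y * df_dy(x, y) - k * f(x, y)) < 1e-9
```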

Since we're also learning topology at the end of this chapter, the textbook talks about the topology part.

This is my reasoning about the topology part.

Today, my prof talked about this, and I didn't get it at first, but now I can say I understand.

**Note:** The things that I wrote above might be wrong. They're just for self-reference (for an advanced calculus course at U of T). If you find something to correct, just let me know.