I just solved this problem with my tablet, so I’ll upload it as an image.

(: That’s it and I’ll upload more problem solving before the coming Wednesday.

Welcome to my chaos

03 Dec 2012 Leave a comment

in illogical review of CSC236(SLOG)

I just solved this problem with my tablet, so I’ll upload it as an image.

(: That’s it and I’ll upload more problem solving before the coming Wednesday.

08 Nov 2012 Leave a comment

in illogical review of CSC236(SLOG)

I searched the Internet and found a nice write-up. So my post will basically explain the information from it.

The link is -> http://www.cs.ucsb.edu/~suri/cs235/ClosestPair.pdf

————————————————————————————————

It’s a pretty interesting topic in Divide and Conquer (let’s call it DC), I think.

You need it to solve many problems.

For example, to make a nice navigation system, you want to know the fastest way to get to some point from your present position. Something like that.

There can be many approaches to solve this problem.

One of the most naive ones will take O(n^2):

for each dot, you check the distance from it to every other dot.
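As a sketch, the naive approach could look like this in Python (the function name and the sample dots are mine, just for illustration):

```python
from itertools import combinations
from math import dist

def closest_pair_naive(points):
    # Check the distance between every pair of dots: O(n^2) comparisons.
    return min(combinations(points, 2), key=lambda pair: dist(*pair))

dots = [(0, 0), (5, 5), (1, 1), (9, 2)]
print(closest_pair_naive(dots))  # ((0, 0), (1, 1))
```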

Can we do better than this?

Yes, by using DC!

Let’s think about the 1-d case first.

First, we sort the points and divide them in half.

Second, get the pair with the minimum length from the left and another pair from the right.

So now you might want to compare the two and take the minimum, but…

Wait!

There might be a case where one point from one side and one point from the other side form the closest pair!

Let’s say “%” is the minimum of the lengths we found from S1 and S2. Then we just need to check the distances between dots in the range (m-%, m+%), where m is the mid line.

You might say,

“Well… what if there are a lot of dots in the S1 side that are in the range?”

No! There can be only one on each side…

If there were more than one on one side, then that side would contain a pair with length smaller than %, which contradicts our observation.

So there is only one dot in (m-%, m] and another one in [m, m+%), so we need to check only two dots.

So the pseudo code is like below.
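A Python sketch of the same idea (my own code; it assumes the input list is already sorted):

```python
def closest_pair_1d(xs):
    # xs: sorted list of numbers; returns the smallest gap between any two.
    n = len(xs)
    if n < 2:
        return float("inf")
    if n == 2:
        return xs[1] - xs[0]
    mid = n // 2
    d_left = closest_pair_1d(xs[:mid])    # closest pair inside S1
    d_right = closest_pair_1d(xs[mid:])   # closest pair inside S2
    d_cross = xs[mid] - xs[mid - 1]       # the only crossing pair that matters
    return min(d_left, d_right, d_cross)

print(closest_pair_1d(sorted([4, 9, 1, 7])))  # 2 (the pair 7 and 9)
```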

The combine step of Closest-Pair is O(1), and we need to scan all the input to split it at the mid point, which is O(n) work, so the total running time looks like this.

T(n)= 2*T(n/2) + O(n)

By using the master theorem,

a = 2, b = 2 and d = 1, hence Θ(n log n).

The 2-d case is very similar to this; the only reason I’m doing it is for my term test.

So yes, as you can see, now it’s 2-d, but it’s all the same. (Except for one thing, actually.)

We just need to make a mid line according to one coordinate.

Either x or y, it doesn’t really matter.

The thing that changes is the number of dots that can be around the mid line when we need to check them. But don’t worry, the points in P1 and P2 have a rather interesting feature.

To find a point that could form the closest pair with a point on the other side, p has two boxes to check (i.e., the box above and the box below on the other side).

And each box can only contain up to 6 points.

For one side there can be n/2 points, and for each of those n/2 points there are at most 6 points on the other side to get the distance from.

Nasty! Right?

I do agree. CS is very powerful but super Nasty….. :S

But it will pay off… hopefully!

If you ask why there are n/2 points on one side, just think: we divide the n points in half, so each side has n/2 points, and in the worst case they are all within delta of the mid line.

So the comparing takes 6 * (n/2) steps, hence it’s O(n).

So T(n) = T(n/2) + T(n/2) + O(n)

T(n/2) => get the closest pair from one side
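Putting the 2-d pieces together in Python (my own sketch following the argument above, not code from the linked notes):

```python
from math import dist, inf

def closest_pair_2d(points):
    # Returns the smallest distance between any two 2-d points.
    pts = sorted(points)              # sort by x once
    return _closest(pts)

def _closest(pts):
    n = len(pts)
    if n <= 3:                        # base case: brute force
        return min((dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]),
                   default=inf)
    mid = n // 2
    mid_x = pts[mid][0]
    d = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # The strip of points within d of the mid line, sorted by y.
    strip = sorted((p for p in pts if abs(p[0] - mid_x) < d), key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:  # only a handful of candidates can beat d
            d = min(d, dist(p, q))
    return d

print(closest_pair_2d([(0, 0), (3, 4), (1, 1), (10, 10), (1.5, 1.2)]))
```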

Okay, after 2-d we could go on to n-d, but I don’t want to right now…

This is done for today!

08 Nov 2012 Leave a comment

in illogical review of CSC236(SLOG)

Yesterday, I watched a YouTube video of an introductory statistics class at Harvard. The prof in the video said, “The best way to solve any problem is to divide it into small pieces, work them out, and combine them later on.”

Yes, I do agree. Bit by Bit.

So the algorithm I’ll talk about is exactly the same idea.

You just divide the problem into a few pieces, solve them, and put them back together.

There are a few interesting problems we can look at.

1) Multiply lots of bits

In our computer, we can’t store an enormous integer due to storage limitations.

So if your program was multiplying two integers and you got two integers of huge length, how would you treat them?

You can divide the work, calculate a small amount and store it first, then proceed until you reach the end. At last, you can combine all the little solutions into one piece.

Let’s get to the detail.

Let’s say your computer can only store 2-digit chunks like 11, 22, 01 at one time, and you want to compute the multiplication of 1101 and 1011.

How would you do that?

First, you break 1101 into 11, 01 and 1011 into 10, 11, and you have 4 pieces.

Then you multiply them piece by piece and put each product back in its position later.

So the formula for doing that is x*y = (x1*y1)*2^n + (x1*y0 + x0*y1)*2^(n/2) + (x0*y0).

n is 4 here because it’s a 4-bit integer, i.e., 1101 is composed of 4 bits.

You might feel frustrated when you look at 2^n and 2^(n/2). This is just messy stuff, I know. But just think: x1 actually stands for 1100 = 11 * 2^2 (where 2 = n/2), and y1 actually stands for 1000 = 10 * 2^2,

but we treat them as if they were just 11 and 10. We need to pay back what we borrowed from them.

Since we borrowed half from each, after the multiplication the debt becomes 2^n (because n/2 + n/2 = n).

So x1*y1 gets 2^n; for x1*y0 we only mistreated x1, and for y1*x0 we only mistreated y1, so we pay back 2^(n/2) there.

It’s like that.

Seems darn boring. But there’s an interesting trick with this.

It’s called “Gauss’ trick”.

In the original formula,

we need to compute 4 products, but by using the trick we only need to compute 3.

So if you have two n-bit integers, you need n^2 bit multiplications, but by using the trick, you get to do less work.

That’s some sort of saving, but it doesn’t look like much at first, because you need to write a bit more code and each step only saves one multiplication.
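Here is how I would sketch the trick in Python (my own code; the shifts are the “pay back” of 2^n and 2^(n/2) from before):

```python
def karatsuba(x, y):
    # Multiply two non-negative integers with 3 recursive products
    # instead of 4, thanks to Gauss' trick.
    if x < 10 or y < 10:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = x1 * 2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x1, y1)                       # high * high
    b = karatsuba(x0, y0)                       # low * low
    c = karatsuba(x1 + x0, y1 + y0) - a - b     # = x1*y0 + x0*y1, one product!
    return (a << (2 * half)) + (c << half) + b

print(karatsuba(0b1101, 0b1011))  # 143, i.e. 13 * 11
```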

Well, it sounds a bit boring. Hmm.

Well. Next page might be more interesting I guess.

In the next problem, you will try to look for the closest pair of dots.

Sounds more fun?

Yay………………………

03 Nov 2012 Leave a comment

in illogical review of CSC236(SLOG)

I’m so lost in the lecture of correctness.

We didn’t go through to the end (to the point about loop invariants), but I felt a bit lost.

Actually, I wonder whether it’s practical, because it takes so much time to prove something about even one merge sort. Well, anyway, let’s start. (At least it will give me a mark.)

Well, for me, when I think about “correct”, the first thing that comes to my mind is AUTOCORRECT, lol.

LoL, but it, of course, means something different in Computer Science.

A program is a function. You give it an input and it gives you an output.

Let’s say it’s supposed to get sugar from you and give back a candy.

But what if you put in salt by mistake? Even if I can get a candy from the program, I don’t think it’s worthwhile to try.

So, here we go: **Precondition** is the correctness of the input.

You must give it sugar, not salt nor white beach sand.

So if you give the correct input, you must get a correct output, which is a candy.

But what if you got a… trashed report card instead? It might remind you of your old days.

Oh, that’s no good.

So the correctness of the output is the **Postcondition**.

So it looks like we got everything that must be “correct” when we’re running a program.

But what if, after you put the sugar in, you waited and nothing came out?

Like a jammed printer.

How long are you going to wait for a silly candy?

while sleeping?

The program must exit at some point (i.e., it must give me a candy!).

So, correctness includes the termination of the program.

So saying that a program is correct means: whenever the precondition holds, the program terminates and the postcondition holds.

Easy, right?

This is the basic of correctness!
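Just as a toy sketch of these three pieces (the candy machine is obviously made up; the asserts stand in for the conditions):

```python
def candy_machine(sugar_grams):
    # Precondition: the input is sugar, i.e. a positive amount.
    assert sugar_grams > 0, "precondition violated: give me sugar!"
    candies = int(sugar_grams // 10) + 1
    # Postcondition: at least one candy comes out.
    assert candies >= 1, "postcondition violated: where is my candy?"
    # Termination: no loops, no recursion, so it always exits.
    return candies

print(candy_machine(25))  # 3
```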

26 Oct 2012 Leave a comment

in illogical review of CSC236(SLOG)

Let’s put our bad soap drama on pause for a moment and get some course material.

Why? Simply because I need to study it for the upcoming assignment. Moreover, it’s a pretty interesting topic.

By the way, have you ever worked in the factory?

Well, I have, for a day. (It was a day job, which was darn hard. My mom wanted me to earn money by myself after I graduated from high school, so I had to work if I needed money.)

As I remember from that day, how a factory works is pretty close to the algorithm of “Divide, Conquer and Combine”.

“Divide, Conquer and Combine” is an approach for solving problems in which we can somehow sense something “recursive”.

The way the algorithm works is simple. If you find a messy problem, don’t try to deal with it in one go. Divide the work into several small pieces, conquer the pieces one by one, and combine them nicely. Sounds like a factory?

What a factory does is divide the work by type and also divide the people for each task. Then each group conquers the specific work assigned to it, and after that they combine the work into a “Product”. So in a car factory, one group of people takes care of the wheels and another paints the cars… after they’re done, they put it all together to make a car.

That’s how divide, conquer and combine algorithm works.

Well, wait a minute, what is recursion?

Recursion is a very simple and fun idea. It’s essentially doing the same thing over and over.

So it’s like you go into a room by opening a door, but there is another door inside, so you go into another room, and so on. You keep doing the same thing over and over.

Have you ever drawn something while somebody was talking to you?

Well, I’ve never done that, but when I was in a lecture… yes.

I’m pretty sure I drew something like this:

I drew a big rectangle and drew another one inside… the “same thing” over and over.

That’s recursion.

Can you imagine when I’ll finish my job of drawing the rectangles?

Well, if there’s no space left to draw, then I’m finished.

But careful! Some recursions can finish and some can’t. Like the function below.

def endless_love(your_name, i=1):
    print("This is %s time(s) that I was born and fall in love with %s" % (i, your_name))
    print("My life is done and if I reincarnate, I hope to meet %s again!" % your_name)
    endless_love(your_name, i + 1)

If I call my endless_love function (aka the stalker function) like this, endless_love("Adam"), it will print out like below.

This is 1 time(s) that I was born and fall in love with Adam

My life is done and if I reincarnate, I hope to meet Adam again!

This is 2 time(s) that I was born and fall in love with Adam

My life is done and if I reincarnate, I hope to meet Adam again!

This is 3 time(s) that I was born and fall in love with Adam

My life is done and if I reincarnate, I hope to meet Adam again!

and so on, endlessly.

Yup, this is a bad example of recursion, because of its cheesiness and also because it can never finish.
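For contrast, a recursion that can finish only needs a base case (another made-up sketch, of course):

```python
def finite_love(your_name, lives=3):
    if lives == 0:    # base case: no more lives, so the recursion stops
        return
    print("I love you, %s! (%d lives left)" % (your_name, lives))
    finite_love(your_name, lives - 1)

finite_love("Adam")  # prints three lines, then stops
```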

By the way, you might ask, “What kind of problem can we solve using recursion?”

It’s just a problem that you can divide into pieces such that the pieces keep the properties the whole used to have. Hard? Think about tofu.

You can divide the tofu into pieces. After you divide it, I dare to ask you what the pieces are.

“Well, it’s still tofu.”

Yes. That’s recursive problem.

Can’t believe me? Well, I can give you a more boring example, which some people find more sensible and logical.

Let’s say you have a function that finds the maximum element in an array. Below is the pseudo code for it.

Get-maximum(array A){
    if A.size == 1:
        return A[0]
    A-half1 = A[: A.size / 2];
    A-half2 = A[A.size / 2 :];
    return max{ Get-maximum(A-half1), Get-maximum(A-half2) };
}

By using this code, you can get the maximum element of an integer array. By writing it, I just showed you how the get-maximum problem can be a recursive problem. The reason is that when I chopped the array in the code into A-half1 and A-half2, the two pieces were still arrays like A (just as half a tofu is still tofu). Hence I could call the Get-maximum function recursively on those pieces.

Since we can solve the problem recursively, we can also set up a formula for the running time of this kind of problem recursively, like below.

case 1: T(n) = aT(n/b) + f(n)

or

case 2: T(n) = aT(n/b) + cT(n/c) + f(n)

Well, there can be more cases, but most of the time in CSC236 I think it’s enough if we can deal with these ones.

So case 1 says there are “a” sub-cases of size n/b, and case 2 says there are “a” sub-cases of size n/b and “c” sub-cases of size n/c. That’s it.

Solving a problem using the DCC (“divide, conquer and combine”) algorithm is useful and not that hard. So it’s great. Let me move on to the master theorem; after I explain what the master theorem is, I’ll give you an example that mixes the DCC algorithm and the master theorem.

Well, master…? What it reminds me of is…

Kung-fu master?

In the recursion world, it has some similarity with this cute panda.

First of all, it’s kick-ass: you don’t even need to think if you have this theorem with you. Secondly, it’s not perfect. There’s a limitation, just as our cute panda is not always smart and is sometimes a bit clumsy. (But still, they are kings in their world!)

Let’s see what our master of the recursive world looks like.

If we can express the running time formula for our recursive problem like this,

T(n) = aT(n/b) + f(n) where a>=1 and b>1 then we can use the master theorem to get the T(n).

The limitation of the theorem is that the running time formula must be in the form I introduced above.

If f(n) = Θ(n^d), then T(n) is:

Θ(n^d) if a < b^d

Θ(n^d * log n) if a = b^d

Θ(n^(log_b a)) if a > b^d

So you can get the running time of the recursive problem very easily with this formula.
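As a sketch, the three cases can even be wrapped in a little Python helper (my own, just to play with):

```python
from math import log

def master_theorem(a, b, d):
    # T(n) = a*T(n/b) + Θ(n^d), with a >= 1 and b > 1.
    if a < b ** d:
        return "Θ(n^%g)" % d
    if a == b ** d:
        return "Θ(n^%g * log n)" % d
    return "Θ(n^%g)" % (log(a) / log(b))   # n^(log_b a) when a > b^d

print(master_theorem(2, 2, 1))  # merge sort: Θ(n^1 * log n)
```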

Here’s a link to the proof of the theorem, which I’ll read when I have time, so read it if you have time too!

http://www.ugrad.cs.ubc.ca/~cs320/2010W2/handouts/MasterTheoremProof.pdf

19 Oct 2012 Leave a comment

in illogical review of CSC236(SLOG)

[This is my illogical approach to prove the chain-like relationship between Well ordering principle, Mathematical Induction, and Complete Induction]

Let’s break down the bad soap opera thing.

WO => MI => CI => WO

I know, It’s boring.

So let’s call them something different.

What about “Wolverine” for WO, “Mike” for MI and “Cinderella” for CI?

Okay, then Wolverine likes Mike, Mike likes Cinderella, and Cinderella likes Wolverine. Oh! Well, the names came from my subconscious, so I’ve found out Wolverine’s sexuality now. Well, I’m a gay-friendly person. (:

I read about how Wolverine came to love Mike (get our secret code!) and found out it’s a bit tricky to explain. (Well, all love is complicated.) But then, let’s ask Wolverine how he grew his love for Mike.

Me: Hi, Wolverine

Wolverine: Hi, grr.

Me: I have a personal question to you, if you don’t mind.

Wolverine: What, grr?

Me: How did you come to grow your love for Mike?

Wolverine: Hmm, well, because I found out that my life philosophy supports Mike’s nicely. As you know, my philosophy is that there is always ‘a smallest thing’ (in any non-empty subset of the positive integers).

Me: (Is that the only reason you like someone? Weird.) I know.

Wolverine: So, you know that Mike has a philosophy of dominoes. It says that as long as all the dominoes stand in a nicely consecutive order, one after another, with very nice spaces between them, and you have checked that any two dominoes with that space between them fall down together, then after you push the very first domino, BOOM! “ALL” the dominoes will fall down.

Me: I know it, but I don’t believe it. Sometimes my dominoes stop in the middle.

Wolverine: I said as long as you have “a very nice space” between them! You always put your dominoes down in a very unorganized way! His idea might be a little complicated for you, but the point is that eventually ALL the dominoes will fall, so once you check that property you don’t need to wait until every domino has fallen.

Me: No way! That’s the most fun part! Anyway, so why the hell does your philosophy support Mike’s?

Wolverine: Listen. Like you said, let’s assume that Mike was wrong, so even though we put the dominoes down in a really nice way, somehow we can’t make all of them fall. And also assume **that we wrote numbers on all the dominoes while counting them in order (i.e., 1 is written on the very first domino, 2 is written on the domino right behind it, and so on).**

Me:(Yes, there we go)

Wolverine: Just to make sure: it’s an assumption from which we will derive a contradiction, to show that Mike is actually right. How can Mike be wrong?

Me: (Pfft, I hate people who have fallen in love! They’re all blind…)

Wolverine: Then there are some dominoes that have not fallen yet. By my philosophy, there should be a domino with the smallest number written on it. Let’s call it “P”. Since we pushed the first one down when we started, the domino with 1 written on it has already fallen. Hence, P is bigger than one. Then we can find the natural number that is exactly 1 smaller than P. Let’s call it P’. Then P = P’ + 1. Are you following me?

Me: ZZZ..Uh what? Keep going! My reader will understand you… Hopefully.

Wolverine: Oh well, okay, anyway. Then P’ must be written on a domino that has fallen, because P is the smallest number among those that haven’t fallen (and P = P’ + 1). But we checked that any two dominoes at that nice distance, like P’ and P, always fall down together. So P’ has fallen, which means P had BETTER have fallen too. So there’s a contradiction! That means P actually has fallen. By my philosophy, any non-empty set should have a minimum, but the set of dominoes that haven’t fallen can’t have one, because the domino with the minimum number P has actually fallen!! So what does that mean?

Me: Curses, I don’t know. Is it because of some witchcraft thing, or a ghost?

Wolverine: Hey silly, it means there is **NO DOMINO** that hasn’t fallen! So the set of dominoes that haven’t fallen is actually **EMPTY!!** Because it can’t have a **MINIMUM!**

Me: So what does that mean?

Wolverine: That means that as long as my idea holds and Mike’s assumption (any two dominoes with the nice space between them actually fall down together) is true, if you push the first domino down then **BOOM!** Eventually, all the dominoes end up fallen on the floor!! In other words, MY PHILOSOPHY absolutely supports MIKE’S.

Me: Yeahy… Well, that’s a very interesting fact… You guys are **the conquerors of the domino world! IMPRESSIVE.**

Wolverine: Don’t be sarcastic, silly.

Me: Well, anyway, thanks. That was the longest talk about dominoes that I’ve ever heard. I’ve got to go see Mike!

Bye.

…To be continued.

**NOTICE: All pictures that I post with my writing are not mine. They belong to someone else whose sources I feel too lazy to type.**

19 Oct 2012 Leave a comment

in illogical review of CSC236(SLOG)

Yesterday, to be honest, I skipped the class because I was in a hurry to finish my 263 assignment.

So today, I looked at the slides to catch up on yesterday’s class, and there was one line that just made me go “Huh?”.

It is actually a well-known fact…

**“WO => MI => CI => WO”**

It means that the Well Ordering principle implies Mathematical Induction, Mathematical Induction implies Complete Induction, and that in turn implies Well Ordering again. It’s like “the WORST soap opera”: A likes B, B likes C, and C likes A.

But it also reminds me of Kekulé’s dream of benzene; well, that’s a side story.

Just a side story: Kekulé, a chemist in the old days, dreamed of a snake like the one above, and from it he got the idea that led him to the ring shape of the benzene molecule.

I’ve heard of this fact several times, but I still don’t get it. The best way to get it would be to prove it.

However, I hate proofs… a proof is, for me, just the worst way of convincing somebody. They’re aggressive, assertive, and boring! (To me.) So why don’t we find some other way to understand it or convince ourselves?

Before we get to the implication part, let’s talk about what “Well Ordering” is. Well….. ordering?

Well… to say something is ordered, for me, it’s okay to call it well ordered if I can line up all the elements nicely.

However, in the mathematical sense (which I don’t like), they say we MUST be able to find “one least element”.

So this is what the well-ordering principle means in mathematics, and it’s from, of course, Wiki:

In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a smallest element.

Blah, blah, blah. That’s what it looks like.

So it is saying that we’ve got a smallest one in every non-empty set of positive integers. Let’s think about our heights! (Don’t be embarrassed! I’m like hobbit-sized.)

Height is absolutely a positive integer… (Is there anyone whose height is -1990 cm or 0 cm?)

and let’s think about your FAMILY!

WHY? Well, your family is a subset of all the people in the world. (YES, you guys are a subset in mathematics and a sample in statistics. That’s why I hate these things. They’re always trying to measure everything. They are measure crazy.)

So hey, you (the element of the subset), can you name the smallest one in your family?

Of course you can!! Yup, that’s it.

That is Well ordering principle.

So there are 3 chains now that we need to prove:

- Why WO => MI
- Why MI => CI
- Why CI => WO

And why the hell do we need to bother with these things?…

Well, before jumping into that, let’s take a break, and in the next post let’s break them down one by one.

Because I feel too lazy. lol