Wednesday 3 December 2014

Limits of Computing

It began in the 1930s, when a group of talented mathematicians came together to lay the foundations of theoretical computer science.  Their work paved the way for the computer revolution, which in turn led to significant breakthroughs in scientific investigation and many other areas of discourse.  Computers provided us with the capability of solving problems requiring serious number crunching and extensive resources.   However, as powerful as computers are, they do have inherent limitations.   In fact, it has been proved that for any sufficiently powerful machine, there are well-defined problems that cannot be computed in that framework.   One such problem is the Halting Problem.  First proved by Alan Turing in 1936, the Halting Problem basically asks whether it is possible to write a program Halt(function, input), which returns True if function(input) eventually halts and returns False otherwise.  This is impossible to do.  The proof was discussed in lecture, and an alternative version can be found at http://wally.cs.iupui.edu/csci230/files/t16Haltingproblem.htm.
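The flavour of the argument can be sketched in Python. Here `halt` stands for the hypothetical decider the theorem rules out, and `contrarian` is a name I've chosen for the self-referential program in the diagonal argument; this is just one way to phrase the contradiction, not the exact proof from lecture.

```python
def halt(function, argument):
    """Hypothetical decider: would return True iff function(argument) halts.
    The Halting Problem proof shows no correct implementation can exist."""
    raise NotImplementedError("provably impossible to implement")


def contrarian(function):
    """Do the opposite of whatever halt predicts about function(function)."""
    if halt(function, function):
        while True:       # halt said "halts", so loop forever
            pass
    return "halted"       # halt said "loops forever", so halt immediately

# The contradiction: contrarian(contrarian) halts exactly when
# halt(contrarian, contrarian) returns False, i.e. exactly when
# contrarian(contrarian) does NOT halt.  So halt cannot exist.
```

Since any candidate `halt` can be fed its own `contrarian`, the impossibility is not a limitation of a particular language or machine.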

Tuesday 2 December 2014

Problem Solving Episode: Penny Piles


What is the Problem?: Suppose there are two drawers: the left one contains 64 pennies and the right one contains none. We are asked to determine whether it is possible, using only the two operations given below, to rearrange things so that one drawer contains exactly 48 pennies.

L:  If the left drawer has an even number of pennies, transfer half of them to the right drawer (operation is disallowed if left drawer has odd number of pennies).
R: If the right drawer has an even number of pennies, transfer half of them to the left drawer (operation is disallowed if right drawer has odd number of pennies).

Understanding the Problem: In the problem, we are given: 1. The left drawer contains 64 pennies.  2. The right drawer contains no pennies.  3. We are allowed to perform only the two basic operations L and R.  We are required to determine whether, using only L and R, we can arrange things so that one drawer contains exactly 48 pennies.

Devising a Plan: We can create a table containing two columns representing the number of pennies in each drawer respectively.  As a starting point, we will have the left drawer containing 64 pennies and the right 0 pennies (by 1 & 2).

Since the left drawer initially contains an even number of pennies we can continually apply L, until the right drawer contains 48 pennies (by 3).

Carrying Out the Plan: 

Left Drawer                       Right Drawer
64                                        0
32                                        32     #applying L
16                                        48     #applying L

Review: From above, it is clear that the implemented plan gave us the correct solution.  After applying the L operation twice, the right drawer ended up with 48 pennies, as we desired.  In addition, there doesn't appear to be a simpler way to solve this problem.
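The plan can also be checked mechanically. Below is a small sketch where the helper functions `L` and `R` (names mirroring the two operations) enforce the even-number restriction with assertions:

```python
def L(left, right):
    """Transfer half of the left drawer's pennies to the right drawer."""
    assert left % 2 == 0, "L is disallowed when the left drawer is odd"
    return left // 2, right + left // 2

def R(left, right):
    """Transfer half of the right drawer's pennies to the left drawer."""
    assert right % 2 == 0, "R is disallowed when the right drawer is odd"
    return left + right // 2, right // 2

state = (64, 0)
state = L(*state)   # (32, 32)
state = L(*state)   # (16, 48)
print(state)        # (16, 48) -- the right drawer holds 48 pennies
```

Running the two L operations reproduces the table above exactly.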

Thursday 20 November 2014

More Big Oh

It's time to get a bit more complicated with Big-Oh.  In other words, let's actually try to apply the working definition of Big-Oh to proofs.  As a reminder, here is the formal definition of Big-Oh as introduced in class.  That is,
  1. A function f(n) is in O(g(n)) if there exist A ∈ ℕ and B ∈ ℝ⁺ such that whenever n > B, f(n) ≤ A·g(n). This is what it means for a function f(n) to be in O(g(n)).
Now, I'll be the first to admit that the course has gotten considerably more challenging since the start.  Let me elaborate on this point.  Big-Oh proofs, as discussed in lecture, are rather simple.  The approach is systematic for most problems of this nature.  For instance, trying to show that 3n^2 + 2n + 5 ∈ O(n^2) is rather simple, as is the problem 7n^6 - 5n^4 + 2n^3 ∈ O(6n^8 - 4n^5 + 2n).  The disproving aspect, however, can be more challenging depending on the mathematical expressions involved.  I personally found difficulty in attempting to solve the Week 10 Tutorial problem 2.  The problem asked us to prove 6n^5 - 4n^3 + 2n^2 ∉ O(5n^4 - 3n^2 + 1).  I was stuck on this problem for some time, as I could not find a suitable value of n to choose.  However, the Big-Oh proofs involving non-polynomials get even worse.  I understand L'Hôpital's Rule, and the process by which we can determine which function grows faster.  The part that confuses me is the definition of the limit involving infinities.  These are areas I need to improve in quickly.  Here are my next steps:
  1. Review lecture notes covering Big-Oh of polynomials, non-polynomials, limits
  2. Work through the tutorial problems for week 10 again.
  3. Work through some of the extra practice problems linked on the course homepage.
  4. Post questions that further confuse me on Piazza. 
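As a worked example of the systematic approach, here is how the first claim above can be proved directly from the definition; the choices A = 10 and B = 1 are just one pair that works, not the only one:

```latex
\text{Claim: } 3n^2 + 2n + 5 \in O(n^2). \\
\text{Pick } A = 10,\ B = 1. \text{ Then for all } n > B: \\
3n^2 + 2n + 5 \;\le\; 3n^2 + 2n^2 + 5n^2 \;=\; 10n^2 \;=\; A \cdot n^2,
\quad \text{since } n \le n^2 \text{ and } 1 \le n^2 \text{ for } n \ge 1. \\
\text{So } f(n) \le A \cdot g(n) \text{ whenever } n > B, \text{ as required.} \qquad \blacksquare
```

The trick is always the same: bound every lower-order term by the dominant term, then add up the coefficients to get A.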



Tuesday 11 November 2014

Complexity and Algorithm Analysis

Over the past few weeks we dove into algorithm complexity, introducing an array of new concepts including Big-Oh, counting steps, and worst-case analysis.  I will begin our discussion with Big-Oh.  Typically, Big-Oh is used to compare how two algorithms scale as the size of the input increases.  For example, consider the following cases:
  • O(n)  --->  If the size of the input n increases, the runtime will increase linearly. 
  • O(log n) ---> If the size of the input n increases, the runtime will increase logarithmically. 
  •  O(1) ---> If the size of the input n increases, the runtime will remain constant (no change).
There are many other cases to consider, which we won't go into here.  If interested, please check out the following link: http://bigocheatsheet.com.
We also extended Big-Oh to determine the complexity of the worst-case runtime of the sorting algorithms we considered the previous week.  For instance, in the worst case bubble sort has a complexity of O(n^2), merge sort a complexity of O(n log n), and insertion sort a complexity of O(n^2).  Fortunately for me, I found this content to be a review of the material presented in CSC108.
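One way to see the O(n^2) claim for bubble sort concretely is to count comparisons. The function name `bubble_sort_count` is my own invention for this sketch; it instruments the standard algorithm:

```python
def bubble_sort_count(l):
    """Bubble-sort l in place and return the number of comparisons made."""
    comparisons = 0
    for i in range(len(l)):
        for j in range(len(l) - 1 - i):
            comparisons += 1
            if l[j] > l[j + 1]:
                l[j], l[j + 1] = l[j + 1], l[j]
    return comparisons

# Bubble sort (this version) always makes n(n-1)/2 comparisons, so for
# n = 100 we get 100 * 99 / 2 = 4950, matching the quadratic growth rate.
print(bubble_sort_count(list(range(100, 0, -1))))  # 4950
```

Doubling n roughly quadruples the count, which is exactly what O(n^2) predicts.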



Thursday 30 October 2014

Sorting Algorithms

Sadly, I was unable to post last week due to an incredible amount of studying I had to attend to.  Nonetheless, here's a quick refresher of the material we covered last week.  We wrapped up our study of proofs and the basic mathematical structure of proof arguments.  We went over proof problems involving added complexity, such as epsilon-delta proofs.  In addition, we covered ways not to do proofs, such as proof by intimidation or proof by puns.

Just as a side note: in the lecture slides it was shown that the statement "trivial" is an example of proof by intimidation, and should generally be avoided.  However, it seems authors of physics and mathematics textbooks use the term "trivial" on a regular basis.


The next topic of discussion, and most of the focus of the past week, was sorting algorithms. We began by discussing basic sorting algorithms, including merge sort and bubble sort.  Here's a quick overview of the two sorting techniques.

Bubble sort: It's a sorting algorithm that compares adjacent items in a list and interchanges them if they are in the wrong order. A Python implementation of bubble sort is shown below.

def bubble_sort(l):
    """Sort list l in place using bubble sort."""
    for i in range(len(l)):
        for j in range(len(l) - 1 - i):
            if l[j] > l[j + 1]:
                l[j], l[j + 1] = l[j + 1], l[j]

Merge sort: It's a recursive sorting algorithm that uses a "divide and conquer" approach.  Namely, it divides the unsorted list into smaller sublists, sorts each of the smaller sublists, and combines everything together to obtain the sorted list.  The implementation is not included here but can be found at http://www.geekviewpoint.com/python/sorting/mergesort.
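The linked implementation differs in its details; here is a minimal sketch of my own of the divide-and-conquer idea, returning a new sorted list rather than sorting in place:

```python
def merge_sort(l):
    """Return a sorted copy of l using merge sort."""
    if len(l) <= 1:
        return l                     # base case: already sorted
    mid = len(l) // 2
    left = merge_sort(l[:mid])       # sort each half recursively
    right = merge_sort(l[mid:])
    # merge: repeatedly take the smaller front element of the two halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # one of these is empty; the other
    merged.extend(right[j:])         # holds the remaining sorted tail
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Each level of recursion does O(n) merging work over O(log n) levels, which is where the O(n log n) worst case comes from.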

Tuesday 14 October 2014

Proofs!

With our first midterm in the books, I can finally commence my study of proofs.  Immediately following the midterm test, we began our study of proof techniques.  These techniques include direct proof, proof by contradiction, and proof by contrapositive.
The structure of these proof methods is as follows:

Consider some arbitrary statement P -> Q.  To prove this, we can do the following:

1. Proof by Contradiction:

Proof: Assume P ∧ ∼ Q.  Derive a contradiction; since P ∧ ∼ Q is impossible, P -> Q must hold.

2. Proof by Contrapositive:

Proof: Assume ∼ Q.  Show ∼ P.  This establishes ∼ Q -> ∼ P, which is equivalent to P -> Q.

3. Direct Proof:

Proof: Assume P.  Show Q.

I am confident that I understood the majority of content covered in lecture.  However, as with most academic topics, there are aspects that are incredibly easy, and others that are laborious and demanding.  The same applies to proofs.  While there may be ridiculously simple proofs, such as showing that for all n ∈ ℕ, n is odd → n^2 is odd, there are also ridiculously difficult proofs, such as Euclid's Theorem, which states there are infinitely many primes, or one that proved nearly impossible (even for the mathematicians), Fermat's Last Theorem.  For those interested, here's a nice and simple explanation of Fermat's Last Theorem: http://mathforum.org/library/drmath/view/52464.html

It should be mentioned that the proof itself is not included, as it's far too advanced for non-mathematicians.
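For completeness, here is the "ridiculously simple" example above written out as a direct proof, following the Assume P, Show Q structure:

```latex
\text{Claim: for all } n \in \mathbb{N},\ n \text{ odd} \Rightarrow n^2 \text{ odd}. \\
\text{Proof: Assume } n \text{ is odd. Then } n = 2k + 1 \text{ for some } k \in \mathbb{N}. \\
n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, \\
\text{which is odd, since } 2k^2 + 2k \in \mathbb{N}. \qquad \blacksquare
```

The whole proof is just the definition of "odd" applied twice, which is what makes it the easy end of the spectrum.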



Monday 6 October 2014

Already Midterm Season

With only a few more days until our CSC165 midterm, I've been steadily studying away.  I thoroughly reviewed all the tutorial problems and lecture notes, yet I am still in search of more practice questions.  Thus, I have turned to my trusty sidekick, the Google search engine.

In lecture this week, we finished our study of basic equivalence rules and began the joyful art of proof writing.  Over the duration of my lifetime, I've seen and written many proofs, and never have there been formal guidelines on indentation.  The structure for these proofs seems incredibly pointless.  Apparently, in CSC236, this particular format for writing proofs gets abandoned, although I can't verify this claim, as it's only hearsay.  Nonetheless, even if it's only temporary, this format for writing proofs is definitely something I'll have to gain familiarity with.  Otherwise, I look forward to diving deeper into proofs.

On a personal note, discrete mathematics, or more precisely logic, has always been a field I've lacked substantial confidence in.  In the past, I've studied elements of logic here and there, but never made a complete study of the subject.  That is, until now.  In only a matter of three weeks, I have acquired a solid foundation in fundamental mathematical logic.  I look forward to applying this knowledge in the realm of digital design, namely FPGAs, in CSC258.