Tuesday, April 1, 2014

Week 11 - Sorting and Efficiency

A sorting algorithm places the elements of a list into their correct positions; the order can be either ascending or descending. In computer science there are many sorting algorithms that do this job, but with different runtimes and therefore different efficiencies. There is a famous question about sorting: what is the difference between the sorting algorithms, for example between insertion sort and selection sort? Indeed, there is a huge difference between these two. In the best case (the list is already sorted), insertion sort takes about n iterations for an input of size n, whereas selection sort still takes about n^2 iterations. Consider a list with 100,000 items in it: in the best case, insertion sort iterates roughly 100,000 times, whereas selection sort iterates roughly 10,000,000,000 times. The aim is to get the job done in the most efficient way, so in this case a computer scientist would prefer insertion sort.
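
To make the comparison concrete, here is a minimal sketch of insertion sort (my own version, not necessarily the CSC108 code): when the list is already sorted, the inner while loop never runs, so the function does only about n steps of work.

    def insertion_sort(lst):
        """Sort lst in place in ascending order."""
        for i in range(1, len(lst)):
            current = lst[i]
            j = i
            # Shift larger elements one slot to the right.
            while j > 0 and lst[j - 1] > current:
                lst[j] = lst[j - 1]
                j -= 1
            lst[j] = current  # Drop current into its correct position.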

We learned some sorting algorithms last semester in CSC108, but we didn't talk about bounds on their running times. In CSC165 we are still working on Big-O (upper bound), Big-Omega (lower bound) and Big-Theta (tight bound), which are directly related to these sorting algorithms. I am going to focus on Big-O notation. Consider selection sort: it has two nested loops, and each loop iterates about n times, so for an input of size n both its worst-case and best-case performance is O(n^2) (since n * n = n^2). As Big-O notation states, the number of steps is bounded above by c * n^2, where c is some constant. There is a table that shows how the common runtimes compare. Sorting table: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n) ...
As we move from left to right in this ordering, the runtime grows faster and faster, which means the algorithm gets less and less efficient.
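
Here is a sketch of selection sort as well (again my own version, not the exact course code): the two nested loops run about n * n times no matter how the input is arranged, which is where the O(n^2) bound comes from even in the best case.

    def selection_sort(lst):
        """Sort lst in place in ascending order."""
        for i in range(len(lst)):
            smallest = i
            # Scan the unsorted part for the smallest remaining element.
            for j in range(i + 1, len(lst)):
                if lst[j] < lst[smallest]:
                    smallest = j
            # Swap it into position i.
            lst[i], lst[smallest] = lst[smallest], lst[i]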

[Graph comparing the growth rates of these runtime functions]
As we can see from the graph, O(1) is the most efficient: regardless of the input size, the program takes a constant amount of time to run. In this lecture we learned merge sort, another sorting algorithm, whose best-case, average-case and worst-case performance is all O(n log n). What it basically does is divide the list in half repeatedly until it reaches the base level, where every piece has a single element; this takes about log n steps (base 2), since the list is halved at each step. Merging the sorted pieces back together at each of those levels touches all n items, so merge sort takes O(n log n) time overall.
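
Here is a hedged sketch of merge sort (not necessarily the version from lecture): the list is split in half until single elements remain, and then the sorted halves are merged back together level by level.

    def merge_sort(lst):
        """Return a new list with the items of lst in ascending order."""
        if len(lst) <= 1:
            return lst[:]              # Base case: one element is already sorted.
        mid = len(lst) // 2
        left = merge_sort(lst[:mid])   # Sort each half recursively.
        right = merge_sort(lst[mid:])
        return merge(left, right)      # Combine the two sorted halves.

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        # One of the halves may still have leftover items.
        result.extend(left[i:])
        result.extend(right[j:])
        return result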

In conclusion, a computer scientist cares about efficiency when the input size is very large, because the differences on small inputs are usually negligible. If we have millions of inputs to handle, we should choose the most efficient sorting algorithm available, and to do that we should look toward the left side of our sorting table. A computer scientist can save a great deal of time by choosing the right algorithm. Sorting and efficiency are therefore important concepts that we cannot and should not ignore.

Sunday, March 2, 2014

Week 7 - Recursion

Basically, recursion solves large problems by breaking them into smaller subproblems until they become small enough to handle directly. Recursive functions call themselves.

There are 3 main rules for recursion:

  1. A recursive algorithm must have a base case.
  2. A recursive algorithm must change its state and move toward the base case.
  3. A recursive algorithm must call itself, recursively.
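
The example we traced was a factorial function; the code below is my reconstruction of it (the exact code shown in lecture may have differed slightly).

    def factorial(n):
        """Return n! for a non-negative integer n."""
        if n == 0:
            return 1                      # Base case (rule #1): no recursive call needed.
        else:
            return n * factorial(n - 1)   # Change state and move toward the base case (rules #2 and #3).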

Recursion is used in this example, and the base case is the first if statement, where n is equal to 0, the most 'basic' case (rule #1). If we type factorial(0) in the Python shell it will just return 1 without doing any recursion.

For the second and third rules to apply, there must be a change of state that moves us toward the base case (the else part). The else branch returns n * factorial(n - 1), so it calls factorial again on a smaller problem. If we take n = 3 and type factorial(3) in the Python shell, the else branch executes since n is not 0, so factorial(3) = 3 * factorial(2), factorial(2) = 2 * factorial(1), factorial(1) = 1 * factorial(0), and factorial(0) = 1 (moving from the complex problem to smaller subproblems, toward the base case). Then factorial(1) = 1 and factorial(2) = 2, therefore factorial(3) = 3 * 2 = 6. Here the function calls itself recursively until it reaches the base case.

In conclusion, recursion makes a programmer's life simpler. Although it can be hard to understand at first, a programmer should learn how to use it to save time.

Thursday, February 13, 2014

Getting Into Recursion

For the last few weeks we've seen bits of code related to recursion, but this week we really started getting into it. As a very quick explanation, recursion is a useful tool in programming languages for solving a problem by having a function call itself on smaller subproblems. We use recursion in our first assignment. In this week's lectures our lecturer showed us the function move_cheeses, and we traced it; the call move_cheeses(n - 1, source, destination, intermediate) is an example of a function calling itself recursively.
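
Below is a hedged sketch of what move_cheeses might look like, consistent with the recursive call we traced; the parameter order (n, source, intermediate, destination) and the print message are my assumptions rather than the exact lecture code.

    def move_cheeses(n, source, intermediate, destination):
        """Print the moves needed to shift n cheeses from source to destination,
        using intermediate as the spare stool."""
        if n > 1:
            move_cheeses(n - 1, source, destination, intermediate)  # clear the top n - 1 cheeses
            move_cheeses(1, source, intermediate, destination)      # move the biggest cheese
            move_cheeses(n - 1, intermediate, source, destination)  # stack the n - 1 back on top
        else:
            print('Move top cheese from', source, 'to', destination)

For example, move_cheeses(3, 'A', 'B', 'C') prints the seven moves needed to shift three cheeses from stool 'A' to stool 'C'.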

Week 5 Lab:

In this week's lab we dealt with recursion, as I mentioned above. First, there were two functions that we had to trace before writing our own recursive functions. We traced csc148_gcd, which computes the GCD (greatest common divisor); its body is csc148_gcd(n2, n1 % n2) if n2 > 0 else n1. Therefore csc148_gcd(15, 35) gives us 5, since 5 is the largest number that divides both 15 and 35. Afterwards we traced a binary-representation function in the same way. Then we wrote our own recursive function for SearchList to determine whether a given integer appears somewhere in the list or not.
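
The quoted one-liner fits into a function like this (a sketch; the lab's exact naming and docstring may differ):

    def csc148_gcd(n1, n2):
        """Return the greatest common divisor of n1 and n2 (Euclid's algorithm)."""
        return csc148_gcd(n2, n1 % n2) if n2 > 0 else n1

Tracing csc148_gcd(15, 35): it calls csc148_gcd(35, 15), then csc148_gcd(15, 5), then csc148_gcd(5, 0), which hits the n2 == 0 case and returns 5.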

Sunday, February 2, 2014

Recursion - Inheritance

In this week's lectures we learned about recursion, which is a very useful tool in almost every programming language. It basically means a function calling itself in order to solve a problem, making the problem easier to solve. We traced a function called nested_depth to understand recursion: nested_depth returns 0 if L is not a list, and otherwise returns the nesting depth of the list L. The code uses the built-in isinstance to check whether L is a list object. isinstance also comes up when working with classes, since it respects inheritance, and in this week's lab we worked on classes using inheritance.
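
Here is a sketch of nested_depth along the lines of what we traced (my reconstruction, not the exact lecture code):

    def nested_depth(L):
        """Return 0 if L is not a list, otherwise 1 plus the maximum
        nested depth of L's elements (so a flat list has depth 1)."""
        if not isinstance(L, list):
            return 0
        return 1 + max([nested_depth(x) for x in L] + [0])

For example, nested_depth(5) returns 0, nested_depth([1, 2]) returns 1, and nested_depth([1, [2, [3]]]) returns 3.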

Week 4 Lab:

In the lab we created a class called Motorized to represent vehicles powered by a gas engine. This class has several attributes and methods related to the motorized object, such as its number of wheels and its fuel capacity. Then we tested our Motorized class using unittest. Afterwards, we created the subclasses Car and Bicycle, which inherit from Motorized and so share its attributes. After that we tested our results again and finished the exercise.
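
Here is a rough sketch of that kind of class hierarchy and test; the attribute names and the test case are my own illustration, not the lab's actual starter code.

    import unittest

    class Motorized:
        """A vehicle powered by a gas engine."""

        def __init__(self, wheels, fuel_capacity):
            self.wheels = wheels                # number of wheels
            self.fuel_capacity = fuel_capacity  # fuel capacity in litres

    class Car(Motorized):
        """A car is a Motorized vehicle with four wheels."""

        def __init__(self, fuel_capacity):
            super().__init__(4, fuel_capacity)  # inherit Motorized's attributes

    class TestCar(unittest.TestCase):
        def test_inherited_attributes(self):
            car = Car(50)
            self.assertEqual(car.wheels, 4)
            self.assertEqual(car.fuel_capacity, 50)

    if __name__ == '__main__':
        unittest.main()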

Thursday, January 23, 2014

Object-Oriented Programming

In the first semester, we covered the fundamentals of programming such as for/while loops, function definitions, lists, dictionaries and so on. We finished the first semester by learning about classes, with which we can build almost anything; it's up to the programmer's needs and imagination. Python is an object-oriented programming language, which means it provides features that support object-oriented programming. By definition, 'object-oriented programming was developed as a way to handle the rapidly increasing size and complexity of software systems, and to make it easier to modify these large and complex systems over time.' For example, in an object-oriented language we can create a class called "Restaurant" to describe each restaurant according to categories such as its price, quality, cuisine, location and popularity. The "Restaurant" class then makes all of these choices available through its attributes and methods. If a client wants to eat sushi, he or she should look for a Japanese restaurant under 'cuisine'; if someone wants the most popular restaurant around, he or she should look at 'popularity', and so on. In conclusion, object-oriented programming increases understanding because the gap between the language spoken by programmers and the language spoken by users is minimized.
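
To make this concrete, here is a hypothetical Restaurant class along the lines described above; the attribute names and the sample data are purely illustrative.

    class Restaurant:
        """A restaurant described by its cuisine, price, quality, location and popularity."""

        def __init__(self, name, cuisine, price, quality, location, popularity):
            self.name = name
            self.cuisine = cuisine        # e.g. 'Japanese'
            self.price = price            # e.g. '$$'
            self.quality = quality        # e.g. a rating out of 5
            self.location = location
            self.popularity = popularity  # e.g. number of reviews

    # A client looking for sushi would filter restaurants by cuisine:
    places = [Restaurant('Sushi Go', 'Japanese', '$$', 4.5, 'Downtown', 320),
              Restaurant('Pasta Bar', 'Italian', '$$$', 4.0, 'Midtown', 150)]
    japanese = [r.name for r in places if r.cuisine == 'Japanese']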