Wednesday 2 April 2014

Sorting and Efficiency

 The last couple of weeks we have been working on classifying sorting algorithms in terms of their time complexity. When we calculate the time complexity of an algorithm, we are looking at how its run time grows as a function of the length of the input. An important question to ask when evaluating an algorithm is: what is the worst case scenario that can occur when running it? This worst case acts as an upper bound for the running time of the algorithm in general.

 Of course, knowing the best case scenario of an algorithm can also be very helpful. For example, suppose we are sorting a list of objects and we have some information ahead of time about how the lists are structured. It may turn out that, out of all the algorithms we are considering, the one with the best worst-case time complexity actually performs poorly on lists structured the way ours are. Conversely, an algorithm with a worse worst-case complexity could outperform all the others on exactly those lists. For this reason, it is important to know when your algorithm will perform well and when it is less optimal than other options.

 Let's look at one of the sorting algorithms we've seen in class and go through how we determine its worst case scenario: insertion sort. On each pass, insertion sort compares the current element with the elements that precede it in the list until it reaches an element that is smaller, and the current element is then placed just after that smaller element. To find the worst case we must ask: what is the greatest number of comparisons the algorithm could make on a single pass, and how many elements will we have to shift so that the current element ends up in its appropriate place? After a moment's thought, it is clear that if we are working with the ith element of the list, the worst case is that we have to compare it to all i-1 elements before it, which happens when the ith element is smaller than every element that precedes it. In that case we also have to move all i-1 of those elements ahead one position to make room for the ith element at the front of the list.

 If this happens on every pass, then sorting a list of length n takes 1 + 2 + ... + (n-1) = n(n-1)/2 comparisons and shifts, which corresponds to a worst case scenario of O(n^2), since n(n-1)/2 is quadratic in n. However, if the list is already sorted we only need n-1 comparisons, which corresponds to a best case scenario of O(n), meaning the run time of the program grows linearly with the size of the input.
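 To make the comparison-and-shift pattern concrete, here is a minimal sketch of insertion sort in Python. This is my own illustration rather than the exact code from lecture, but it shows where the worst-case and best-case behaviour come from:

    def insertion_sort(lst):
        """Sort lst in place using insertion sort."""
        for i in range(1, len(lst)):
            current = lst[i]
            j = i - 1
            # Worst case: current is smaller than everything before it,
            # so this loop makes i comparisons and shifts i elements.
            # Best case (list already sorted): the condition fails right away,
            # so only one comparison is made per element.
            while j >= 0 and lst[j] > current:
                lst[j + 1] = lst[j]
                j -= 1
            lst[j + 1] = current

    example = [5, 4, 3, 2, 1]   # a reversed list triggers the worst case on every pass
    insertion_sort(example)
    print(example)              # [1, 2, 3, 4, 5]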

Sunday 2 March 2014

Recursion

 Recursion has been a pretty important part of the course so far, with many different examples being given in class. I had seen recursion before entering this class, specifically while studying things like the Fibonacci sequence and fractals. I think this exposure made it a little easier for me to feel comfortable taking the leap of faith sometimes required when implementing a recursive solution. I didn't find the idea of recursion too difficult to grasp; however, it did take some time for me to get familiar with the way recursive solutions are implemented. I think the most challenging part of implementing recursive solutions when first starting out is keeping track of what your function is returning as it goes deeper into the recursion, and making sure that you are accumulating this information appropriately. One thing that definitely helped me understand this better was simply tracing very small test cases and writing by hand what happens at each step (a small example of this follows below). Once you understand how your recursive call works for very small cases you begin to notice a pattern, and from there it isn't too hard to extrapolate to larger test cases. Practicing in this manner, or using the Python visualizer in a similar way, was a crucial part of developing my skills in actually applying recursion effectively, as opposed to just understanding it theoretically. I find the more familiar I become with recursion, the more I notice its potential.
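 For instance, tracing something as small as a recursive Fibonacci function by hand shows how results bubble back up through the calls. This is just my own small example, not one taken from lecture:

    def fib(n):
        """Return the nth Fibonacci number (fib(0) == 0, fib(1) == 1)."""
        if n < 2:            # base case: nothing left to break down
            return n
        # each call trusts the smaller calls to return the right answer
        return fib(n - 1) + fib(n - 2)

    # Tracing fib(3) by hand:
    # fib(3) -> fib(2) + fib(1)
    #        -> (fib(1) + fib(0)) + 1
    #        -> (1 + 0) + 1
    #        -> 2
    print(fib(3))   # 2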

Thursday 23 January 2014

Object-Oriented Programming

 These first few weeks in CSC148, a computer science class at the University of Toronto, we have started to talk about some of the basics of object-oriented programming. When I think of object-oriented programming, I think of it as a way of structuring a program so that it is easy for other people to understand, as well as easy for them to use without knowing exactly how each method accomplishes its task. When I am programming in an object-oriented manner, I try to think of the problem as if I were actually carrying out each step in real life. What would I need to do to get started? What actions must I carry out in order to achieve this task? These questions usually make it quite clear what the important 'objects' in the problem really are, and what attributes make them important parts of the problem. These 'objects' are typically represented in the program by instances of a class, and their attributes become the instance variables and methods of that class.

 For example, say I wanted to make a program similar to the board game Risk. I might ask myself: what would a typical turn look like? What actions or decisions would each player have to make during a turn? How would I need to set up the board to get started? In thinking about these questions, I might decide to make a class to represent a player. This class would have methods that allow the player to attack and to reinforce occupied territories, as well as variables that represent the territories the player currently holds and how many cards they have. As long as someone knew how to play Risk, the methods in the player class would be relatively straightforward, and if they wished to improve the game in any way they could use those methods without having to know exactly how each one is executed.

 To me, this is what seems exciting about object-oriented programming. It is about taking the real-world aspects of the programming problem, representing them through abstract data types, and using them in an intuitive and organized way to solve the problem much like you would if you were not using computers. It definitely has a very natural feel to it.
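 As a rough sketch, a player class for a Risk-like game might look something like the following. The attribute and method names here are just my own guesses at a reasonable design, not code from the course:

    class Player:
        """A player in a Risk-like board game."""

        def __init__(self, name):
            self.name = name
            self.territories = []   # territories this player currently holds
            self.cards = []         # cards this player currently holds

        def reinforce(self, territory, num_armies):
            """Add num_armies to one of this player's territories."""
            # how armies are tracked per territory is left out of this sketch
            pass

        def attack(self, from_territory, to_territory):
            """Attack a neighbouring territory from one of this player's own."""
            # the dice-rolling and battle logic would live here
            pass

 Someone who knows the rules of Risk could call attack and reinforce without ever reading their bodies, which is exactly the kind of abstraction I was describing above.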