> In almost every computation, a variety of arrangements for the processes is possible. It is essential to choose that arrangement which shall tend to minimize the time necessary for the calculation.
>
> —Ada Lovelace
How much time does it take to sort 26 shuffled cards? If instead you had 52 cards, would it take twice as long? How much longer would it take for a thousand decks of cards? The answer is intrinsic to the **method** used to sort the cards.

A method is a list of unambiguous instructions for achieving a goal. A method that always requires a finite series of operations is called an **algorithm**. For instance, a card-sorting algorithm is a method that will always specify some operations to sort a deck of 26 cards by suit and by rank.

Fewer operations need less computing power. We like fast solutions, so we monitor the number of operations in our algorithms. Many algorithms require a fast-growing number of operations as the input grows in size. For example, our card-sorting algorithm could take a few operations to sort 26 cards, but four times more operations to sort 52 cards!
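To make that concrete, here is a minimal Python sketch. The chapter doesn't name a specific sorting method, so selection sort is assumed here; it counts comparisons as the "operations" and shows that doubling the deck from 26 to 52 cards roughly quadruples the count:

```python
import random

def selection_sort(cards):
    """Sort the list in place, counting comparisons as the 'operations'."""
    comparisons = 0
    for i in range(len(cards)):
        smallest = i
        for j in range(i + 1, len(cards)):
            comparisons += 1  # one operation: compare two cards
            if cards[j] < cards[smallest]:
                smallest = j
        cards[i], cards[smallest] = cards[smallest], cards[i]
    return comparisons

for size in (26, 52):
    deck = random.sample(range(size), size)  # a shuffled "deck" of that size
    print(f"{size} cards: {selection_sort(deck)} comparisons")
```

Running it prints 325 comparisons for 26 cards and 1,326 for 52: about four times more work for twice the input.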
To avoid bad surprises when our problem size grows, we find the algorithm's **time complexity**. In this chapter, you'll learn to:

- Count and interpret time complexities
- Express their growth with fancy **Big-O**'s
- Run away from **exponential** algorithms
- Make sure you have enough computer **memory**.

But first, how do we define time complexity?
Time complexity is written T(n): the number of operations an algorithm performs for an input of size n.
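For the selection-sort sketch above (still an assumption, not the chapter's own method), the comparison count has an exact closed form, so its T(n) can be written down and checked:

```python
def T(n):
    # Comparisons made by the selection sort sketched earlier:
    # (n - 1) + (n - 2) + ... + 1 = n * (n - 1) / 2
    return n * (n - 1) // 2

assert T(26) == 325 and T(52) == 1326
print(T(52) / T(26))  # ~4.08: doubling the input roughly quadruples the work
```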