Lesson 8 of 8
Algorithms: Lesson 8

Evaluating Algorithms

The final lesson: bringing it all together. Learn to compare algorithms objectively using Big-O notation, apply a structured framework for choosing between search and sort algorithms, and tackle the most challenging exam questions in this series.

55 minutes · Interactive algorithm selector and complexity chart

Knowing how to write a sorting algorithm is useful. Knowing which sorting algorithm to use - and being able to justify your choice with evidence - is what separates good software design from great software design. In real development teams, decisions about algorithm choice are justified with efficiency data, not gut feeling. This lesson gives you the conceptual tools to make and defend those decisions.

Think about it: You have three algorithms that all produce the correct output. Algorithm A takes 1 second on 1,000 items but 1,000 seconds on 1,000,000 items. Algorithm B takes 2 seconds on 1,000 items but 40 seconds on 1,000,000 items. Algorithm C always takes exactly 0.02 seconds regardless of size. Which would you choose for a real application? Does it depend on the likely input size? This is the evaluation problem.
Terms you need to know
Algorithm efficiency
How the time and memory requirements of an algorithm grow as the input size increases.
Big-O notation
A mathematical notation that describes how an algorithm's performance scales with input size n. Examples: O(n), O(n²), O(log n).
O(1)
Constant time. Performance does not change with input size. Accessing an array by index is O(1).
O(log n)
Logarithmic time. Performance grows slowly even as input size increases dramatically. Binary search is O(log n).
O(n)
Linear time. Performance grows proportionally with input size. Linear search is O(n).
O(n²)
Quadratic time. Performance grows with the square of the input size. Bubble and insertion sort are O(n²) in the worst case.
O(n log n)
Log-linear time. More efficient than O(n2) for large inputs. Merge sort is O(n log n) in all cases.
Measuring how algorithms scale

Big-O notation describes how the number of operations grows as the input size n increases. It captures the dominant term and ignores constants - we do not care whether an algorithm takes 3n or 5n operations; both are O(n) because they grow at the same rate.

Big-O tells us the order of growth, not the exact number of steps. It answers the question: "If I double the input size, how much more work does the algorithm do?"
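The growth rates in the comparison table can be reproduced with a short calculation. This sketch (the helper name and rounding choices are our own, not part of any standard) computes approximate operation counts for each complexity class:

```python
import math

def operation_counts(n):
    """Approximate operation counts for each complexity class at input size n.
    Illustrative helper; rounding choices are our own."""
    return {
        "O(1)": 1,
        "O(log n)": math.ceil(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n * n,
    }

for n in (10, 100, 1000):
    print(n, operation_counts(n))
```

Doubling n leaves O(1) unchanged, adds one step to O(log n), doubles O(n), and quadruples O(n²).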

| Complexity | Name | Doubling n does what? | n=10 | n=100 | n=1,000 | Example |
|------------|------|----------------------|------|-------|---------|---------|
| O(1) | Constant | No change | 1 | 1 | 1 | Array access by index |
| O(log n) | Logarithmic | +1 step | 3-4 | 7 | 10 | Binary search |
| O(n) | Linear | 2x work | 10 | 100 | 1,000 | Linear search |
| O(n log n) | Log-linear | ~2x work | 33 | 664 | 9,966 | Merge sort |
| O(n²) | Quadratic | 4x work | 100 | 10,000 | 1,000,000 | Bubble/insertion sort |
[Chart: Complexity Growth Comparison - approximate operations required for each complexity class (n = input size)]
Why Big-O matters in practice

For small inputs (n < 50), all these complexities look similar and the choice barely matters. For large inputs, the differences are enormous. Sorting 1,000,000 items:

O(n log n) = ~20,000,000 operations
O(n²) = 1,000,000,000,000 (one trillion) operations

A modern computer can do about 1 billion operations per second. O(n log n) takes 0.02 seconds; O(n²) takes about 17 minutes. This is why choosing the right algorithm matters enormously for real applications.
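The arithmetic behind those figures can be checked directly (the throughput constant below is the rough 1-billion-operations-per-second assumption from the text):

```python
OPS_PER_SECOND = 1_000_000_000  # rough modern-CPU assumption from the text

n = 1_000_000
n_log_n_ops = n * 20   # log2(1,000,000) is roughly 20
n_squared_ops = n * n  # one trillion operations

print(n_log_n_ops / OPS_PER_SECOND, "seconds")         # 0.02 seconds
print(n_squared_ops / OPS_PER_SECOND / 60, "minutes")  # about 16.7 minutes
```

The same list, the same computer: the only difference is the algorithm's order of growth.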

All algorithms compared
| Algorithm | Type | Best case | Worst case | Sorted input required? | Key advantage |
|-----------|------|-----------|------------|------------------------|---------------|
| Linear search | Search | O(1) | O(n) | No | Works on any list |
| Binary search | Search | O(1) | O(log n) | Yes | Extremely fast for large sorted lists |
| Bubble sort | Sort | O(n)* | O(n²) | No | Simple to understand and implement |
| Insertion sort | Sort | O(n) | O(n²) | No | Efficient for nearly-sorted or small lists |
| Merge sort | Sort | O(n log n) | O(n log n) | No | Consistent performance for large lists |

* Bubble sort best case O(n) only with early termination optimisation enabled.

Exam angle

The most-tested evaluation questions are:
1. "Compare algorithm X and algorithm Y for a given scenario" - always discuss both best and worst case, and any requirements (e.g. sorted list).
2. "Which algorithm would be most suitable and why?" - always give a specific reason tied to the scenario (e.g. "the list is already sorted so binary search is appropriate").
3. "State the maximum number of comparisons for algorithm X on n items" - know: linear search = n, binary search = log₂(n) rounded up, bubble sort pass 1 = n-1.
4. "State one disadvantage of bubble sort compared to merge sort" - O(n²) vs O(n log n) worst case.
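Those maximum-comparison facts can be collected into a small helper (a sketch of our own, not exam-board code); note the rounding up for binary search:

```python
import math

def max_comparisons(algorithm, n):
    """Worst-case comparison counts as quoted in mark schemes."""
    if algorithm == "linear search":
        return n                        # may have to check every item
    if algorithm == "binary search":
        return math.ceil(math.log2(n))  # halve the list each time; round up
    if algorithm == "bubble sort pass 1":
        return n - 1                    # compare adjacent pairs in one pass
    raise ValueError(f"unknown algorithm: {algorithm}")

print(max_comparisons("binary search", 2000))  # 11
print(max_comparisons("linear search", 2000))  # 2000
```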

Which algorithm should I use?

[Interactive algorithm selector: answer a short series of questions, starting with "What is the task?", and the page recommends the most appropriate algorithm.]
Model answers for common evaluation questions
Exam Question 1
A school stores 2,000 student records sorted alphabetically by surname. A receptionist needs to look up whether a specific student is enrolled. Compare binary search and linear search for this use case. State which is more appropriate and justify your answer. [4 marks]
Binary search is more appropriate.

Binary search worst case: log₂(2000) ≈ 11 comparisons (rounded up).
Linear search worst case: 2,000 comparisons.

Since the list is already sorted alphabetically, binary search can be used. It is significantly more efficient, requiring at most 11 comparisons to find any name, compared to up to 2,000 for linear search. For a receptionist performing many lookups throughout the day, this efficiency improvement is valuable.

[Award 1 mark for identifying binary search, 1 mark for stating it requires a sorted list (which is satisfied), 1 mark for the complexity comparison or numbers, 1 mark for a clear justification.]
Exam Question 2
A program needs to sort a list of 500 exam scores. The scores are entered throughout the day, one at a time, and must be kept in order as each new score is added. Which sorting algorithm is most suitable? Justify your answer. [3 marks]
Insertion sort is most suitable.

Insertion sort is an online algorithm - it can sort data as it arrives one element at a time without needing to restart the sort. When a new score is added, it is simply inserted into its correct position in the already-sorted list. The list of 500 items is also small enough that O(n²) worst-case performance is acceptable.

[Award 1 mark for naming insertion sort, 1 mark for explaining it handles one-at-a-time data (online algorithm), 1 mark for noting it is suitable for small lists or nearly-sorted data.]
Exam Question 3
State one advantage of merge sort over bubble sort. State one advantage of bubble sort over merge sort. [2 marks]
Merge sort advantage: Merge sort has a worst-case complexity of O(n log n) compared to bubble sort's O(n²). For large lists, merge sort requires significantly fewer operations, making it much faster. [1 mark]

Bubble sort advantage: Bubble sort is simpler to understand, implement and trace. It is also an in-place algorithm (requires only O(1) extra memory), whereas merge sort requires additional memory (O(n)) to store sub-lists during the merge phase. [1 mark for either point]
Exam Question 4
The worst-case number of comparisons for a binary search on a list of n items is given by log₂(n). Calculate the maximum number of comparisons needed for a binary search on a list of 512 items. Show your working. [2 marks]
Working: log₂(512) = 9, since 2^9 = 512. [1 mark for method]

Answer: Maximum 9 comparisons. [1 mark for correct answer]

Note: If asked about n = 1000, log₂(1000) ≈ 9.97, which rounds up to 10 comparisons maximum. Always round up for maximum comparisons.

Test yourself

1. An algorithm with O(n²) complexity is run on a list of 100 items and takes 1 second. Approximately how long would it take for a list of 200 items?

O(n²) means doubling n quadruples the operations (2² = 4). So 200 items takes approximately 4 times as long as 100 items: about 4 seconds. This shows why O(n²) algorithms become unusable for large inputs.
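You can verify the quadrupling by counting comparisons rather than timing (a sketch using the standard n(n-1)/2 total for a basic bubble sort without early exit):

```python
def bubble_sort_comparisons(n):
    """Comparisons made by basic bubble sort with no early exit:
    (n-1) + (n-2) + ... + 1 = n(n-1)/2, which grows like n^2."""
    return n * (n - 1) // 2

ratio = bubble_sort_comparisons(200) / bubble_sort_comparisons(100)
print(ratio)  # just over 4; approaches exactly 4 as n grows
```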

2. Which complexity grows most slowly as n increases?

O(log n) grows the most slowly. As n doubles, O(log n) only increases by 1 step. For n = 1,000,000, log₂(1,000,000) ≈ 20 steps. This is why binary search is so powerful for large sorted lists.

3. A list is known to be in random (unsorted) order with 1,000,000 items. Which algorithm should be used to find a specific value?

Linear search. The list is unsorted, so binary search cannot be used. Sorting 1,000,000 items first (to enable binary search) would take O(n log n) = ~20 million operations just to sort, which is far more than a single linear search of 1,000,000 steps. Unless the list will be searched many times, sorting first is not justified.
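The trade-off in that answer can be put into a rough cost model (a sketch with constant factors ignored; the function name and the average-case assumption for linear search are ours):

```python
import math

def sort_first_is_cheaper(n, num_searches):
    """Compare: sort once (merge sort) then binary-search each time,
    versus linear-searching the unsorted list every time.
    Constant factors are ignored."""
    sort_then_binary = n * math.log2(n) + num_searches * math.log2(n)
    repeated_linear = num_searches * n / 2  # on average, half the list scanned
    return sort_then_binary < repeated_linear

print(sort_first_is_cheaper(1_000_000, 1))    # False: one lookup, don't sort
print(sort_first_is_cheaper(1_000_000, 100))  # True: many lookups justify sorting
```

The break-even point depends on how often the list will be searched, which is exactly the judgement the question is testing.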

4. State the worst-case complexity of merge sort.

Merge sort is O(n log n) in all cases - best, average and worst. This predictable performance is one of its key advantages over bubble and insertion sort, which can degrade to O(n²) in the worst case.

5. An exam question asks "State one advantage of linear search over binary search." Which answer is correct?

The key advantage of linear search is that it does not require the list to be sorted. Binary search requires a sorted list to work correctly. This is the standard mark-scheme answer for this type of question. Always frame advantages and disadvantages in terms of requirements and complexity.
Series wrap-up challenge

You are designing a system to manage a music streaming library of 50 million songs. Users search by song title (very frequently - thousands of times per second). New songs are added daily. Existing songs are rarely removed. Discuss which search algorithm and which sort algorithm you would use, giving specific reasons based on the properties of each algorithm and the requirements of this system.

Search algorithm: Binary search
With 50 million songs and thousands of searches per second, efficiency is critical. Binary search has worst-case O(log n): log₂(50,000,000) ≈ 26 comparisons per search. Linear search would need up to 50,000,000 comparisons per search. At thousands of searches per second, binary search is the only feasible option. Requirement: the song list must be maintained in sorted order (by title).

Sort algorithm: Merge sort
When new songs are added daily, the list must be re-sorted. Merge sort is O(n log n): approximately 1.3 billion operations for 50 million songs - fast and predictable. Bubble sort or insertion sort would require O(n²): about 2.5 quadrillion operations - completely infeasible. Insertion sort would be efficient if adding only a few songs to an already-sorted list (insert one at a time into the correct position), but merge sort handles bulk additions better.

Alternative approach: Maintain the list in a sorted data structure (like a balanced binary tree) so that insertion keeps it sorted automatically without a separate sort step. This is what real database systems do.
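The "keep it sorted on insertion" idea can be sketched with Python's standard bisect module (the song titles are made up; a real system would use a database index, but the principle is the same):

```python
import bisect

library = ["Aurora", "Breathe", "Cascade", "Daylight"]  # kept sorted by title

def add_song(title):
    """Insert a new title at its correct position: O(log n) to find the
    position, so no separate full re-sort is ever needed."""
    bisect.insort(library, title)

def has_song(title):
    """Binary search via bisect: O(log n) membership check."""
    i = bisect.bisect_left(library, title)
    return i < len(library) and library[i] == title

add_song("Comet")
print(library)            # ['Aurora', 'Breathe', 'Cascade', 'Comet', 'Daylight']
print(has_song("Comet"))  # True
```

Insertion into a Python list still costs O(n) element moves, which is why large systems use tree or index structures instead; the sorted-order invariant is the shared idea.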
Printable Worksheets

Practice what you have learned

Three levelled worksheets. Download, print and complete offline.

Recall
Algorithm Evaluation Basics
Define Big-O terms, complete the complexity comparison table and match complexities to algorithms.
Download
Apply
Algorithm Selection
Given six scenarios, select and justify the most appropriate algorithm for each. Calculate maximum comparisons for given list sizes.
Download
Exam Style
Unit Exam Practice
Extended evaluation questions covering all algorithms in the series. Includes model answers and mark allocation guidance.
Download
Lesson 8 - Algorithms
Evaluating Algorithms
Starter activity
Scenario voting: present three algorithms and ask "which would you choose?" for each of three scenarios (searching a small unsorted list, searching a huge sorted database, sorting a nearly-sorted list of 10 items). Take a show of hands. Reveal the correct answers and ask students to articulate why. This exposes misconceptions before the lesson begins and motivates the need for a systematic evaluation framework.
Lesson objectives
1
Explain what Big-O notation measures and state the complexity of each algorithm studied.
2
Compare any two algorithms for a given scenario, addressing efficiency and requirements.
3
Calculate maximum comparisons for binary search and linear search on a given list size.
4
Select and justify the most appropriate algorithm for a described real-world scenario.
5
Answer structured exam questions on algorithm comparison with appropriate technical detail.
Key vocabulary
Big-O notation
Describes how performance scales with n. Ignore constants; focus on the dominant term.
O(log n)
Logarithmic. Doubling n adds only one step. Binary search.
O(n²)
Quadratic. Doubling n multiplies steps by 4. Bubble sort, insertion sort (worst case).
O(n log n)
Log-linear. Merge sort always. More efficient than O(n²) for large inputs.
Discussion questions
If two algorithms both produce the correct output, does efficiency always matter? When might you choose a less efficient algorithm?
Real sorting libraries use hybrid algorithms - for example, switching from merge sort to insertion sort for small sub-lists. Why might this hybrid approach be better than either alone?
Big-O describes worst-case scaling. Could an O(n²) algorithm ever outperform an O(n log n) algorithm in practice? Under what conditions?
Exit tickets
State the worst-case complexity of: (a) binary search, (b) bubble sort, (c) merge sort. [3 marks]
A list of 64 items is searched using binary search. State the maximum comparisons needed. [1 mark]
Give one reason to use linear search over binary search for a list of 10,000 items. [1 mark]
Homework suggestion
Research one real-world application where algorithm efficiency is critical (e.g. Google Search, genomics, financial trading, GPS navigation). Find out which search or sort algorithms are used (or approximations of them). Explain why the chosen algorithm is appropriate for that specific use case in terms of: input size, whether data is sorted, how frequently searches or sorts occur, and any time constraints.