Algorithms
Revision notes with simplified explanations to understand Algorithms quickly and effectively.
An algorithm is a finite, well-defined sequence of instructions or steps designed to perform a specific task or solve a particular problem.
Characteristics of Algorithms
Finite: An algorithm must have a finite number of steps. It should terminate after a certain number of steps, producing a result or recognising that no solution exists.
Unambiguous: Each step of an algorithm must be precisely defined and unambiguous. The actions to be performed should be clear and well-understood.
Time complexity is a measure of the amount of computational time that an algorithm takes to complete as a function of the size of the input.
Time Complexity Example
Consider a list of n integers. We want to determine the time complexity of performing a linear search to find a specific integer.
The steps for linear search are as follows:
1. Start at the first element of the list.
2. Compare the current element with the target value.
3. If they match, stop: the target has been found.
4. Otherwise, move on to the next element and repeat the comparison.
5. If the end of the list is reached without a match, the target is not in the list.
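As a concrete illustration (not prescribed code, just a minimal sketch of the steps above), a linear search in Python might look like this:
def linear_search(arr, target):
    # Check each element in turn, starting from the first
    for i in range(len(arr)):
        if arr[i] == target:
            return i  # Target found at index i
    return -1  # Reached the end of the list without finding the target
In the worst case the target is the last element, or not present at all, so every one of the n elements is compared once.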
Time complexity is always measured in terms of the worst case, which gives an upper bound on the resources required by the algorithm.
Let's take some instances of the linear search algorithm for different sizes of n.
n | Comparisons (worst case)
---|---
1 | 1
10 | 10
20 | 20
30 | 30
There is a linear relationship between the size of the input n and the maximum number of comparisons needed.
If we were to continue this table and plot the values on a graph, where the x-axis is the input size n and the y-axis is the time (measured in number of comparisons), we would get a straight line.
Linear search has linear time complexity, which means that the amount of work is linearly proportional to the input size n.
Big O notation is a mathematical notation used to classify algorithms according to how their running time grows as the input size increases.
Since we concluded that linear search has linear time complexity, it is an O(n) algorithm.
Algorithms said to run in constant time are those whose time complexity does not change regardless of the size of the input. This is denoted as O(1).
For example, imagine an algorithm that returns the first element of a list. Regardless of the size of the list, the first element will always get returned, since there is no additional work required as the input size increases.
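A tiny sketch of such a constant-time operation (the function name is our own, for illustration):
def first_element(arr):
    # A single operation, no matter how many elements arr contains: O(1)
    return arr[0]
Whether the list holds 10 items or 10 million, exactly one step is performed.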
Algorithms said to run in quadratic time are those whose time complexity grows quadratically with respect to the input size. This is denoted as O(n²).
For example, consider a grid that has n rows and n columns. The aim of the algorithm is to move a token from the starting position of the grid to the end position, visiting every cell along the way. The steps are as follows:
1. Place the token on the starting cell of the grid.
2. Move the token one cell at a time, passing through every cell of the grid.
3. Stop when the token reaches the end position.
We can see different input sizes for n. The work is measured in terms of the number of moves needed until the token ends up in the final position (denoted in gray).
n | Moves
---|---
1 | 1
2 | 4
3 | 9
From observation we can see that the table follows a quadratic pattern: the number of moves is n². The time complexity of this algorithm grows quadratically with respect to the input size n, so it is an O(n²) algorithm.
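A rough sketch of this idea in Python, assuming the token simply passes through every cell of the n × n grid (the exact path used in the original example is not specified, so this is only illustrative):
def moves_to_finish(n):
    # The token visits every cell of an n x n grid, one move per cell
    moves = 0
    for row in range(n):
        for col in range(n):
            moves += 1
    return moves
for n in (1, 2, 3):
    print(n, moves_to_finish(n))  # prints 1 1, 2 4, 3 9 - matching the table above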
Sorting is a fundamental operation in computer science with wide-ranging applications across various domains.
It involves arranging data in a specific order, which implies that the nature of the data needs to be comparable.
There are several sorting algorithms in computer science; the Leaving Certificate syllabus focuses on four specific algorithms. For each algorithm, you should be able to describe how it works, trace it step by step on a small example, implement it in code, and state its time complexity.
Selection Sort
Introduction
Selection sort divides the input list into two parts: a sorted part at the left end and an unsorted part at the right end. The algorithm repeatedly selects the smallest (or largest, depending on sorting order) element from the unsorted part and swaps it with the first element of the unsorted part, thereby growing the sorted part by one element and shrinking the unsorted part by one element.
Steps
1. Start with the entire list as the unsorted part.
2. Find the smallest element in the unsorted part.
3. Swap it with the first element of the unsorted part.
4. Move the boundary between the sorted and unsorted parts one position to the right.
5. Repeat until the unsorted part is empty.
Example
Here is a visual example of selection sort, where red represents the unsorted part of the list, green represents the sorted part of the list and white represents the smallest element found at each iteration.
Implementation
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        # Assume the first unsorted element is the smallest
        min_index = i
        # Search the rest of the unsorted part for a smaller element
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap the smallest unsorted element into position i
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
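For example, calling the function on a small unsorted list (the values are our own, chosen for illustration):
numbers = [29, 10, 14, 37, 13]
print(selection_sort(numbers))  # [10, 13, 14, 29, 37]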
Time Complexity
Recall that time complexity assumes the worst case of the algorithm. For a list of size n, the first pass of the outer loop makes n − 1 comparisons, the second makes n − 2, and so on down to 1.
The total number of comparisons is (n − 1) + (n − 2) + … + 2 + 1, which equals n(n − 1)/2. This can be proved using induction, which is outside the scope of this chapter.
When deriving time complexity, we ignore constant coefficients and less significant terms.
Dropping the constant factor and the lower-order term from n(n − 1)/2 = n²/2 − n/2 leaves n², so the time complexity of selection sort is O(n²).
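As a quick sanity check, an instrumented copy of the function (ours, not part of the notes) can count the comparisons and confirm the n(n − 1)/2 figure:
def selection_sort_count(arr):
    # Same algorithm as above, but returns the number of comparisons made
    n = len(arr)
    comparisons = 0
    for i in range(n):
        min_index = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return comparisons
print(selection_sort_count(list(range(10, 0, -1))))  # 45, i.e. 10 * 9 / 2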
Insertion Sort
Introduction
Insertion sort works similarly to the way you might sort playing cards in your hands.
The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part.
Steps
1. Start with the second element of the list; the first element on its own forms the initial sorted part.
2. Store the current element as the key.
3. Compare the key with the elements in the sorted part, moving from right to left.
4. Shift each element that is greater than the key one position to the right.
5. Insert the key into the gap that has been created.
6. Repeat for each remaining element until the whole list is sorted.
Example
The visual example illustrates two stages for each iteration.
Implementation
def insertion_sort(arr):
    # Traverse from arr[1] to arr[n-1]
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted part that are greater than key
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        # Insert key into its correct position
        arr[j + 1] = key
    return arr
- The key (`key = arr[i]`) is the element to be inserted into the sorted portion of the array.
- The inner loop (`while j >= 0 and key < arr[j]`) shifts elements that are greater than the key one position to the right.
- The comparison (`key < arr[j]`) ensures that we find the correct position for the key.
- The key is then placed into its correct position (`arr[j + 1] = key`).
Time Complexity
In the worst-case scenario, the array is sorted in reverse order. This means that each element has to be compared with all previous elements, leading to the maximum number of comparisons and shifts.
The total number of comparisons made by the inner loop for all iterations of the outer loop is the sum of the first n − 1 integers: 1 + 2 + … + (n − 1) = n(n − 1)/2.
Again, this can be proved by induction, which is outside the scope of this chapter.
Insertion sort has a time complexity of O(n²).
You might have noticed that algorithms involving a loop nested inside another loop over the input tend to have time complexities of O(n²).
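To see the worst case described above in action, sorting a reverse-ordered list with the insertion_sort function above (sample values of our own) takes the maximum 5 × 4 / 2 = 10 comparisons:
print(insertion_sort([5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5], after 10 comparisons and 10 shifts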
Bubble Sort
Introduction
Bubble sort repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.
Steps
1. Start at the beginning of the list.
2. Compare the first pair of adjacent elements; if they are in the wrong order, swap them.
3. Move to the next pair and repeat until the end of the list is reached. This completes one pass, and the largest unsorted element has "bubbled" to its final position.
4. Repeat the passes, ignoring the elements already in place at the end, until a full pass is completed with no swaps.
Example
Each pass involves comparing each pair of adjacent elements. Red indicates that a pair is in the wrong order and needs to be swapped, and green indicates that a pair is in the right order and doesn't need to be swapped.
Implementation
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        # Track whether any swaps were made during this iteration
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # If no swaps were made, the array is already sorted
        if not swapped:
            break
    return arr
- The flag (`swapped`) is used to track whether any swaps were made during the current iteration; if a full pass makes no swaps, the list is already sorted and the algorithm stops early.
Time Complexity
Bubble sort has a time complexity of O(n²). Thanks to the swapped flag, an already-sorted list is detected after a single pass, giving a best case of O(n).
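One way to see the benefit of the swapped flag is to count passes with an instrumented copy of the function (our own variant, for illustration):
def bubble_sort_passes(arr):
    # Same algorithm as above, but returns how many passes were made
    n = len(arr)
    passes = 0
    for i in range(n):
        passes += 1
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break
    return passes
print(bubble_sort_passes([1, 2, 3, 4, 5]))  # 1 - already sorted, stops after one pass
print(bubble_sort_passes([5, 4, 3, 2, 1]))  # 5 - reverse order, the worst case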
Quicksort
Introduction
Quicksort is a divide-and-conquer algorithm that works by selecting a "pivot" element from an array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot.
Steps
1. Choose a pivot element from the array (in the implementation below, the last element).
2. Partition the remaining elements into two groups: those less than the pivot and those greater than the pivot.
3. Recursively apply the same process to each group.
4. Combine the sorted left group, the pivot (and any elements equal to it), and the sorted right group to form the sorted array.
5. The recursion stops when a group is empty or has one element, since such a group is already sorted.
Example
In this example, blue represents the pivot elements.
Note that the pivot doesn't necessarily have to be the last element.
Implementation
def quicksort(arr):
    # Base case: if the array is empty or has one element, it's already sorted
    if len(arr) <= 1:
        return arr
    # Choose a pivot element
    pivot = arr[len(arr) - 1]
    # Partition the array into three lists
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    # Recursively sort the left and right sub-arrays and combine the results
    return quicksort(left) + middle + quicksort(right)
- The pivot is chosen as the last element of the array for simplicity; other choices, such as the middle element (`arr[len(arr) // 2]`), also work, as noted above.
- The array is partitioned into three lists, `left`, `middle`, and `right`, based on comparison with the pivot:
  - `left` contains elements less than the pivot.
  - `middle` contains elements equal to the pivot.
  - `right` contains elements greater than the pivot.
- The function recursively sorts the `left` and `right` arrays, then combines the sorted `left`, `middle`, and `right` arrays to form the final sorted array.
Time Complexity
Quicksort has an average-case time complexity of O(n log n). In the worst case, when the chosen pivot is always the smallest or largest element, it degrades to O(n²).
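To make the partitioning step concrete, here is the first level of partitioning traced on a small sample list (the values are our own), using the quicksort function defined above:
numbers = [7, 2, 9, 4, 9, 5]
pivot = numbers[len(numbers) - 1]          # pivot = 5, the last element
print([x for x in numbers if x < pivot])   # [2, 4]      -> left
print([x for x in numbers if x == pivot])  # [5]         -> middle
print([x for x in numbers if x > pivot])   # [7, 9, 9]   -> right
print(quicksort(numbers))                  # [2, 4, 5, 7, 9, 9]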
The Leaving Certificate focuses on two searching algorithms: linear search and binary search. We have already seen linear search earlier in the chapter, including its time complexity.
Binary Search
Introduction
Binary search works on a sorted list by repeatedly dividing the search interval in half, which makes it much faster than linear search for large datasets.
Steps
1. Set two pointers: `left` at the beginning of the array and `right` at the end.
2. Calculate the midpoint `mid` between `left` and `right`.
3. If the element at `mid` equals the target, the search is finished and its index is returned.
4. If the target is less than the element at `mid`, move the `right` boundary to `mid - 1`, narrowing the search to the left half.
5. If the target is greater than the element at `mid`, move the `left` boundary to `mid + 1`, narrowing the search to the right half.
6. Repeat until the target is found or the `left` pointer surpasses the `right` pointer, indicating that the target is not present in the array.
Example
In this example, the blue represents the midpoint element.
Implementation
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = left + (right - left) // 2  # Calculate the mid-point
        # Check if the target is at the mid-point
        if arr[mid] == target:
            return mid  # Target found
        # Adjust the search range
        elif arr[mid] < target:
            left = mid + 1  # Search in the right half
        else:
            right = mid - 1  # Search in the left half
    return -1  # Target not found
- Two pointers, `left` and `right`, are initialised to the start and end of the array.
- A `while` loop continues as long as `left` is less than or equal to `right`.
- The midpoint is calculated as `left + (right - left) // 2` to prevent potential overflow issues.
- If the element at `mid` equals the target, the index `mid` is returned.
- If the target is less than the element at `mid`, the `right` boundary is updated to `mid - 1`; otherwise the `left` boundary is updated to `mid + 1`.
- If the loop ends without finding the target, -1 is returned.
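A short usage example with the binary_search function above (the list values are our own); note that the list must already be sorted for binary search to work:
sorted_numbers = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(sorted_numbers, 23))  # 5  - the index of the target
print(binary_search(sorted_numbers, 7))   # -1 - the target is not in the list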