217. Contains Duplicate
Given an integer array `nums`, return `true` if any value appears at least twice in the array, and return `false` if every element is distinct.
Example 1:
Input: nums = [1,2,3,1] Output: true
Example 2:
Input: nums = [1,2,3,4] Output: false
Example 3:
Input: nums = [1,1,1,3,3,4,3,2,4,2] Output: true
Constraints:
- `1 <= nums.length <= 10^5`
- `-10^9 <= nums[i] <= 10^9`
Solution 1 (Brute Force)
Compare each element in the array to every other element.

Time Complexity: O(n^2), since each element is compared with every other element.
Space Complexity: O(1), as no additional storage is required beyond the input.
class Solution {
public:
    bool containsDuplicate(vector<int>& nums) {
        int n = nums.size();
        // Loop through each element in the array
        for (int i = 0; i < n; i++) {
            // Compare the current element with every other element
            for (int j = i + 1; j < n; j++) {
                // If a duplicate is found, return true
                if (nums[i] == nums[j]) {
                    return true;
                }
            }
        }
        // If no duplicates are found, return false
        return false;
    }
};
With this solution you will get Time Limit Exceeded (TLE), with 70/75 test cases passed, because of its high time complexity. For each element in the array, the solution compares it against every other element, so as the size of the array n grows, the number of comparisons grows quadratically. For example, if the array has thousands of elements, the solution needs to perform millions of comparisons.
Solution 2 (Sorting)
Sort the array first. If any duplicates exist, they will be adjacent in the sorted array.
Time Complexity: O(n log n), due to the sorting step. Comparing adjacent elements takes O(n), so sorting is the dominating factor.
Space Complexity: Depends on the sorting algorithm. Mergesort requires O(n) auxiliary space; quicksort uses O(log n) for recursion; heapsort is in-place at O(1).
class Solution {
public:
    bool containsDuplicate(vector<int>& nums) {
        if (nums.empty()) return false;  // Check if the vector is empty
        sort(nums.begin(), nums.end());  // Sort the vector
        for (int i = 1; i < nums.size(); i++) {
            // Compare each element with the previous one
            if (nums[i] == nums[i - 1]) {
                return true;  // Return true if a duplicate is found
            }
        }
        return false;  // Return false if no duplicates are found
    }
};
This solution will be accepted when you submit it, and it is a more efficient and practical solution for many scenarios, especially compared to the brute force method. By sorting the array first, we reduce the problem of finding duplicates to simply checking adjacent elements, leveraging the property that all identical elements in a sorted array are contiguous. This reduces the complexity from O(n^2) in the brute force approach to O(n log n), dominated by the sorting step, making it much more scalable for larger datasets.
However, while this approach improves efficiency, it still has a couple of drawbacks:
- Modification of Input: Sorting changes the original order of elements in the array, which might not be desirable in all cases.
- Space Complexity: Depending on the sorting algorithm used (e.g., mergesort), this approach might require O(n) extra space, although in-place algorithms like heapsort (O(1)) or quicksort (O(log n) for recursion) reduce the space requirement.
Solution 3 (Hash Table)
Use a hash table (in C++, an `unordered_set`) to track elements as you iterate through the array. If an element is already in the hash table, a duplicate exists.
Time Complexity: O(n), as inserting into and searching a hash table has an average-case time complexity of O(1) per operation.
Space Complexity: O(n) in the worst case, as you potentially need to insert every element into the hash table.
class Solution {
public:
    bool containsDuplicate(vector<int>& nums) {
        unordered_set<int> seen;  // Create an unordered set to store the elements
        for (int num : nums) {    // Iterate over each element in the vector
            if (seen.count(num)) {  // Check if the element is already in the set
                return true;        // Return true if a duplicate is found
            }
            seen.insert(num);  // Insert the element into the set
        }
        return false;  // Return false if no duplicates are found
    }
};
Hashing checks for duplicates in linear time, O(n), by using a hash table to track elements as they appear. This method not only maintains the original order of the elements but also allows immediate detection of duplicates, making it highly efficient and practical, especially when quick processing is needed. Thus, this is the best of the three solutions.
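As a small refinement, `unordered_set::insert` already returns whether the value was newly inserted, so the lookup and the insert in Solution 3 can be combined into a single hash operation per element. A sketch of that variant, checked against the three examples from the problem statement:

```cpp
#include <unordered_set>
#include <vector>

// Same idea as Solution 3: insert() returns a {iterator, bool} pair,
// and the bool is false when the value was already in the set.
bool containsDuplicate(const std::vector<int>& nums) {
    std::unordered_set<int> seen;
    seen.reserve(nums.size());  // optional: pre-size to avoid rehashing
    for (int num : nums) {
        if (!seen.insert(num).second) {
            return true;  // Insertion failed, so num was seen before
        }
    }
    return false;
}
```

Against the examples above, `containsDuplicate({1,2,3,1})` and `containsDuplicate({1,1,1,3,3,4,3,2,4,2})` return `true`, while `containsDuplicate({1,2,3,4})` returns `false`.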