algo-en
  • Introduction
  • I. Dynamic Programming
    • Dynamic Programming in Details
    • Classic DP: Edit Distance
    • Classic DP: Super Egg
    • Classic DP: Super Egg(Advanced Solution)
    • Classic DP: Longest Common Subsequence
    • Classic DP: Game Problems
    • Regular Expression
    • The Strategies of Subsequence Problem
    • Greedy: Interval Scheduling
    • 4 Keys Keyboard
    • What is DP Optimal Substructure
    • Longest Increasing Subsequence
    • KMP Algorithm In Detail
    • House Robber Problems
    • Stock Buy and Sell Problems
  • II. Data Structure
    • Binary Heap and Priority Queue
    • LRU Cache Strategy in Detail
    • Collections of Binary Search Operations
    • Special Data Structure: Monotonic Stack
    • Design Twitter
    • Reverse Part of Linked List via Recursion
    • What's the Best Algo Book
    • Queue Implement Stack/Stack implement Queue
    • Framework for learning data structures and algorithms
  • III. Algorithmic thinking
    • My Way to Learn Algorithm
    • The Framework for Backtracking Algorithm
    • Binary Search in Detail
    • The Tech of Double Pointer
    • The Key Concept of TwoSum Problems
    • Divide Complicated Problem: Implement a Calculator
    • Prefix Sum Skill
    • FloodFill Algorithm in Detail
    • Interval Scheduling: Interval Merging
    • Interval Scheduling: Intersections of Intervals
    • String Multiplication
    • Pancake Sorting Algorithm
    • Sliding Window Algorithm
    • Some Useful Bit Manipulations
    • Russian Doll Envelopes Problem
    • Recursion In Detail
    • Backtracking Solve Subset/Permutation/Combination
    • Several counter-intuitive Probability Problems
    • Shuffle Algorithm
  • IV. High Frequency Interview Problem
    • How to Implement LRU Cache
    • How to Find Prime Number Efficiently
    • How to Calculate Minimum Edit Distance
    • How to Solve Drop Water Problem
    • How to Remove Duplicate From Sorted Sequence
    • How to Find Longest Palindromic Substring
    • How to Reverse Linked List in K Group
    • How to Check the Validation of Parenthesis
    • How to Find Missing Element
    • How to Pick Elements From an Arbitrary Sequence
    • How to use Binary Search
    • How to Schedule Seats
    • Union-Find Algorithm in Detail
    • Union-Find Application
    • Find Subsequence With Binary Search
    • Problems can be solved by one line
    • How to Find Duplicate and Missing Element
    • How to Check Palindromic LinkedList
  • V. Common Knowledge
    • Difference Between Process and Thread in Linux
    • You Must Know About Linux Shell
    • You Must Know About Cookie and Session
    • Cryptology Algorithm
    • Some Good Online Practice Platforms

How to Remove Duplicate From Sorted Sequence

Translator: Hi_archer

Author: labuladong

We know that for arrays, it is efficient to insert and delete elements at the end, with a time complexity of O(1). However, inserting or deleting an element in the middle or at the beginning requires shifting a lot of data, which costs O(N).

Therefore, for general algorithm problems dealing with arrays, we should operate on the elements at the end of the array as much as possible to avoid extra time complexity.
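
As a rough illustration (a sketch that is not part of the original article; removeFromMiddle is a hypothetical helper), deleting the element at position index forces every later element to shift one slot to the left, which is exactly where the O(N) cost comes from:

void removeFromMiddle(int[] nums, int n, int index) {
    // every element after `index` shifts one slot to the left: O(N) moves
    for (int i = index + 1; i < n; i++) {
        nums[i - 1] = nums[i];
    }
    // the logical length is now n - 1; deleting the last element (index == n - 1)
    // skips the loop entirely, which is why removing at the end is O(1)
}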

This article is about the classic problem Remove Duplicates from Sorted Array.

Obviously, since the array is sorted, the duplicate elements must be adjacent to each other, so it is not difficult to find them. But if you delete each duplicate element as soon as you find it, you end up deleting from the middle of the array, and the total time complexity becomes $O(N^2)$. Moreover, the problem requires us to do this in-place, modifying the input array with $O(1)$ extra memory.

In fact, for array-related algorithm problems there is a general technique: try to avoid deleting elements from the middle; instead, find a way to move them to the end. That way, the elements to be deleted are dragged to the end of the array, and the time complexity of a single deletion drops to $O(1)$.

From this idea we can derive a common way to solve such problems: the two-pointer technique, or more specifically, the fast and slow pointer technique.

We let the slow pointer slow stay behind while the fast pointer fast runs ahead to scout. Whenever fast finds a unique element (one different from nums[slow]), we advance slow and copy that element to nums[slow]. In this way, once fast has traversed the entire array nums, nums[0..slow] contains only unique elements, and everything after that index is duplicate values we no longer need.

int removeDuplicates(int[] nums) {
    int n = nums.length;
    // edge case: an empty array has nothing to deduplicate
    if (n == 0) return 0;
    int slow = 0, fast = 1;
    while (fast < n) {
        if (nums[fast] != nums[slow]) {
            slow++;
            // maintain the invariant: nums[0..slow] contains no duplicates
            nums[slow] = nums[fast];
        }
        fast++;
    }
    // the length is the last index plus 1
    return slow + 1;
}

Let's trace the process of the algorithm on a small example.
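
Below is a minimal driver (a hypothetical snippet, not part of the original solution, assuming the removeDuplicates method above is in scope) that runs the algorithm on a typical input:

int[] nums = {0, 0, 1, 1, 1, 2, 2, 3, 3, 4};
int len = removeDuplicates(nums);
// len == 5, and nums[0..len-1] holds the de-duplicated prefix
System.out.println(len);                  // 5
for (int i = 0; i < len; i++) {
    System.out.print(nums[i] + " ");      // 0 1 2 3 4
}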

Extending this briefly: how do we remove duplicates from a sorted linked list? It is essentially the same as with an array; the only difference is that the array assignment becomes a pointer operation:

ListNode deleteDuplicates(ListNode head) {
    if (head == null) return null;
    ListNode slow = head, fast = head.next;
    while (fast != null) {
        if (fast.val != slow.val) {
            // nums[slow] = nums[fast];
            slow.next = fast;
            // slow++;
            slow = slow.next;
        }
        // fast++
        fast = fast.next;
    }
    // disconnect from the trailing duplicate elements
    slow.next = null;
    return head;
}
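
To try it out, here is a quick check (a hypothetical snippet that is not part of the original article; it assumes the standard LeetCode-style ListNode class with an int val field, a ListNode next pointer, and a ListNode(int val) constructor):

// build the sorted list 1 -> 1 -> 2 -> 3 -> 3
ListNode head = new ListNode(1);
head.next = new ListNode(1);
head.next.next = new ListNode(2);
head.next.next.next = new ListNode(3);
head.next.next.next.next = new ListNode(3);

// de-duplicate and print the result: 1 2 3
for (ListNode p = deleteDuplicates(head); p != null; p = p.next) {
    System.out.print(p.val + " ");
}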