
An Exhaustive Tutorial on Algorithmic Patterns for LeetCode Mastery

Author
  Yinhuan Yuan


I. Foundational Algorithmic Thinking: Pattern Recognition and ROI

A. The Philosophy of Pattern-Based Problem Solving

The ability to solve complex algorithmic challenges efficiently hinges not on memorizing individual solutions, but on recognizing underlying, generalized solution templates known as algorithmic patterns. These patterns provide a standardized framework for transforming computationally expensive, often brute-force approaches into optimized solutions.
Algorithmic patterns are essential because they serve as blueprints for solving classes of problems that share a common structure, whether that structure involves sequential data processing (arrays/strings), state management (dynamic programming), or connectivity (graphs/trees). Mastery involves moving beyond nested loops, which frequently lead to O(N^2) or higher time complexity, and adopting techniques that leverage data structure properties to achieve linear O(N) or logarithmic O(\log N) performance. This strategic application of patterns yields the highest Return on Investment (ROI) in algorithmic practice, particularly for high-stakes technical interviews.

B. High-ROI Patterns: A Categorized Taxonomy

Algorithmic patterns can be categorized based on the problem domain and the underlying approach used for optimization. The patterns with the highest ROI—those appearing most frequently in challenge platforms—span linear structures, search, optimization, and graph connectivity.
Key algorithmic techniques that demonstrate superior complexity reduction include Depth First Search (DFS), Breadth First Search (BFS), Binary Search, Two Pointers, Sliding Window, Monotonic Stack, and Dynamic Programming (DP). By viewing pattern learning as a process of complexity transformation, practitioners can systematically select the template that offers the most computational leverage for a given input size N.
The following table summarizes the core patterns, their general application context, and the typical complexity achieved.
Table 1: Top Algorithmic Patterns and ROI

| Pattern Category | Core Idea | Canonical Complexity | Primary Data Structures |
|---|---|---|---|
| Two Pointers | Linear traversal to reduce O(N^2) checks to O(N). | O(N) | Arrays, Strings, Linked Lists |
| Sliding Window | Efficiently calculate metrics on contiguous subarrays/substrings. | O(N) (Amortized) | Arrays, Strings, Hash Tables |
| Dynamic Programming (DP) | Solve overlapping subproblems using stored optimal substructures. | Varies (O(N) to O(N^2)) | Arrays, Tables (Memoization/Tabulation) |
| Union-Find (DSU) | Efficiently manage disjoint sets and check connectivity. | O(\alpha(N)) (Amortized) | Graph Edges, Arrays (Parent/Rank) |
| BFS | Layered exploration for finding shortest paths (unit cost). | O(V + E) | Graphs, Queues |

C. Complexity Analysis Review: The O(N^2) \to O(N) Transformation

A cornerstone of algorithmic optimization is the reduction of time complexity from quadratic (O(N^2)) to linear (O(N)) or linearithmic (O(N \log N)). Many naive solutions, such as checking every possible subarray or iterating through every pair of elements using nested loops, naturally default to O(N^2). The power of patterns like Two Pointers and Sliding Window lies precisely in their ability to replace this internal iteration with a single, efficient operation, maintaining O(1) work per overall loop iteration, thereby achieving an overall O(N) time complexity.
Furthermore, understanding amortized complexity is crucial for analyzing patterns like Sliding Window and Union-Find. Amortized analysis considers the total cost over a sequence of operations rather than the worst-case cost of a single operation. For instance, while a single operation in a variable-sized sliding window might involve a contraction loop, the total time spent contracting over the entire array traversal is proportional to the array size N, making the overall complexity O(N) amortized time.

II. Linear Structure Traversal and Optimization (Arrays, Strings, Linked Lists)

A. The Two Pointers Paradigm

  1. Core Intuition and Mechanics The Two Pointers technique is an algorithmic approach that uses two distinct pointers (or indices) to traverse a linear data structure, such as an array, string, or linked list, simultaneously. This strategy enables comparisons and manipulations that are significantly more efficient than nested iteration, reducing complexity from O(N^2) to O(N) in many linear traversal problems. The two pointers generally serve different but complementary purposes, such as tracking a specific boundary and a current state.
  2. Strategy 1: Inward (Opposite Direction) Traversal In this approach, one pointer starts at the beginning (left) and the other starts at the end (right) of the data structure, moving inward toward the center. The pointers adjust their positions based on comparisons until a certain condition is met, or they meet/cross each other.
    • Applications: This strategy is particularly useful for problems requiring checks between elements at opposite ends, such as determining if a string is a palindrome (skipping non-alphanumeric characters), or finding pairs of elements that sum to a specific target in a sorted array (Two Sum II).
    • Complexity Context (e.g., 3Sum): For problems like 3Sum, which require finding triplets, the Two Pointers technique is instrumental in optimizing the solution. After an initial O(N \log N) sorting step, an outer loop iterates through each element, while the inner logic uses two converging pointers to find the remaining pair in O(N) time. This combined approach results in an optimal total time complexity of O(N^2).
  3. Strategy 2: Unidirectional Traversal (Same Direction) Here, both pointers start at the same end, typically the beginning, and move in the same direction. They often track different aspects of the state: one might be a "fast runner" finding information, and the other a "slow runner" keeping track of processed information.
    • Applications: Common use cases include array partitioning (e.g., separating non-zero elements), removing duplicates in place, or merging two sorted arrays.
  4. Strategy 3: Fast and Slow Pointers (Staged Traversal) This specialized variant is commonly applied to linked lists. The pointers move at different rates—the "fast runner" moves two steps, and the "slow runner" moves one step per iteration. This staged traversal allows the algorithm to relate the positions of distant nodes efficiently.
    • Applications: The primary uses are cycle detection (Floyd's Tortoise and Hare algorithm), finding the midpoint of a linked list, and locating the start of a cycle. Minimal sketches of the converging-pointer and fast/slow variants follow this list.
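The following minimal Python sketches illustrate Strategy 1 (converging pointers, as in Two Sum II) and Strategy 3 (Floyd's Tortoise and Hare); the function and class names are illustrative rather than taken from any particular platform's templates.

```python
from typing import List, Optional

class ListNode:
    def __init__(self, val: int = 0, next: "Optional[ListNode]" = None):
        self.val = val
        self.next = next

def two_sum_sorted(numbers: List[int], target: int) -> List[int]:
    """Strategy 1: converging pointers on a sorted array (Two Sum II)."""
    left, right = 0, len(numbers) - 1
    while left < right:
        current = numbers[left] + numbers[right]
        if current == target:
            return [left + 1, right + 1]  # 1-based indices, per the problem statement
        if current < target:
            left += 1   # sum too small: advance the left pointer
        else:
            right -= 1  # sum too large: retreat the right pointer
    return []

def has_cycle(head: Optional[ListNode]) -> bool:
    """Strategy 3: Floyd's Tortoise and Hare cycle detection."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next        # slow runner: one step
        fast = fast.next.next   # fast runner: two steps
        if slow is fast:        # the runners can meet only if a cycle exists
            return True
    return False
```

Because the sorted order justifies every pointer move in `two_sum_sorted`, each element is examined at most once, which is exactly the O(N^2) to O(N) transformation described above.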

B. The Sliding Window Technique (Contiguous Subsequences)

  1. Core Intuition and Complexity The Sliding Window technique is an efficient algorithmic paradigm used to solve problems involving finding the maximum, minimum, or target metric across all contiguous subarrays or substrings. The core idea is to maintain a subset of elements (the "window") that slides through the array or string. This method drastically reduces the time complexity from O(N^2) (checking all possible subarrays) to O(N) (a single linear pass). The efficiency is maintained by deriving the next state using only the current state and the elements entering or leaving the window.
  2. Type 1: Fixed-Size Window In this variant, the window size K remains constant throughout the iteration.
    • Mechanics: The window is initialized to size K. In each step, the window slides one position to the right by incrementing both the left (L) and right (R) pointers. Updating the window metric (e.g., sum or average) is an O(1) operation, as it only requires subtracting the element leaving the window and adding the element entering it.
    • Application: Finding the maximum sum of K consecutive elements, or identifying repeated DNA sequences of a fixed length (e.g., 10 characters).
  3. Type 2: Variable-Size Window This type is employed when the required size of the window is not fixed but must satisfy a specific condition (e.g., smallest subarray with sum \geq S, or longest substring without repeating characters).
    • Mechanics: The window expands by moving the right pointer (R) to include new elements until a required condition (inclusion) is met or a constraint is violated (exclusion). If the constraint is violated, the window contracts by moving the left pointer (L) until the condition is restored.
    • Auxiliary Structures: Variable-size windows typically require auxiliary data structures, such as a Hash Map or Frequency Array, to track the state of elements within the window, enabling O(1) lookups and updates.
    • Amortized Analysis: Although the window contraction step involves an internal loop, the overall process is guaranteed to run in O(N) amortized time. This linearity is achieved because the left pointer (L) and the right pointer (R) only traverse the input sequence once, moving strictly forward. Since each element is added and removed from the window state at most once, the total number of operations remains proportional to 2N, which simplifies to O(N) time complexity.
    • Canonical Example: Finding the length of the Longest Substring Without Repeating Characters, where the window shrinks upon encountering a duplicate character, using a Hash Map to efficiently track character indices (sketched below).
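A minimal sketch of this canonical example, assuming the usual LeetCode-style signature; the `last_seen` map records each character's most recent index so the left boundary can jump forward in O(1) rather than contracting one step at a time.

```python
def length_of_longest_substring(s: str) -> int:
    """Variable-size sliding window: longest substring without repeating characters."""
    last_seen = {}      # character -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        # If ch was already seen inside the current window, move left past it.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

For example, `length_of_longest_substring("abcabcbb")` returns 3 (for "abc"). Both pointers move strictly forward, so the traversal is O(N) overall.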

III. Search and Optimization Patterns

A. Recursion and Backtracking (The Decision Tree)

  1. Fundamental Principles Backtracking is a generalization of Depth-First Search (DFS) used to solve constraint satisfaction problems or combinatorial problems by exploring potential solutions incrementally. The algorithm systematically builds a candidate solution, and if at any point the candidate is deemed infeasible (it violates the problem's constraints), the algorithm abandons that path and "backtracks" to the previous decision point to try a different choice. This exhaustive but pruned search explores the implicit state-space tree.
  2. The Backtracking Mechanism The core process follows the paradigm of Choose, Explore, and Unchoose:
    • Make a Decision (Choose): At the current step (or node in the decision tree), the algorithm makes a choice that moves it forward.
    • Explore: It recursively calls itself to explore the consequences of that decision. If this path leads to a solution, it may return it.
    • Undo the Decision (Backtrack/Unchoose): If a dead end is reached, or if the algorithm needs to find all solutions, it must restore the state by undoing the last decision. This state restoration allows the algorithm to try the next viable alternative from the same parent node, ensuring all possibilities are considered systematically.
    • Applications: Backtracking is the standard approach for generating permutations, combinations, and subsets, as well as solving classical puzzle problems such as N-Queens or Sudoku solvers (see the subsets sketch below).
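As an illustration, here is a minimal subsets generator that makes the Choose/Explore/Unchoose steps explicit; the names are illustrative.

```python
from typing import List

def subsets(nums: List[int]) -> List[List[int]]:
    """Generate all subsets via Choose / Explore / Unchoose."""
    result: List[List[int]] = []
    path: List[int] = []

    def backtrack(start: int) -> None:
        result.append(path[:])      # record a copy of the current candidate
        for i in range(start, len(nums)):
            path.append(nums[i])    # Choose
            backtrack(i + 1)        # Explore
            path.pop()              # Unchoose (restore state)

    backtrack(0)
    return result
```

Calling `subsets([1, 2, 3])` yields all 8 subsets; the `path.pop()` is the state restoration that lets the same parent node try its next alternative.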

B. Dynamic Programming (DP): Memoization and Tabulation

  1. Core Criteria for Applicability Dynamic Programming (DP) is both a mathematical optimization method and an algorithmic paradigm used to solve complex problems by breaking them into simpler, overlapping subproblems. DP is applicable only if the problem exhibits two key properties:
    • Optimal Substructure: The optimal solution to the overall problem can be constructed efficiently from the optimal solutions of its subproblems.
    • Overlapping Subproblems: The recursive solution repeatedly calculates the solutions for the same subproblems.
  2. The DP Framework Solving a DP problem systematically requires defining four elements:
    • State Definition: Clearly define the parameter(s) that uniquely identify a subproblem. These parameters form the index or dimensions of the DP table/array.
    • Recurrence Relation (Transition Function): Formulate a relationship that expresses the solution to the current problem state in terms of the solutions to smaller, already solved subproblems. In optimization literature, this relationship is known as the Bellman equation.
    • Base Cases: Identify the simplest, non-recursive subproblems that can be solved directly.
    • Solution Extraction: Retrieve the final solution from the computed DP table or cache.
  3. Approach 1: Top-Down DP (Memoization) Memoization is a top-down approach where the algorithm starts at the main problem and uses recursion to break it down.
    • Mechanism: To avoid redundant computation, the results of expensive function calls (subproblems) are stored in a cache or dictionary (memo). Before computing a state, the algorithm first checks the cache; if the result exists, it is returned immediately.
    • Advantage: This approach is often easier to conceptualize as it follows the problem's natural recursive structure.
  4. Approach 2: Bottom-Up DP (Tabulation) Tabulation is a bottom-up, iterative approach where the DP table is constructed starting from the base cases and systematically building up to the final required solution.
    • Mechanism: The solution is built iteratively using the defined recurrence relation to fill the DP table, ensuring that all necessary dependencies (smaller subproblems) are computed before they are needed for larger problems.
    • Advantage: This method avoids the function call overhead associated with recursion and eliminates the risk of stack overflow for problems with very large input size N.
  5. Advanced Techniques: Space Optimization In many DP problems, particularly those involving sequences (like Fibonacci, Longest Increasing Subsequence, or House Robber), the calculation of the current state DP[i] often depends only on a fixed, small number of preceding states (e.g., DP[i-1] and DP[i-2]). In such cases, the space complexity, which might initially be O(N) or O(N^2) for the DP table, can be optimized to O(1) by simply storing the few previous variables necessary for the transition, rather than the entire table.
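A minimal sketch contrasting the two approaches on the House Robber recurrence, dp(i) = max(dp(i+1), nums[i] + dp(i+2)); the tabulated version also demonstrates the O(1) space optimization, since each state depends only on the two preceding ones.

```python
from functools import lru_cache
from typing import List

def rob_memo(nums: List[int]) -> int:
    """Top-down DP (memoization): state i = best loot from houses i..end."""
    @lru_cache(maxsize=None)
    def dp(i: int) -> int:
        if i >= len(nums):          # base case: no houses left
            return 0
        # Recurrence: skip house i, or rob it and skip house i+1.
        return max(dp(i + 1), nums[i] + dp(i + 2))
    return dp(0)

def rob_tab(nums: List[int]) -> int:
    """Bottom-up DP with O(1) space: only the last two states are kept."""
    prev2 = prev1 = 0               # base cases
    for value in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + value)
    return prev1
```

Both return 12 for `[2, 7, 9, 3, 1]`; the tabulated version trades the recursion's call-stack overhead for two rolling variables.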

IV. Graph, Tree, and Connectivity Structures

A. Depth-First Search (DFS) in Trees and Graphs

  1. Mechanism and Applications DFS is an exploration technique that traverses deeply along a single path until it reaches an end (a leaf node or a visited node) before backtracking. It is naturally implemented using recursion (which implicitly uses the call stack) or an explicit stack data structure. DFS is favored when the goal is an exhaustive search, checking connectivity, finding all paths, or traversing an implied decision tree (as in backtracking).
  2. Tree Traversal Orders When applied to trees, particularly binary trees, DFS defines specific traversal orders:
    • Pre-order (Root, Left, Right): The node is visited before its children. Useful for creating a copy or performing an operation before processing descendants.
    • In-order (Left, Root, Right): Primarily meaningful for Binary Search Trees (BSTs), as it visits the nodes in sorted (non-decreasing) order.
    • Post-order (Left, Right, Root): The node is visited only after all its subtrees have been traversed. Useful for operations like node deletion or evaluating expression trees bottom-up.

Table 2: DFS Traversal Orders

| Traversal Order | Visit Rule | Typical Use Case | Generalizes To |
|---|---|---|---|
| Pre-order | Node, Left, Right | Creating a copy or deep cloning of the tree structure. | N-ary trees |
| In-order | Left, Node, Right | Visiting elements in sorted order (for BSTs). | Binary Trees only |
| Post-order | Left, Right, Node | Deleting nodes or evaluating expressions bottom-up. | N-ary trees |
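A minimal recursive sketch of these orders, using an illustrative TreeNode class (not tied to any particular platform's definition):

```python
from typing import List, Optional

class TreeNode:
    def __init__(self, val: int = 0,
                 left: "Optional[TreeNode]" = None,
                 right: "Optional[TreeNode]" = None):
        self.val = val
        self.left = left
        self.right = right

def inorder(root: Optional[TreeNode], out: List[int]) -> List[int]:
    """Left, Node, Right: on a BST this emits values in sorted order.

    Moving out.append(root.val) before both recursive calls gives pre-order;
    moving it after both gives post-order.
    """
    if root is not None:
        inorder(root.left, out)
        out.append(root.val)
        inorder(root.right, out)
    return out
```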

B. Breadth-First Search (BFS) and Shortest Paths

  1. Mechanism and Core Strength BFS explores a graph layer by layer, visiting all neighbors at distance k from the starting node before moving to nodes at distance k+1. It is implemented using a queue data structure. The fundamental advantage of BFS is that, for unweighted graphs (where all edges have a unit cost), the first time a node is discovered, its distance from the root node is guaranteed to be the shortest path.
    • Applications: BFS is the definitive algorithm for finding the shortest path in unweighted graphs, solving maze problems, and performing level-order traversal of trees. The complexity is O(V + E), where V is the number of vertices and E is the number of edges.
  2. Advanced Variant: Multi-Source BFS This variation is utilized when the objective is to find the minimum distance from any of a set of starting nodes to all other nodes.
    • Initialization: Instead of starting with a single root, all source nodes are added to the queue during initialization.
    • Efficiency: This is computationally more efficient than running multiple single-source BFS algorithms. It ensures that the shortest path found to any target node is indeed from the closest possible source.
    • Canonical Example: Solving the 01 Matrix problem, where the distance to the nearest '0' cell must be calculated for every cell.
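A minimal multi-source BFS sketch for the 01 Matrix problem; every '0' cell is enqueued as a source before the standard BFS loop begins, so each cell's first visit records its distance to the nearest source.

```python
from collections import deque
from typing import List

def update_matrix(mat: List[List[int]]) -> List[List[int]]:
    """Multi-source BFS: distance from every cell to its nearest 0."""
    rows, cols = len(mat), len(mat[0])
    dist = [[-1] * cols for _ in range(rows)]   # -1 marks unvisited cells
    queue = deque()
    # Seed the queue with all sources (the '0' cells) at distance 0.
    for r in range(rows):
        for c in range(cols):
            if mat[r][c] == 0:
                dist[r][c] = 0
                queue.append((r, c))
    # Standard BFS: the first discovery of a cell yields its shortest distance.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```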

C. Topological Sort (Directed Acyclic Graphs - DAGs)

  1. Prerequisites and Definition Topological Sort is an algorithm that produces a linear ordering of vertices in a Directed Acyclic Graph (DAG) such that for every directed edge from vertex U to vertex V, U must appear before V in the ordering. It is fundamental for sequencing tasks or dependencies. If a graph contains a cycle, a topological sort is impossible.
  2. Kahn’s Algorithm (BFS-based) This method is generally preferred due to its simplicity and inherent cycle detection capability.
    • Approach: It calculates the indegree (the number of incoming edges) for every vertex. All vertices with an indegree of 0 are initialized in a queue. As nodes are removed from the queue, their outgoing edges are processed, and the indegree of their neighbors is decremented. If a neighbor's indegree drops to 0, it is added to the queue.
    • Cycle Detection: If the resulting ordering contains fewer than V vertices, a cycle exists (see the sketch after this list).
  3. DFS Approach The DFS-based topological sort uses a stack. A node is pushed onto the stack only after all of its dependent nodes (those reachable from it) have been fully explored. Reading the stack contents from top to bottom yields the topological ordering.
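A minimal sketch of Kahn's algorithm; the edge-list input format here is an assumption chosen for illustration.

```python
from collections import deque
from typing import List

def topological_sort(num_vertices: int, edges: List[List[int]]) -> List[int]:
    """Kahn's algorithm: BFS ordered by indegree, with built-in cycle detection."""
    adj: List[List[int]] = [[] for _ in range(num_vertices)]
    indegree = [0] * num_vertices
    for u, v in edges:              # directed edge u -> v
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in range(num_vertices) if indegree[v] == 0)
    order: List[int] = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1        # "remove" the edge u -> v
            if indegree[v] == 0:
                queue.append(v)
    # Fewer than num_vertices processed means a cycle blocked some vertices.
    return order if len(order) == num_vertices else []
```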

D. Union-Find (Disjoint Set Union - DSU)

  1. Core Purpose and Operations The Disjoint Set Union (DSU) or Union-Find data structure manages a collection of elements partitioned into non-overlapping sets. It is the optimal pattern for dynamic connectivity problems in undirected graphs.
    • Find(x): Returns the representative (or ultimate parent/root) of the set containing element x. This allows efficient checking of whether two elements belong to the same set: a and b are connected if Find(a) == Find(b).
    • Union(x, y): Merges the two sets containing elements x and y.
  2. Critical Optimizations for Performance To avoid the worst-case scenario of O(N) time for a single operation (which occurs when the structure degrades into a single linear chain), two essential structural optimizations are implemented:
    • Path Compression (Optimization for Find): During the Find(x) operation, this technique flattens the tree by recursively pointing every visited node directly to the set's root element. This significantly shortens the path length for all future operations involving these nodes.
    • Union by Rank or Size (Optimization for Union): When merging two sets, the algorithm ensures that the smaller tree structure is always attached beneath the root of the larger structure (based on rank or size/weight). This prevents the tree height from growing excessively, maintaining a balanced forest structure.
  3. Near-Constant Complexity With both Path Compression and Union by Rank/Size implemented, the amortized time complexity for both Find and Union operations approaches near-constant time, specifically O(\alpha(N)), where \alpha(N) is the inverse Ackermann function. This function grows so slowly that, for all practical input sizes, it is considered essentially constant, making DSU asymptotically optimal and highly efficient for problems like Kruskal's Minimum Spanning Tree algorithm and connected component tracking.
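A minimal DSU sketch implementing both optimizations discussed above:

```python
class DSU:
    """Disjoint Set Union with path compression and union by rank."""

    def __init__(self, n: int):
        self.parent = list(range(n))    # each element starts as its own root
        self.rank = [0] * n

    def find(self, x: int) -> int:
        # Path compression: point x (and its ancestors) directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x: int, y: int) -> bool:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                # already in the same set
        # Union by rank: attach the shorter tree beneath the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

With both optimizations in place, any sequence of Find/Union calls runs in O(\alpha(N)) amortized time per operation.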

V. Specialized Structures and Constraint-Based Patterns

A. Binary Search and Monotonicity

  1. Standard and Modified Binary Search Binary search is a logarithmic time O(\log N) search technique applicable only to sorted data. It rapidly halves the search space in each step. The complexity of the standard search is O(\log N) time and O(1) space.
  2. Binary Search on the Answer (Modified Binary Search) The more sophisticated application of this pattern is not searching for an element within a sorted array, but searching for the optimal answer within a defined range of possible solutions. This technique is applicable only when the feasibility function f(X) exhibits monotonicity.
    • Monotonic Property: If a candidate answer X is valid (f(X) = true), then all potential answers Y on one side of X (e.g., Y \leq X for finding the maximum valid answer) must also be valid.
    • Mechanism: An initial search space [L, R] of possible solution values is defined. A midpoint X_{mid} is tested using a problem-specific verification function f(X_{mid}). If f(X_{mid}) is true, X_{mid} is recorded as a possible solution, and the search space is narrowed toward the optimal direction (e.g., L = X_{mid} + 1 when maximizing). If false, the search space is narrowed away from X_{mid}. This process transforms a complex optimization problem into a sequence of rapid O(\log N) existence checks.
    • Applications: Minimizing the maximum value, maximizing the minimum value, finding bounds (e.g., Koko Eating Bananas, or finding the largest integer n such that n^2 < X).
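A minimal "binary search on the answer" sketch for Koko Eating Bananas; the feasibility check and bounds follow directly from the monotonicity argument above.

```python
from typing import List

def min_eating_speed(piles: List[int], h: int) -> int:
    """Smallest eating speed k such that all piles finish within h hours.

    feasible(k) is monotonic: if speed k suffices, so does any faster speed,
    so the minimal feasible k can be found by binary search over [1, max(piles)].
    """
    def feasible(k: int) -> bool:
        # Integer ceiling of p / k, summed over all piles.
        return sum((p + k - 1) // k for p in piles) <= h

    lo, hi = 1, max(piles)          # the answer must lie in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid                # mid works: try slower speeds
        else:
            lo = mid + 1            # mid fails: need a faster speed
    return lo
```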

B. Monotonic Stack and Queue

  1. Definition and Purpose A Monotonic Stack (or Queue) is a variation of the standard structure in which the stored elements are kept in either non-increasing or non-decreasing order. This property is maintained by selectively popping elements that violate the monotonicity when a new element is introduced. The primary purpose of this pattern is to find the "Next Greater Element" (NGE), "Next Smaller Element," or related structural dependencies for every element in a sequence.
  2. Linear Time Mechanism The elegance of the Monotonic Stack lies in its O(N) time complexity. When traversing the array, the stack holds indices or values that are "waiting" to find their NGE. If the current element A[i] is greater than the element at the top of the stack, the top element is popped because A[i] is now its NGE. This popping continues until the stack’s monotonic property is satisfied. Since every element is pushed onto the stack at most once and popped from the stack at most once, the total time complexity remains linear, O(N).
    • Applications: Problems such as Daily Temperatures, finding the Next Greater Element, and calculating the largest rectangle in a histogram.
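A minimal monotonic-stack sketch for Daily Temperatures; the stack holds indices of days still waiting for a warmer temperature.

```python
from typing import List

def daily_temperatures(temps: List[int]) -> List[int]:
    """For each day, the number of days until a warmer temperature (0 if none)."""
    answer = [0] * len(temps)
    stack: List[int] = []           # indices with strictly decreasing temperatures
    for i, t in enumerate(temps):
        # Pop every colder day: temps[i] is its Next Greater Element.
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            answer[j] = i - j
        stack.append(i)
    return answer
```

Each index is pushed once and popped at most once, so despite the inner while loop the total work is O(N).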

C. Heap Data Structure (Priority Queue)

  1. Canonical Use Case: Top 'K' Elements Heaps, implemented as Priority Queues, are the most efficient data structure for solving problems that require finding the K largest or K smallest elements in a dataset or stream (the Top 'K' pattern).
  2. Mechanism for Top K Largest To find the K largest elements, a Min-Heap of size K is employed. The algorithm iterates through the input data:
    • If the heap size is less than K, the element is pushed onto the heap.
    • If the heap is full, the new element is compared against the heap's root (which is the smallest element currently in the top K).
    • If the new element is larger than the root, the root is popped (discarding the current smallest among the top K), and the new, larger element is pushed. This maintains the invariant that the heap contains the K largest elements seen so far.
  3. Complexity: Since there are N elements to process, and each insertion/deletion takes O(\log K) time (due to the heap property maintenance), the overall time complexity is O(N \log K). This is significantly better than sorting the entire array, which would require O(N \log N) time, especially when K is much smaller than N.
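A minimal Top-K sketch using Python's heapq module (a min-heap), maintaining the invariant described above:

```python
import heapq
from typing import List

def k_largest(nums: List[int], k: int) -> List[int]:
    """Return the k largest elements using a min-heap of size k."""
    heap: List[int] = []
    for x in nums:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            # New element beats the current k-th largest: swap it for the root.
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)
```

For a stream, the same loop works unchanged, since the heap never grows beyond K elements regardless of how many values arrive.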

D. Interval Management (Merge Intervals Pattern)

  1. Problem Scope and Strategy The Merge Intervals pattern is used to handle problems involving overlapping ranges, such as time slots, scheduling conflicts, or intervals on a number line. The effectiveness of this pattern relies heavily on data preprocessing.
  2. Core Requirement: The optimal strategy demands that the intervals first be sorted by their starting coordinate or time. This initial sorting step takes O(N \log N) time and is the dominant factor in the overall complexity.
  3. Merging Process: Once sorted, the merging process requires only a single linear pass (O(N)). The algorithm iterates through the sorted list, comparing the current interval with the last merged interval. If they overlap, the last merged interval is updated (merged); otherwise, the current interval is added as a new, distinct interval. The total time complexity remains O(N \log N).
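A minimal merge-intervals sketch; sorting dominates at O(N \log N), after which a single linear pass suffices.

```python
from typing import List

def merge_intervals(intervals: List[List[int]]) -> List[List[int]]:
    """Sort by start coordinate, then merge overlapping intervals in one pass."""
    intervals.sort(key=lambda iv: iv[0])        # O(N log N), sorts in place
    merged: List[List[int]] = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            # Overlaps the last merged interval: extend its right edge.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged
```

For example, `merge_intervals([[1, 3], [2, 6], [8, 10]])` returns `[[1, 6], [8, 10]]`.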

VI. Conclusion: Integrating Patterns into a Study Strategy

Achieving fluency in competitive programming requires transitioning from solving problems ad-hoc to systematic pattern recognition. The fundamental objective behind mastering the patterns detailed above—Two Pointers, Sliding Window, DP, and Union-Find—is the ability to recognize and execute sophisticated complexity transformations, usually from O(N^2) to O(N) or better. The choice between seemingly similar graph traversals, such as DFS and BFS, must be dictated by the problem's goal: BFS is structurally necessary for minimum step/shortest unweighted path problems, whereas DFS/Backtracking is ideal for exhaustive state exploration. Similarly, specialized patterns like Monotonic Stack and Binary Search on the Answer provide necessary O(N) and O(\log N) optimizations in constrained array scenarios that brute force cannot address.
For advanced connectivity problems, the superior, near-constant performance of the Union-Find structure is entirely dependent on rigorously implementing the performance boosters (Path Compression and Union by Rank/Size). This illustrates that pattern mastery extends beyond basic implementation to the nuanced understanding of underlying optimization mechanics and complexity guarantees. Systematic practice focusing on these core templates, ensuring full understanding of their complexity characteristics and applicability criteria, is the most direct route to algorithmic proficiency.
