- Master the syntax and usage patterns for Java's core data structures including arrays, strings, collections, and trees
- Understand time and space complexity trade-offs when choosing between different data structure implementations
- Apply appropriate data structures to solve common algorithmic patterns like two-pointers, sliding window, and graph traversals
- Write efficient, idiomatic Java code using the Collections Framework and standard library utilities
Every Java developer hits that moment during an interview or coding challenge where they blank on the exact syntax for a
PriorityQueue comparator or forget whether HashMap's getOrDefault takes the key or value first. You know the concept
cold—you just can't remember the exact method signature when the pressure's on.
This cheat sheet exists because memorizing every API detail is impossible, but having quick reference to the patterns you actually use matters. Think of this as your technical companion for interviews and rapid development—the syntax you reach for when implementing solutions, not the theory behind why these structures work. We're covering primitives through graphs, with executable code examples that follow Java conventions you can actually compile and run.
The goal isn't encyclopedic coverage. It's giving you the specific methods, patterns, and gotchas that show up
repeatedly in algorithm problems. Each section includes working code with proper imports, time complexity notes, and the
practical details that textbooks skip—like how Arrays.binarySearch returns negative values for missing elements, or
why you'd choose computeIfAbsent over manual null checks.
Java's eight primitive types form the foundation, but collections require their wrapper classes through autoboxing:
| Primitive | Size | Wrapper | Range |
|---|---|---|---|
| byte | 1 byte | Byte | -128 to 127 |
| short | 2 bytes | Short | -32,768 to 32,767 |
| int | 4 bytes | Integer | -2³¹ to 2³¹-1 |
| long | 8 bytes | Long | -2⁶³ to 2⁶³-1 |
| float | 4 bytes | Float | single-precision |
| double | 8 bytes | Double | double-precision |
| char | 2 bytes | Character | Unicode characters |
| boolean | 1 bit* | Boolean | true/false |
*Actual storage is JVM-dependent, often a full byte.
Autoboxing converts primitives to objects automatically when using collections, but it carries performance overhead. For tight loops processing millions of integers, primitive arrays outperform ArrayList significantly.
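To see the trade-off concretely, here's a minimal sketch (the class name and loop size are ours): summing a primitive array reads values directly, while the boxed list allocates wrapper objects on write and unboxes on every read.

```java
package blog.academy.javapro;

import java.util.ArrayList;
import java.util.List;

public class BoxingOverhead {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] primitives = new int[n];
        List<Integer> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            primitives[i] = i;
            boxed.add(i); // Autoboxing: wraps each int in an Integer object
        }

        long primitiveSum = 0;
        for (int value : primitives) {
            primitiveSum += value; // Direct reads, no object indirection
        }

        long boxedSum = 0;
        for (int value : boxed) {
            boxedSum += value; // Unboxes an Integer on every iteration
        }
        System.out.println(primitiveSum == boxedSum); // true, but the boxed loop does more work
    }
}
```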
Arrays provide contiguous memory with O(1) access but fixed size after creation. They're your default choice for problems with known dimensions or when you need guaranteed constant-time indexing.
```java
package blog.academy.javapro;
import java.util.Arrays;
public class ArrayOperations {
public static void main(String[] args) {
// Declaration and initialization
int[] numbers = {5, 3, 8, 1, 9};
int[][] matrix = new int[10][20]; // 10 rows, 20 columns
// Essential utility methods
Arrays.sort(numbers); // [1, 3, 5, 8, 9]
System.out.println(Arrays.toString(numbers));
int index = Arrays.binarySearch(numbers, 5); // 2 (must be sorted)
System.out.println("Found 5 at index: " + index);
// binarySearch returns -(insertion point) - 1 when the key is missing
int missing = Arrays.binarySearch(numbers, 4);
System.out.println("Search for 4: " + missing); // -3
// Fill and copy operations
Arrays.fill(numbers, 7);
int[] extended = Arrays.copyOf(numbers, 10); // Pads with zeros
// Two-dimensional iteration
for (int i = 0; i < matrix.length; i++) {
for (int j = 0; j < matrix[i].length; j++) {
matrix[i][j] = i * j;
}
}
}
}
```

Arrays shine in two-pointer problems and when you need predictable memory layout. Their fixed size becomes a limitation when dealing with dynamic data—that's where ArrayList steps in.
| Operation | Time Complexity |
|---|---|
| Access | O(1) |
| Search (unsorted) | O(n) |
| Search (sorted, binary) | O(log n) |
| Insert/Delete | O(n) |
Strings in Java are immutable objects backed by the string pool for literals. Every concatenation creates a new String object, making repeated operations expensive. StringBuilder solves this by providing a mutable sequence of characters that you can modify in place without creating new objects each time.
```java
package blog.academy.javapro;
import java.util.Arrays;
public class StringManipulation {
public static void main(String[] args) {
String text = "  Hello World  "; // Two leading and two trailing spaces
// Common operations
char first = text.charAt(2); // 'H'
int length = text.length();
String sub = text.substring(2, 7); // "Hello"
// Conversion to array for manipulation
char[] chars = text.toCharArray();
Arrays.sort(chars); // Modifying a copy
// Splitting and joining
String csv = "apple,banana,orange";
String[] fruits = csv.split(",");
String joined = String.join(" | ", fruits);
System.out.println(joined); // apple | banana | orange
// Case and whitespace
String cleaned = text.trim().toLowerCase();
boolean matches = cleaned.equals("hello world");
// Search operations
int pos = text.indexOf("World"); // 8
int lastPos = text.lastIndexOf("l"); // 11
// StringBuilder for efficient concatenation
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000; i++) {
sb.append(i).append(",");
}
String result = sb.toString();
// Useful StringBuilder methods
sb = new StringBuilder("Java");
sb.insert(4, " Programming"); // "Java Programming"
sb.delete(4, 16); // "Java"
sb.reverse(); // "avaJ"
}
}
```

The string pool optimization means "hello" == "hello" can be true for literals, but always use .equals() for comparison. Reference equality with == checks object identity, not content. Use String for read-only or lightly modified text, and StringBuilder when you need many modifications like appending or reversing in loops.
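A minimal sketch of that identity-versus-content distinction (the class name is ours):

```java
package blog.academy.javapro;

public class StringEquality {
    public static void main(String[] args) {
        String a = "hello";
        String b = "hello";             // Same pooled literal as a
        String c = new String("hello"); // Explicitly creates a new heap object

        System.out.println(a == b);      // true: both point at the pooled instance
        System.out.println(a == c);      // false: different objects, same content
        System.out.println(a.equals(c)); // true: compares content, which is what you want
    }
}
```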
| Operation | String | StringBuilder |
|---|---|---|
| Concatenation | O(n) per operation | O(1) amortized |
| Memory | Creates new object | Modifies in place |
| Thread-safe | Yes (immutable) | No |
| Use case | Keys, constants | Building strings in loops |
HashMap provides average O(1) access through hash-based lookups. It's the workhorse for frequency counting, caching, and two-sum patterns. No duplicate keys are allowed—each key maps to exactly one value.
```java
package blog.academy.javapro;
import java.util.HashMap;
import java.util.Map;
public class HashMapPatterns {
public static void main(String[] args) {
Map<String, Integer> frequencies = new HashMap<>();
// Basic operations
frequencies.put("apple", 3);
frequencies.put("banana", 2);
// getOrDefault prevents null checks
int count = frequencies.getOrDefault("orange", 0);
frequencies.put("orange", count + 1);
// computeIfAbsent for lazy initialization
frequencies.computeIfAbsent("grape", k -> 0);
// computeIfPresent for conditional updates
frequencies.computeIfPresent("apple", (k, v) -> v + 1);
// Iteration patterns
for (Map.Entry<String, Integer> entry : frequencies.entrySet()) {
System.out.println(entry.getKey() + ": " + entry.getValue());
}
// Keys and values separately
for (String key : frequencies.keySet()) {
System.out.println("Key: " + key);
}
for (Integer value : frequencies.values()) {
System.out.println("Value: " + value);
}
// Containment checks
boolean hasApple = frequencies.containsKey("apple");
boolean hasValue2 = frequencies.containsValue(2);
// Replace operations
frequencies.replace("banana", 5);
frequencies.replace("apple", 4, 10); // Only if current value is 4
}
}
```

HashMap uses each key's hashCode() and equals() for comparison. Mutable objects as keys are dangerous—if you modify a key after insertion, you'll never find it again. Hash collisions can degrade worst-case time to O(n); Java 8+ mitigates this by converting heavily collided buckets into balanced trees, capping lookups at O(log n) for comparable keys.
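Here's a minimal sketch of that mutable-key trap (class name and values are ours): mutating the key changes its hashCode, so lookups probe the wrong bucket.

```java
package blog.academy.javapro;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MutableKeyTrap {
    public static void main(String[] args) {
        Map<List<Integer>, String> map = new HashMap<>();
        List<Integer> key = new ArrayList<>(Arrays.asList(1, 2));
        map.put(key, "value");

        key.add(3); // Mutating the key changes its hashCode

        System.out.println(map.get(key));         // null: hashes to a different bucket now
        System.out.println(map.containsKey(key)); // false, though the entry is still inside
    }
}
```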
| Operation | Average Time | Worst Case |
|---|---|---|
| Get/Put/Remove | O(1) | O(n) |
| ContainsKey | O(1) | O(n) |
| Iteration | O(n) | O(n) |
HashSet wraps HashMap internally, storing elements as keys with a dummy value. Use it for deduplication and membership testing. It doesn't maintain insertion order—if you need that, use LinkedHashSet instead.
```java
package blog.academy.javapro;
import java.util.HashSet;
import java.util.Set;
import java.util.Arrays;
public class HashSetOperations {
public static void main(String[] args) {
Set<String> unique = new HashSet<>();
// Adding elements
unique.add("first");
unique.add("second");
unique.add("first"); // Ignored, already present
System.out.println(unique.size()); // 2
// Set operations
Set<Integer> setA = new HashSet<>(Arrays.asList(1, 2, 3, 4));
Set<Integer> setB = new HashSet<>(Arrays.asList(3, 4, 5, 6));
Set<Integer> union = new HashSet<>(setA);
union.addAll(setB); // {1, 2, 3, 4, 5, 6}
Set<Integer> intersection = new HashSet<>(setA);
intersection.retainAll(setB); // {3, 4}
Set<Integer> difference = new HashSet<>(setA);
difference.removeAll(setB); // {1, 2}
// Existence check
boolean contains = unique.contains("second");
}
}
```

| Operation | Average Time |
|---|---|
| Add/Remove | O(1) |
| Contains | O(1) |
| Iteration | O(n) |
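When the insertion order mentioned above matters, LinkedHashSet is the drop-in alternative. A quick sketch (the class name is ours; the HashSet output shown is one possibility, not a guarantee):

```java
package blog.academy.javapro;

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetOrdering {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        Set<String> linkedSet = new LinkedHashSet<>();
        for (String fruit : new String[]{"banana", "apple", "cherry"}) {
            hashSet.add(fruit);
            linkedSet.add(fruit);
        }
        System.out.println(hashSet);   // Undefined order, e.g. [banana, cherry, apple]
        System.out.println(linkedSet); // Insertion order: [banana, apple, cherry]
    }
}
```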
ArrayList provides resizable arrays with amortized O(1) append. It's the default List implementation unless you need
frequent insertions at arbitrary positions. The backing array grows by 50% when capacity is exceeded, so pre-size
with new ArrayList<>(capacity) if you know the final size.
```java
package blog.academy.javapro;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Arrays;
public class ArrayListUsage {
public static void main(String[] args) {
List<Integer> numbers = new ArrayList<>();
// Adding elements
numbers.add(10);
numbers.add(20);
numbers.add(1, 15); // Insert at index 1
// Access and modification
int value = numbers.get(0);
numbers.set(0, 100);
// Removal
numbers.remove(1); // By index
numbers.remove(Integer.valueOf(20)); // By object
// Search operations
int index = numbers.indexOf(100);
boolean exists = numbers.contains(100);
// Sublists (views, not copies)
List<Integer> sub = numbers.subList(0, 2);
// Sorting
Collections.sort(numbers); // Ascending
Collections.sort(numbers, Collections.reverseOrder()); // Descending
// Iteration
for (int i = 0; i < numbers.size(); i++) {
System.out.println(numbers.get(i));
}
for (int num : numbers) {
System.out.println(num);
}
// Bulk operations
numbers.clear();
numbers.addAll(Arrays.asList(1, 2, 3, 4, 5));
}
}
```

| Operation | Time Complexity |
|---|---|
| Get/Set | O(1) |
| Add (end) | O(1) amortized |
| Add (middle) | O(n) |
| Remove | O(n) |
| Search | O(n) |
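The pre-sizing advice above looks like this in practice (class name and size are ours); the capacity argument reserves the backing array up front, skipping the intermediate grow-and-copy passes:

```java
package blog.academy.javapro;

import java.util.ArrayList;
import java.util.List;

public class PreSizedList {
    public static void main(String[] args) {
        int expected = 1_000_000;
        // Capacity hint: the backing array starts at the final size
        List<Integer> numbers = new ArrayList<>(expected);
        for (int i = 0; i < expected; i++) {
            numbers.add(i); // No reallocation along the way
        }
        System.out.println(numbers.size()); // 1000000
    }
}
```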
PriorityQueue implements a min-heap by default, where the smallest element is always at the head. Elements are dequeued
in sorted order, making it perfect for "kth largest" problems and task scheduling. For a max-heap,
use Collections.reverseOrder() as a comparator.
```java
package blog.academy.javapro;
import java.util.PriorityQueue;
import java.util.Collections;
import java.util.Map;
import java.util.HashMap;
public class PriorityQueueExamples {
public static void main(String[] args) {
// Min-heap (default)
PriorityQueue<Integer> minHeap = new PriorityQueue<>();
// Max-heap
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
// Adding elements
minHeap.add(10);
minHeap.add(5);
minHeap.add(20);
// Peek returns smallest without removing
System.out.println(minHeap.peek()); // 5
// Poll removes and returns smallest
System.out.println(minHeap.poll()); // 5
System.out.println(minHeap.poll()); // 10
// Custom comparator for pairs
Map<String, Integer> data = new HashMap<>();
data.put("apple", 5);
data.put("banana", 3);
data.put("cherry", 5);
PriorityQueue<Map.Entry<String, Integer>> pq = new PriorityQueue<>((a, b) -> {
// Sort by value, then by key
if (a.getValue().equals(b.getValue())) {
return a.getKey().compareTo(b.getKey());
}
return Integer.compare(a.getValue(), b.getValue()); // Overflow-safe, unlike subtraction
});
pq.addAll(data.entrySet());
while (!pq.isEmpty()) {
Map.Entry<String, Integer> entry = pq.poll();
System.out.println(entry.getKey() + ": " + entry.getValue());
}
}
}
```

PriorityQueue doesn't support direct indexing or efficient random removal. The heap property only guarantees the top element is the minimum—iteration order is undefined.
| Operation | Time Complexity |
|---|---|
| Insert | O(log n) |
| Peek | O(1) |
| Poll | O(log n) |
| Remove (arbitrary) | O(n) |
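The "kth largest" pattern mentioned above keeps a min-heap capped at k elements, so the head is always the kth largest seen so far. A sketch (class and method names are ours, assuming 1 <= k <= nums.length):

```java
package blog.academy.javapro;

import java.util.PriorityQueue;

public class KthLargest {
    static int kthLargest(int[] nums, int k) {
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int num : nums) {
            minHeap.offer(num);
            if (minHeap.size() > k) {
                minHeap.poll(); // Evict the smallest so only the k largest remain
            }
        }
        return minHeap.peek(); // Head of a k-sized min-heap = kth largest, O(n log k) total
    }

    public static void main(String[] args) {
        System.out.println(kthLargest(new int[]{3, 2, 1, 5, 6, 4}, 2)); // 5
    }
}
```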
Queue is an interface defining FIFO (first-in-first-out) access; LinkedList is a common implementation with efficient head and tail operations. Stack provides LIFO (last-in-first-out) access but extends Vector and inherits its synchronization overhead—prefer ArrayDeque for better single-threaded performance in modern code.
```java
package blog.academy.javapro;
import java.util.Queue;
import java.util.LinkedList;
import java.util.Stack;
import java.util.Deque;
import java.util.ArrayDeque;
public class QueueStackOperations {
public static void main(String[] args) {
// Queue operations (FIFO)
Queue<Integer> queue = new LinkedList<>();
queue.add(1);
queue.offer(2); // Returns false if capacity-constrained
System.out.println(queue.peek()); // 1, doesn't remove
System.out.println(queue.poll()); // 1, removes
System.out.println(queue.element()); // Throws exception if empty
// Stack operations (LIFO)
Stack<Integer> stack = new Stack<>();
stack.push(10);
stack.push(20);
System.out.println(stack.peek()); // 20
System.out.println(stack.pop()); // 20
System.out.println(stack.search(10)); // 1-based position from top
// Modern approach: Deque as stack
Deque<Integer> deque = new ArrayDeque<>();
deque.push(1);
deque.push(2);
System.out.println(deque.pop()); // 2
// Deque as queue
deque.addLast(3);
deque.addLast(4);
System.out.println(deque.removeFirst()); // 3
}
}
```

Queue's add() throws exceptions on failure while offer() returns false—this distinction matters for bounded queues.
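To see that distinction, here's a sketch using ArrayBlockingQueue from java.util.concurrent as one capacity-constrained Queue implementation (the class name BoundedQueueDemo is ours):

```java
package blog.academy.javapro;

import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        Queue<Integer> bounded = new ArrayBlockingQueue<>(1); // Capacity of one

        System.out.println(bounded.offer(1)); // true: accepted
        System.out.println(bounded.offer(2)); // false: full, fails quietly

        try {
            bounded.add(2); // add() throws when the queue is full
        } catch (IllegalStateException e) {
            System.out.println("add() threw: " + e.getMessage()); // "Queue full"
        }
    }
}
```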
| Operation | Time Complexity |
|---|---|
| Add/Offer | O(1) |
| Remove/Poll | O(1) |
| Peek | O(1) |
| Push/Pop (Stack) | O(1) |
LinkedList excels at head/tail operations but suffers on random access. It implements both List and Deque interfaces, making it versatile for queue and deque operations. Choose LinkedList only when you need constant-time insertions at both ends—for most use cases, ArrayList's cache locality and O(1) indexing outweigh the O(n) cost of occasional insertions.
```java
package blog.academy.javapro;
import java.util.LinkedList;
public class LinkedListOperations {
public static void main(String[] args) {
LinkedList<String> list = new LinkedList<>();
// List interface methods
list.add("middle");
list.addFirst("start");
list.addLast("end");
// Deque operations
list.offerFirst("new start");
list.offerLast("new end");
// Access
System.out.println(list.getFirst());
System.out.println(list.getLast());
System.out.println(list.get(2)); // O(n) traversal
// Removal
list.removeFirst();
list.removeLast();
list.remove(1); // By index
// Update
list.set(0, "updated");
// Search
int index = list.indexOf("middle");
boolean contains = list.contains("end");
}
}
```

| Operation | Time Complexity |
|---|---|
| AddFirst/AddLast | O(1) |
| RemoveFirst/RemoveLast | O(1) |
| Get/Set by index | O(n) |
| Search | O(n) |
Binary trees appear everywhere—BSTs for sorted data, heaps for priority management, expression trees for parsing. Tree problems test recursion and structural thinking. The key distinction between traversal types is when you process the root node relative to its children.
```java
package blog.academy.javapro;
import java.util.Queue;
import java.util.LinkedList;
import java.util.List;
import java.util.ArrayList;
class TreeNode {
int val;
TreeNode left, right;
TreeNode(int val) {
this.val = val;
}
}
class NaryNode {
int val;
List<NaryNode> children;
NaryNode(int val) {
this.val = val;
this.children = new ArrayList<>();
}
}
public class TreeTraversals {
// DFS: Preorder (Root -> Left -> Right)
public void preorder(TreeNode node) {
if (node == null) return;
System.out.print(node.val + " ");
preorder(node.left);
preorder(node.right);
}
// DFS: Inorder (Left -> Root -> Right) - gives sorted order for BST
public void inorder(TreeNode node) {
if (node == null) return;
inorder(node.left);
System.out.print(node.val + " ");
inorder(node.right);
}
// DFS: Postorder (Left -> Right -> Root)
public void postorder(TreeNode node) {
if (node == null) return;
postorder(node.left);
postorder(node.right);
System.out.print(node.val + " ");
}
// BFS: Level-order traversal
public void bfs(TreeNode root) {
if (root == null) return;
Queue<TreeNode> queue = new LinkedList<>();
queue.offer(root);
while (!queue.isEmpty()) {
int levelSize = queue.size();
for (int i = 0; i < levelSize; i++) {
TreeNode node = queue.poll();
System.out.print(node.val + " ");
if (node.left != null) queue.offer(node.left);
if (node.right != null) queue.offer(node.right);
}
System.out.println(); // Separate levels
}
}
public static void main(String[] args) {
TreeNode root = new TreeNode(1);
root.left = new TreeNode(2);
root.right = new TreeNode(3);
root.left.left = new TreeNode(4);
root.left.right = new TreeNode(5);
TreeTraversals traversals = new TreeTraversals();
System.out.print("Preorder: ");
traversals.preorder(root);
System.out.println();
System.out.print("Inorder: ");
traversals.inorder(root);
System.out.println();
System.out.print("Postorder: ");
traversals.postorder(root);
System.out.println();
System.out.println("BFS:");
traversals.bfs(root);
}
}
```

BFS guarantees the shortest path in unweighted trees; DFS suits backtracking and path problems. In the table below, h is the tree height and w is the maximum level width.
| Traversal | Use Case | Space | Pattern |
|---|---|---|---|
| Preorder | Tree copying, prefix expressions | O(h) | Root first |
| Inorder | BST sorted order | O(h) | Left-Root-Right |
| Postorder | Tree deletion, postfix expressions | O(h) | Children first |
| BFS | Level-order, shortest path | O(w) | Queue-based |
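Recursive DFS can overflow the call stack on very deep trees; an explicit stack produces the same preorder sequence iteratively. A sketch reusing the TreeNode class from the traversal example above (the class name IterativePreorder is ours):

```java
package blog.academy.javapro;

import java.util.ArrayDeque;
import java.util.Deque;

public class IterativePreorder {
    // Same visit order as the recursive preorder above, O(h) stack space
    static void preorder(TreeNode root) {
        if (root == null) return;
        Deque<TreeNode> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            TreeNode node = stack.pop();
            System.out.print(node.val + " ");
            // Push right first so left is popped, and therefore visited, first
            if (node.right != null) stack.push(node.right);
            if (node.left != null) stack.push(node.left);
        }
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        preorder(root); // 1 2 3
        System.out.println();
    }
}
```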
Graphs model networks, dependencies, and relationships. Adjacency lists scale better than matrices for sparse graphs, which covers most real-world cases. Use a HashMap to map each vertex to its list of neighbors for flexible vertex naming.
```java
package blog.academy.javapro;
import java.util.*;
class Graph {
private Map<Integer, List<Integer>> adjList = new HashMap<>();
public void addEdge(int src, int dest, boolean undirected) {
adjList.putIfAbsent(src, new ArrayList<>());
adjList.putIfAbsent(dest, new ArrayList<>());
adjList.get(src).add(dest);
if (undirected) {
adjList.get(dest).add(src);
}
}
public void bfs(int start) {
Queue<Integer> queue = new LinkedList<>();
Set<Integer> visited = new HashSet<>();
queue.add(start);
visited.add(start);
while (!queue.isEmpty()) {
int vertex = queue.poll();
System.out.print(vertex + " ");
for (int neighbor : adjList.getOrDefault(vertex, new ArrayList<>())) {
if (!visited.contains(neighbor)) {
visited.add(neighbor);
queue.add(neighbor);
}
}
}
}
public void dfs(int vertex, Set<Integer> visited) {
if (visited.contains(vertex)) return;
visited.add(vertex);
System.out.print(vertex + " ");
for (int neighbor : adjList.getOrDefault(vertex, new ArrayList<>())) {
dfs(neighbor, visited);
}
}
public static void main(String[] args) {
Graph g = new Graph();
g.addEdge(0, 1, true);
g.addEdge(0, 2, true);
g.addEdge(1, 3, true);
g.addEdge(2, 3, true);
System.out.print("BFS from 0: ");
g.bfs(0);
System.out.println();
System.out.print("DFS from 0: ");
g.dfs(0, new HashSet<>());
System.out.println();
}
}
```

Adjacency lists win for sparse graphs where E << V². Matrices suit dense graphs or when you need O(1) edge existence checks.
| Type | Space | Edge Check | Neighbor Iteration |
|---|---|---|---|
| Adjacency List | O(V + E) | O(degree) | O(degree) |
| Adjacency Matrix | O(V²) | O(1) | O(V) |
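For contrast with the adjacency list above, a minimal matrix sketch (the class name is ours); the O(1) edge check is a single array index, while neighbor iteration always scans a full row:

```java
package blog.academy.javapro;

public class AdjacencyMatrix {
    public static void main(String[] args) {
        int vertices = 4;
        boolean[][] adj = new boolean[vertices][vertices]; // O(V²) space, even when sparse

        // Undirected edges: set both directions
        adj[0][1] = adj[1][0] = true;
        adj[0][2] = adj[2][0] = true;

        System.out.println(adj[0][1]); // true: O(1) edge existence check
        System.out.println(adj[1][2]); // false

        // Neighbor iteration is O(V) regardless of degree
        for (int v = 0; v < vertices; v++) {
            if (adj[0][v]) System.out.println("0 -> " + v);
        }
    }
}
```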
Dijkstra's algorithm finds shortest paths in weighted graphs with non-negative edges. It maintains a priority queue of nodes sorted by distance, processing the closest unvisited node first and updating neighbor distances when shorter paths are found. Union-Find detects cycles and manages connected components efficiently using path compression and rank-based union for nearly constant-time operations.
```java
package blog.academy.javapro;
import java.util.*;
class Pair<K, V> {
private K key;
private V value;
public Pair(K key, V value) {
this.key = key;
this.value = value;
}
public K getKey() {
return key;
}
public V getValue() {
return value;
}
}
class GraphAlgorithms {
// Dijkstra's shortest path
public Map<Integer, Integer> dijkstra(Map<Integer, List<Pair<Integer, Integer>>> adj, int start) {
PriorityQueue<Pair<Integer, Integer>> pq = new PriorityQueue<>(
Comparator.comparingInt(Pair::getValue)
);
Map<Integer, Integer> dist = new HashMap<>();
pq.offer(new Pair<>(start, 0));
dist.put(start, 0);
while (!pq.isEmpty()) {
Pair<Integer, Integer> current = pq.poll();
int node = current.getKey();
int d = current.getValue();
if (d > dist.getOrDefault(node, Integer.MAX_VALUE)) continue;
for (Pair<Integer, Integer> neighbor : adj.getOrDefault(node, new ArrayList<>())) {
int nextNode = neighbor.getKey();
int weight = neighbor.getValue();
int newDist = d + weight;
if (newDist < dist.getOrDefault(nextNode, Integer.MAX_VALUE)) {
dist.put(nextNode, newDist);
pq.offer(new Pair<>(nextNode, newDist));
}
}
}
return dist;
}
public static void main(String[] args) {
Map<Integer, List<Pair<Integer, Integer>>> graph = new HashMap<>();
graph.put(0, Arrays.asList(new Pair<>(1, 4), new Pair<>(2, 1)));
graph.put(1, Arrays.asList(new Pair<>(3, 1)));
graph.put(2, Arrays.asList(new Pair<>(1, 2), new Pair<>(3, 5)));
GraphAlgorithms algo = new GraphAlgorithms();
Map<Integer, Integer> distances = algo.dijkstra(graph, 0);
distances.forEach((node, dist) ->
System.out.println("Distance to " + node + ": " + dist)
);
}
}
class UnionFind {
private int[] parent;
private int[] rank;
public UnionFind(int n) {
parent = new int[n];
rank = new int[n];
for (int i = 0; i < n; i++) {
parent[i] = i;
}
}
public int find(int x) {
if (parent[x] != x) {
parent[x] = find(parent[x]); // Path compression
}
return parent[x];
}
public void union(int x, int y) {
int px = find(x);
int py = find(y);
if (px == py) return;
if (rank[px] < rank[py]) {
parent[px] = py;
} else if (rank[px] > rank[py]) {
parent[py] = px;
} else {
parent[py] = px;
rank[px]++;
}
}
public static void main(String[] args) {
UnionFind uf = new UnionFind(5);
uf.union(0, 1);
uf.union(1, 2);
System.out.println("0 and 2 connected: " + (uf.find(0) == uf.find(2)));
System.out.println("0 and 3 connected: " + (uf.find(0) == uf.find(3)));
}
}
```

In the table below, α(n) is the inverse Ackermann function—effectively constant for all practical purposes.
| Algorithm | Time | Space | Use Case |
|---|---|---|---|
| Dijkstra | O((V+E) log V) | O(V) | Shortest path, non-negative weights |
| Union-Find | O(α(n)) per op | O(n) | Cycle detection, MST, connected components |
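The cycle-detection use case follows directly from the UnionFind class above: if an edge's endpoints already share a root, adding that edge would close a cycle. A sketch (the class name and edge list are ours):

```java
package blog.academy.javapro;

public class CycleDetection {
    // Reuses the UnionFind class defined above
    static boolean hasCycle(int vertices, int[][] edges) {
        UnionFind uf = new UnionFind(vertices);
        for (int[] edge : edges) {
            // Endpoints already connected: this edge closes a cycle
            if (uf.find(edge[0]) == uf.find(edge[1])) {
                return true;
            }
            uf.union(edge[0], edge[1]);
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] triangle = {{0, 1}, {1, 2}, {2, 0}}; // Edge 2-0 closes the loop
        System.out.println(hasCycle(3, triangle)); // true
    }
}
```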
These patterns appear repeatedly in interview problems. Recognizing them saves you from reinventing solutions.
Two pointers work well for palindrome checks and array partitioning. Sliding window handles dynamic range problems like subarray sums. Frequency counting with HashMap solves character or element occurrence questions. Monotonic stack finds next greater or smaller elements efficiently.
```java
package blog.academy.javapro;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;
public class CommonPatterns {
// Two Pointers: Palindrome check
public boolean isPalindrome(String s) {
int left = 0, right = s.length() - 1;
while (left < right) {
if (s.charAt(left++) != s.charAt(right--)) {
return false;
}
}
return true;
}
// Sliding Window: Max sum subarray of size k
public int maxSum(int[] arr, int k) {
int sum = 0, maxSum = 0;
for (int i = 0; i < k; i++) sum += arr[i];
maxSum = sum;
for (int i = k; i < arr.length; i++) {
sum += arr[i] - arr[i - k];
maxSum = Math.max(maxSum, sum);
}
return maxSum;
}
// Frequency Counting: Character frequency map
public Map<Character, Integer> frequencies(String s) {
Map<Character, Integer> freq = new HashMap<>();
for (char c : s.toCharArray()) {
freq.put(c, freq.getOrDefault(c, 0) + 1);
}
return freq;
}
// Monotonic Stack: Next greater element
public int[] nextGreater(int[] arr) {
int[] result = new int[arr.length];
Stack<Integer> stack = new Stack<>();
for (int i = arr.length - 1; i >= 0; i--) {
while (!stack.isEmpty() && stack.peek() <= arr[i]) {
stack.pop();
}
result[i] = stack.isEmpty() ? -1 : stack.peek();
stack.push(arr[i]);
}
return result;
}
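    // Sliding Window, variable size (our illustrative addition, assuming ASCII input):
    // longest substring without repeating characters. The window grows on the right
    // and the left edge jumps past any duplicate, the shape behind most
    // "longest substring" questions. E.g. longestUniqueSubstring("abcabcbb") == 3.
    public int longestUniqueSubstring(String s) {
        int[] lastSeen = new int[128]; // lastSeen[c] = index after c's previous occurrence
        int left = 0, best = 0;
        for (int right = 0; right < s.length(); right++) {
            char c = s.charAt(right);
            left = Math.max(left, lastSeen[c]); // Shrink: skip past the previous c
            best = Math.max(best, right - left + 1);
            lastSeen[c] = right + 1;
        }
        return best;
    }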
public static void main(String[] args) {
CommonPatterns patterns = new CommonPatterns();
System.out.println(patterns.isPalindrome("racecar"));
System.out.println(patterns.maxSum(new int[]{1, 4, 2, 10, 23, 3, 1, 0, 20}, 4));
}
}
```

This cheat sheet covers the data structures you'll reach for in coding interviews—from primitives through graphs. The key insight is matching the problem's access patterns to the right structure: constant-time lookups suggest HashMap, order maintenance points to TreeMap, and priority-based processing calls for PriorityQueue.
Time complexity matters. An O(n²) solution might pass small test cases but fail at scale. Understanding when HashMap degrades to O(n) or why ArrayList's O(1) append is amortized helps you reason about performance. The Collections Framework provides well-tested implementations—don't reimplement TreeMap unless you're proving understanding.
Algorithm patterns like two-pointers and sliding window solve entire categories of problems. Recognizing that a "longest substring" question fits the sliding window pattern saves the time you'd spend deriving a solution from scratch. Graph traversals—BFS for shortest paths, DFS for cycle detection—follow predictable templates once you understand the fundamental structures.
Java's type system and library design encourage clear interfaces. List abstracts over ArrayList and LinkedList, Map over HashMap and TreeMap. Writing to interfaces keeps your code flexible while specific implementations optimize for your actual usage—random access versus sequential scanning, sorted order versus insertion order. The syntax becomes second nature through practice, but this reference exists for the exact method signature when you need it under pressure.
Join our Java Bootcamp to master enterprise-level development, or start with our free Core Java course to build your foundation.